
There’s a lot of noise right now about how generative AIs like ChatGPT and Bard are going to revolutionize various aspects of the web, but companies targeting narrower verticals are already experiencing success. Writer is one such company, and it just announced a new trio of large language models to power its enterprise copy assistant.

The company lets customers fine-tune these models on their own content and style guides, after which the AI can write, help write, or edit copy so that it meets internal standards. More than just catching typos and recommending the preferred word, Writer’s new models can evaluate style and write content themselves, even doing a bit of fact-checking when they’re done.

But the real draw is that the whole thing can be done internally, from fine-tuning to hosting, at least for the two smaller models in the Palmyra series.

“No enterprise leader wants their data to be fodder for someone else’s foundation model, including ours,” said CEO May Habib in a press release. “We give customers all the benefits of the AI application layer without any of the risks of other AI applications and commercial models. Enterprise leaders want to invest in solutions that will essentially give them their own LLM.”

Palmyra comes in three sizes: 128 million, 5 billion, and 20 billion parameters for Small, Base, and Large, respectively. They’re trained on business and marketing writing, not Reddit posts and Project Gutenberg, so there are fewer surprises to begin with. Then you load up its maw with the last ten years of annual reports, financials, blog posts, and so on to make it yours. (This and any derived data do not filter back to Writer, to be clear.)

Having written my share of enterprise and marketing copy, I can say this isn’t the most exciting of applications. But what it lacks in thrills it makes up for in practicality: companies need to do lots of this kind of writing and editing, and tend to actually pay for it. Writer already hooks into lots of development and productivity suites, so there’s not much friction added.

Image: Mockup of Writer generating a product description.

The business model is similar to that of other generative AI companies: you get set up and fine-tuned for free, then pay a penny per thousand tokens, which gets you about 750 words. (This article is just over 500, as a quick reference.)
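To put that pricing in perspective, here’s a minimal back-of-the-envelope sketch, assuming the figures above ($0.01 per 1,000 tokens and roughly 750 words per 1,000 tokens; real token-to-word ratios vary with the text):

```python
# Rough cost estimate based on the pricing described above.
# Assumptions: $0.01 per 1,000 tokens, ~750 words per 1,000 tokens.

PRICE_PER_1K_TOKENS = 0.01   # dollars
WORDS_PER_1K_TOKENS = 750    # approximate; actual tokenization varies

def estimated_cost(word_count: int) -> float:
    """Estimate generation cost in dollars for a given word count."""
    tokens = word_count / WORDS_PER_1K_TOKENS * 1000
    return tokens / 1000 * PRICE_PER_1K_TOKENS

# An article of roughly this length (~500 words) works out to well under a cent:
print(f"${estimated_cost(500):.4f}")  # ~$0.0067
```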

Alternatively, you can self-host the Small or Base models free of charge if you have the compute.

A few dozen companies have been using the models since late last year, and we haven’t heard about any egregious problems like we did on day one of Microsoft and Google’s attempts at popularizing generative AI… so that’s a good sign. This is the success I mentioned earlier. While ChatGPT is certainly impressive, as a generalist or dilettante AI it’s hard to say what it’s actually capable of being used for. The next year or two will see more targeted plays like Writer’s while Microsoft and Google kick the tires on their latest toy.


