Anthropic AI runs its own blog under human supervision

Anthropic has a blog for its artificial intelligence.

A week ago, Anthropic quietly launched Claude Explains, a new page on its website that is primarily created by the company’s family of artificial intelligence models, Claude. The blog, filled with posts on technical topics related to various Claude use cases (e.g., “Simplify complex codebases with Claude”), is intended to be a showcase for Claude’s writing abilities.

It is currently unclear exactly how much of Claude’s raw output ends up in the posts on Claude Explains. According to an Anthropic spokesperson, the blog is overseen by the company’s “experts and editorial teams,” who “enhance” Claude’s drafts with “insights, practical examples, and […] contextual knowledge.”

“It’s not just a vanilla result of Claude’s work – the editing process requires human expertise and goes through iterations,” the spokesperson said. “From a technical standpoint, Claude Explains demonstrates a collaborative approach, with Claude [creating] educational content and our team reviewing, refining, and improving it.”

None of this is obvious from Claude Explains’ homepage, which reads: “Welcome to a little corner of the Anthropic Universe where Claude writes about every topic under the sun.” Readers could easily be misled into thinking that Claude writes the blog from start to finish.

Anthropic says it sees Claude Explains as “a demonstration of how human expertise and AI capabilities can work together,” starting with educational resources.

“Claude Explains is the first example of how teams can use artificial intelligence to improve their work and deliver more value to their users,” a company spokesperson said. “Instead of replacing human expertise, we’re showing how AI can amplify what human experts can achieve […] We plan to cover a wide range of topics, from creative writing to data analysis to business strategy.”

Anthropic’s experiment with AI-generated text, which comes just a few months after rival OpenAI announced that it had developed a model tailored for creative writing, is far from the first of its kind. Meta’s Mark Zuckerberg has said he wants to develop an end-to-end AI-powered advertising tool, and OpenAI CEO Sam Altman recently predicted that AI will one day be able to handle “95% of what marketers use agencies, strategists, and creative professionals to do today.”

Elsewhere, publishers have been piloting AI-assisted newswriting tools in an effort to increase productivity and, in some cases, reduce the need to hire employees. Gannett has been particularly aggressive, introducing AI-generated sports recaps and headline summaries. In April, Bloomberg added AI-generated summaries to its top articles. And Business Insider, which laid off 21% of its staff last week, urged its writers to lean on AI tools.

Even older publications are investing in AI, or at least making vague hints that they might do so. The New York Times is reportedly encouraging employees to use AI to suggest edits, headlines, and even questions during interviews, and The Washington Post is rumored to be developing an “AI article editor” called Ember.

However, many of these efforts have stumbled, largely because today’s AI tends to fabricate facts with confidence. According to Semafor, Business Insider was forced to apologize to employees after recommending books that turned out not to exist and may have been AI-generated. Bloomberg had to correct dozens of AI-generated article summaries. And G/O Media’s error-riddled AI-written articles, published over the objections of editors, drew widespread ridicule.

An Anthropic spokesperson noted that the company continues to hire employees in marketing, content, and editing, as well as in “many other areas that are related to writing,” even as it plunges into AI-assisted blog writing. Take it for what it’s worth.
