Amid OpenAI Chaos, Rival Anthropic Announces Update to Claude Chatbot

While its most visible competitor remains preoccupied with executive upheaval and a potential employee walkout, Anthropic has released a new version of its "Claude" AI chatbot.

Version 2.1 of Claude is now available, Anthropic announced Tuesday. 

The timing coincides with an ongoing and very public breakdown at Anthropic rival OpenAI, which has, in the span of four days, been dealt the loss of its CEO Sam Altman to Microsoft, the appointment of two different interim CEOs, a potential mass exodus of employees and, now, uncertainty as Altman reportedly ponders a return. (UPDATE: Altman did, in fact, return. Full story here.)

Anthropic, too, looks to have been caught in the blast radius of the OpenAI implosion; a report by The Information alleges that OpenAI's board, facing blowback for firing Altman, had approached Anthropic CEO Dario Amodei with the idea of merging the two companies. 

So far, those discussions don't seem to have come to anything. Instead, Anthropic updated Claude, its answer to OpenAI's popular ChatGPT bot.

Anthropic touts itself as an "AI safety and research company" and the Claude chatbot as "helpful, honest, and harmless." Anthropic built Claude using what it calls a "constitutional AI" training method, which comprises two learning stages -- supervised and reinforcement -- that are described as follows in this research abstract:

In the supervised phase we sample from an initial model, then generate self-critiques and revisions, and then finetune the original model on revised responses. In the RL phase, we sample from the finetuned model, use a model to evaluate which of the two samples is better, and then train a preference model from this dataset of AI preferences. We then train with RL using the preference model as the reward signal, i.e. we use 'RL from AI Feedback' (RLAIF). As a result we are able to train a harmless but non-evasive AI assistant that engages with harmful queries by explaining its objections to them. 

"We experiment with methods for training a harmless AI assistant through self-improvement, without any human labels identifying harmful outputs," Anthropic said. "The only human oversight is provided through a list of rules or principles, and so we refer to the method as 'Constitutional AI.'"
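The two phases quoted above can be sketched in toy form. Everything below is a stand-in: function names like `critique` and `revise`, the placeholder principles, and the string-based "model" are illustrative inventions, not Anthropic's actual code or training pipeline.

```python
# Toy sketch of the two constitutional-AI phases described in the abstract.
# All model calls are stubs; only the control flow mirrors the description.

PRINCIPLES = [
    "Avoid harmful content.",
    "Explain objections rather than refusing silently.",
]

def initial_model(prompt):
    # Stand-in for sampling a response from the initial model.
    return f"draft answer to: {prompt}"

def critique(response, principle):
    # Stand-in: the model critiques its own output against one principle.
    return f"critique of '{response}' under '{principle}'"

def revise(response, critique_text):
    # Stand-in: the model rewrites its output to address the critique.
    return f"revised({response})"

def supervised_phase(prompts):
    """Sample, self-critique, revise; revised responses become finetuning data."""
    dataset = []
    for prompt in prompts:
        response = initial_model(prompt)
        for principle in PRINCIPLES:
            response = revise(response, critique(response, principle))
        dataset.append((prompt, response))
    return dataset

def rl_phase(finetuned_sample, prefer):
    """Draw two samples and let an AI preference model pick the better (RLAIF)."""
    a, b = finetuned_sample("query"), finetuned_sample("query")
    return a if prefer(a, b) else b
```

The key point the sketch captures is that no human labels appear anywhere: the critiques, revisions, and preferences all come from models, with humans contributing only the list of principles.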

Version 2.1 of Claude features developer-focused improvements, including beta support for tool use, which lets the model call developer-defined functions and APIs.
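Tool use generally means the model emits a structured request that the developer's code routes to a real function. The dispatcher below illustrates that pattern in generic form; the tool names, the JSON shape, and the `handle_tool_call` helper are hypothetical examples, not Anthropic's actual API.

```python
# Generic tool-dispatch pattern of the kind an API tool-use feature enables.
# The registry and call format are illustrative, not Anthropic's schema.
import json

TOOLS = {
    "get_time": lambda args: "12:00",              # stub tool
    "add": lambda args: str(args["a"] + args["b"]),  # stub tool
}

def handle_tool_call(call_json):
    """Route a model-emitted tool call (as JSON) to the matching function."""
    call = json.loads(call_json)
    tool = TOOLS[call["name"]]
    return tool(call.get("arguments", {}))
```

In a real integration, the tool's return value would be sent back to the model so it can incorporate the result into its final answer.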

Another improvement is a significant reduction in hallucinations. "Claude 2.1 has also made significant gains in honesty, with a 2x decrease in false statements compared to our previous Claude 2.0 model," according to Anthropic.

In addition, Claude 2.1 greatly expands the number of tokens it can process at a time -- as many as 200,000 tokens, which is "roughly 150,000 words, or over 500 pages of material," per Anthropic. Claude 2.0 supported just half that (100,000 tokens), while OpenAI's just-announced GPT-4 Turbo model supports 128,000 tokens.
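Anthropic's figures check out as a back-of-envelope estimate, assuming the common heuristic of roughly 0.75 English words per token and about 300 words per printed page (both approximations chosen here, not Anthropic's tokenizer math):

```python
# Back-of-envelope check of the 200K-token context window figures.
TOKENS = 200_000
WORDS_PER_TOKEN = 0.75   # rough heuristic for English prose (assumption)
WORDS_PER_PAGE = 300     # typical printed page (assumption)

words = int(TOKENS * WORDS_PER_TOKEN)   # ~150,000 words
pages = words / WORDS_PER_PAGE          # ~500 pages
```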

Claude's 200,000 token limit is an "industry first," according to Anthropic.     

"Our users can now upload technical documentation like entire codebases, financial statements like S-1s, or even long literary works like The Iliad or The Odyssey. By being able to talk to large bodies of content or data, Claude can summarize, perform Q&A, forecast trends, compare and contrast multiple documents, and much more," the company said. 

The 200,000-token limit is available only to users of Claude's paid tier, Claude Pro, which costs $20 per month.

About the Author

Gladys Rama (@GladysRama3) is the editorial director of Converge360.