Anthropic Releases Opus 4.7 AI Model with Focus on Coding, Visual Tasks, and Cybersecurity Guardrails

On Wednesday, Anthropic launched Claude Opus 4.7, an updated large language model that it says outperforms its predecessor on software engineering tasks, image analysis, and multi-step autonomous work, while maintaining pricing at $5 per million input tokens and $25 per million output tokens.

The model is now generally available across Anthropic's own products and through its API, as well as on Amazon Bedrock, Google Cloud's Vertex AI, and Microsoft Foundry.

Anthropic said the upgrade delivers the most pronounced gains on demanding coding tasks. Users report being able to hand off difficult coding work that previously required close supervision, with the new model handling complex, long-running tasks with greater consistency and paying closer attention to instructions.

The company also said the model can verify its own outputs before reporting results to users, a behavior it described as new relative to earlier versions.

On vision, Opus 4.7 can now accept images up to 2,576 pixels on the long edge, roughly 3.75 megapixels, more than three times the resolution supported by prior Claude models.

Anthropic said this expands the model's usefulness for tasks requiring fine visual detail, including reading dense screenshots and extracting data from complex diagrams.

Perhaps the most notable aspect of the release is its role in Anthropic's broader safety rollout strategy. Last week, the company announced Project Glasswing, which highlighted both the risks and potential benefits of AI for cybersecurity, and stated that it would keep its more powerful Claude Mythos Preview model restricted while testing new cyber safeguards on less-capable systems first. Opus 4.7 is the first such model.

Anthropic said it experimented during training by selectively reducing Opus 4.7's cybersecurity capabilities and is releasing the model with automatic safeguards designed to detect and block requests that indicate prohibited or high-risk cybersecurity uses.

The company added that findings from this deployment will inform its eventual broader release of what it calls "Mythos-class" models. Security professionals seeking to use the new model for legitimate purposes, such as vulnerability research or penetration testing, can apply through a new Cyber Verification Program.

Regarding alignment, Anthropic's evaluations show that Opus 4.7 exhibits low rates of concerning behavior, such as deception, sycophancy, and cooperation with misuse, and performs better than its predecessor in honesty and resistance to malicious prompt-injection attacks. However, the company acknowledged the model is modestly weaker in some areas, including a tendency to give overly detailed harm-reduction advice on controlled substances.

Anthropic's internal alignment assessment described the model as "largely well-aligned and trustworthy, though not fully ideal in its behavior," and noted that Mythos Preview remains the best-aligned model the company has trained.

Developers upgrading from Opus 4.6 should account for two cost-related changes. Opus 4.7 uses an updated tokenizer that can map the same input to roughly 1.0 to 1.35 times as many tokens, depending on content type. The model also produces more output tokens at higher effort levels, particularly in later turns of agentic tasks, because it engages in more reasoning.
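As a rough illustration of the migration math (not an official calculator), the pricing quoted above combined with the up-to-1.35x tokenizer multiplier bounds the input-cost change for the same text:

```python
# Rough input-cost estimate for an Opus 4.6 -> 4.7 migration.
# Uses only figures stated in the article: $5 per million input
# tokens and a new tokenizer that can emit 1.0 to 1.35 times as
# many tokens for the same input.

INPUT_PRICE_PER_M = 5.00        # USD per million input tokens
TOKENIZER_MULTIPLIER_MAX = 1.35  # worst-case token inflation

def input_cost_range(old_input_tokens: int) -> tuple[float, float]:
    """Best-case and worst-case USD input cost for the same text on Opus 4.7."""
    low = old_input_tokens * INPUT_PRICE_PER_M / 1_000_000
    high = old_input_tokens * TOKENIZER_MULTIPLIER_MAX * INPUT_PRICE_PER_M / 1_000_000
    return low, high

# A workload that measured 2M input tokens under the old tokenizer:
low, high = input_cost_range(2_000_000)
print(f"${low:.2f} to ${high:.2f}")  # $10.00 to $13.50
```

Output-side costs are harder to bound this way, since the extra reasoning tokens at higher effort levels depend on the task rather than a fixed multiplier.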

Anthropic said users can manage token consumption through an effort parameter, task budgets, or by prompting the model to be more concise.

Alongside the model release, Anthropic introduced a new "xhigh" effort level, sitting between the existing "high" and "max" settings, giving developers finer control over the tradeoff between reasoning depth and latency. In Claude Code, the default effort level has been raised to "xhigh" for all plans.
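A minimal sketch of how a developer might select among these effort levels when assembling a request. The payload shape, the field name "effort", and the model identifier are illustrative assumptions based on the article's description, not a documented API schema:

```python
# Hypothetical request body illustrating the effort levels named in the
# article ("high", "xhigh", "max"). Field names and the model identifier
# are assumptions for illustration only.

EFFORT_LEVELS = {"high", "xhigh", "max"}  # "xhigh" sits between "high" and "max"

def build_request(prompt: str, effort: str = "xhigh") -> dict:
    """Assemble a Messages-style request with an effort level attached."""
    if effort not in EFFORT_LEVELS:
        raise ValueError(f"unknown effort level: {effort!r}")
    return {
        "model": "claude-opus-4-7",  # assumed identifier
        "effort": effort,            # trades reasoning depth against latency
        "max_tokens": 4096,
        "messages": [{"role": "user", "content": prompt}],
    }

req = build_request("Review this diff for bugs.", effort="high")
```

Lower effort levels would reduce reasoning-token consumption in long agentic runs, at the cost of less thorough multi-step work.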

The company also launched task budgets in public beta on its API platform, and added a new "/ultrareview" command in Claude Code that reads through code changes and flags bugs and design issues.

About the Author

John K. Waters is the editor in chief of a number of Converge360.com sites, with a focus on high-end development, AI, and future tech. He's been writing about cutting-edge technologies and the culture of Silicon Valley for more than two decades, and he's written more than a dozen books. He also co-scripted the documentary film Silicon Valley: A 100 Year Renaissance, which aired on PBS. He can be reached at [email protected].