Predictions About AI in 2026, Part 2: The Year the Vibes Got Audited
- By John K. Waters
- 01/06/2026
The first big AI wave was about awe. The second was about access. The third was about agents, everywhere, doing things on your behalf, quietly, inside the software you already use.
The fourth wave, if you believe this year's pile of forecasts, looks less like a movie trailer and more like a compliance checklist.
Across predictions from investors and analysts, corporate research teams, and academics, the through-line for 2026 is not "AGI is imminent." It's "prove it works." It's "show your work." It's "who pays for the chips." It's "who gets fired." It's "who owns the protocol." It's "who owns the data." It's "whose culture the model actually understands."
I thought I could do it in one, but the "AI in 2026" predictions just kept coming through the holiday break. There are some great insights in Part 1 ("Practical AI: Why 2026 Is the Year Intelligence Becomes Infrastructure"). Part 2 is a map of the year ahead, grouped by the kinds of futures these predictions keep circling: evaluation over evangelism, agents over chatbots, silicon over slogans, and a creeping realization that the hardest part of AI is not intelligence. It's everything around it.
The Great Cooldown: From AGI Talk to ROI Receipts
If 2024 was the year everyone became a prompt engineer and 2025 was the year everyone became a venture capitalist, several predictors argue that 2026 is the year everyone becomes an auditor.
In a widely shared Fortune roundup of predictions, Rob Toews, a venture capitalist at Radical Ventures, says that "discourse about AGI and superintelligence will become less fashionable and less common," arguing that incremental model gains and slower-than-hyped agent reliability are already deflating breathless timelines. He points to influential researchers turning more sober, including OpenAI founding members Ilya Sutskever and Andrej Karpathy. Sutskever, co-founder and chief scientist of Safe Superintelligence Inc., has estimated that AGI could take five to 20 years. Karpathy, founder of Eureka Labs, puts it at 10 years and says the ecosystem will shift its attention to "enterprise AI adoption" and to nearer-term stakes, such as job displacement.
Stanford's Human-Centered AI community foresees the same vibe shift, but with a professor's insistence on measurement. "After years of fast expansion and billion-dollar bets, 2026 may mark the moment artificial intelligence confronts its actual utility," a Stanford HAI roundup argues, describing "the era of AI evangelism" giving way to "an era of AI evaluation." James Landay, Denning co-director of Stanford HAI, puts it bluntly: "My biggest prediction? There will be no AGI (artificial general intelligence) this year." Erik Brynjolfsson, director of HAI's Digital Economy Lab, predicts "high-frequency AI economic dashboards" that track, by task and occupation, where AI is "boosting productivity, displacing workers, or creating new roles." "In 2026, arguments about AI's economic impact will finally give way to careful measurement," Brynjolfsson says.
In Snowflake's Data + AI Predictions 2026 report, CEO Sridhar Ramaswamy frames enterprise expectations as quantifiable, not philosophical: "Enterprises are already demanding that reliability be quantified, because that's what it takes to succeed in the enterprise," he says. "I mean, there is exactly one answer to 'How much money did Snowflake make yesterday?' That's not a matter of doubt or opinion."
Info-Tech Research Group's list of AI trends arrives at the same destination from the IT side: risk management becomes "the price of admission," foundational AI principles reshape organizational strategy, and leaders face a choice between "AI platform or best-of-breed AI tools."
This is the year the hype meets procurement. The question is no longer whether the model can write. It's whether it can survive legal review, security review, board review, and the cold stare of a spreadsheet that insists value is not a vibe.
Agents Grow Up, Then Get Managed
Predictions about agentic AI are everywhere, and they are less romantic than the moniker sounds. Agents are not digital beings. They are software systems that plan, execute, and iterate across multiple steps, ideally with humans watching closely enough to prevent disaster.
Snowflake bets that 2026 is when agentic AI "really takes hold in the enterprise," but it also insists the real work is organizational, not magical. The report forecasts that context windows and memory will unlock more autonomous agents. "It's a more human-like capability, to be able to remember the larger context of a situation to solve the problem at hand," Snowflake SVP Vivek Raghunathan says.
Then comes the hangover: if agents can act, someone has to define what "good" looks like. Snowflake predicts the rise of verification frameworks, human oversight boundaries, and observability so "every agent action can be audited, explained, and trusted." It even imagines a formal "AI quality control function" dedicated to monitoring and evaluation.
The report's agents are not a single monolith either. Snowflake CIO Mike Blandina likens the near-term buildout to a microservices approach. "In the next couple of years, we'll see 'micro-agents' that do a task or a few small tasks really, really well," he says. "Then we'll combine those agents like Lego blocks to do bigger tasks." Dwarak Rajagopal, Snowflake's VP of AI Engineering, adds the architecture lesson: "Multiple bounded agents will excel at specific tasks, and you'll have an orchestrator on top of them to route queries to the correct agents. That makes it much easier from a verification perspective."
That "verification perspective" is what keeps showing up in 2026 forecasts. The more the industry talks about agents, the more it talks about grading them, tracing them, and forcing them to cite their sources. It's an emerging "show your work" norm, more AI outputs arriving with context, labels, and records of human review. Also expect the expansion of "do-it-for-me" features, primarily for specific tasks rather than entire jobs.
In this version of 2026, "agentic" does not mean freewheeling autonomy. It means autonomy in a fenced yard with cameras, logs, permissions, and a human who gets paged when something looks weird.
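To make that pattern concrete, here is a minimal sketch, in Python, of the "micro-agents plus orchestrator" architecture Blandina and Rajagopal describe. The agent names, routing rule, and log format are my own illustration, not Snowflake's design; the point is that bounded agents plus a router naturally produce the auditable trail these forecasts keep demanding.

```python
import json
import time
from dataclasses import dataclass
from typing import Callable

# A "micro-agent" is a narrowly scoped unit of work: it does one
# task really well, which is what makes its behavior easy to verify.
@dataclass
class MicroAgent:
    name: str
    accepts: Callable[[str], bool]  # can this agent take the request?
    run: Callable[[str], str]       # perform the single bounded task

class Orchestrator:
    """Routes each request to the first agent that claims it, and
    records every action so it can be audited and explained later."""

    def __init__(self, agents: list[MicroAgent]):
        self.agents = agents
        self.audit_log: list[dict] = []

    def handle(self, request: str) -> str:
        for agent in self.agents:
            if agent.accepts(request):
                result = agent.run(request)
                # "Show your work": every action gets a timestamped record.
                self.audit_log.append({
                    "ts": time.time(),
                    "agent": agent.name,
                    "request": request,
                    "result": result,
                })
                return result
        raise ValueError(f"No agent accepts request: {request!r}")

# Two toy bounded agents, combined like Lego blocks.
summarizer = MicroAgent(
    "summarizer",
    lambda r: r.startswith("summarize:"),
    lambda r: r.removeprefix("summarize:").strip()[:60] + "...",
)
translator = MicroAgent(
    "translator",
    lambda r: r.startswith("translate:"),
    lambda r: "[fr] " + r.removeprefix("translate:").strip(),
)

router = Orchestrator([summarizer, translator])
print(router.handle("summarize: Q3 revenue grew on strong data cloud demand."))
print(json.dumps(router.audit_log, indent=2))  # the auditable trail
```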
The New Chokepoints: Protocols, Lock-In, and "AI Sovereignty"
Once you accept that AI systems need to work across tools, databases, and vendors, you run into a very old internet problem: standards.
Snowflake predicts that "a dominant AI protocol" will emerge to let agents communicate, and it name-checks three contenders: Anthropic's Model Context Protocol (MCP), Google's Agent-to-Agent (A2A), and IBM and the Linux Foundation's Agent Communication Protocol (ACP). Benoit Dageville, Snowflake co-founder, makes the stakes plain: "There are multiple attempts right now to create the defining protocol," he says. "The acceptance of that winning protocol will be super important."
The prize is not elegance. The prize is power. If your protocol wins, you become TCP/IP for the agent era, or at least you get to charge like you are.
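What a "defining protocol" actually standardizes is mundane by design. MCP, for instance, is built on JSON-RPC 2.0: a tool invocation is just a structured message with an agreed-upon envelope. Here is a simplified sketch in Python; the tool name and arguments are hypothetical, and a real exchange starts with an initialize handshake that this omits.

```python
import json

# A simplified MCP-style tool call. MCP messages are JSON-RPC 2.0;
# this sketch skips the initialize handshake and capability negotiation.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "query_warehouse",  # hypothetical tool exposed by a server
        "arguments": {"sql": "SELECT count(*) FROM orders"},
    },
}
print(json.dumps(request, indent=2))

# Whichever protocol wins, the prize is owning this envelope: the
# agreed shape of every agent-to-tool (or agent-to-agent) message.
```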
On the policy side, Stanford's James Landay predicts "AI sovereignty will gain huge steam" as countries seek to demonstrate independence from AI providers and from US politics. He notes that sovereignty can mean building a domestic model, or running someone else's model on domestic GPUs, so data stays within borders. In his telling, the term itself is "not well defined," and Stanford HAI is working on a project to help people understand the models.
Meanwhile, regulation continues to shape product design, whether Silicon Valley likes it or not. Snowflake's Jennifer Belissent argues that, in Europe, regulations such as the GDPR and the EU AI Act can serve as accelerants because they foster reassurance and transparency. "The mandated transparency has sparked collaboration across the organization because different teams are seeing what others are doing," she says. "It's not stifling innovation, it's accelerating it."
Practically speaking, even outside Europe, buyers ask for documentation, legal and security teams get involved earlier, and vendors build a single approach that works everywhere.
In 2026, the AI stack is not just models and GPUs. It's treaties, rules, standards, and the boring decisions that determine who gets locked into what for the next decade.
Silicon, Money, and the Accounting Term That Might Ruin a Party
If you want a prediction that feels like it was written by a finance professor who has seen some things, Rob Toews has one: depreciation schedules.
His argument is simple and unsexy. AI infrastructure is becoming one of the largest capital investment cycles in human history, with massive spending on data centers and chips. If chips become obsolete faster, companies may shorten depreciation schedules, which can swing profitability on paper by tens of billions. That paper swing matters because it affects borrowing, valuations, and whether an AI infrastructure boom looks like a sustainable buildout or a leveraged bubble.
Toews warns of an "impairment bomb" scenario if chips are treated as five-year assets but lose economic value in two years. He singles out CoreWeave, noting that the company uses long depreciation schedules for GPUs and carries significant debt secured by those assets, setting up a 2026 storyline where accounting assumptions become market drama.
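The arithmetic behind that scenario is simple enough to sketch. With illustrative numbers of my own (not CoreWeave's actual figures), straight-line depreciation on a $30 billion GPU fleet looks like this:

```python
# Straight-line depreciation: annual expense = asset cost / useful life.
# Illustrative numbers only, not any company's actual figures.
fleet_cost = 30_000_000_000  # $30B of GPUs

for useful_life_years in (5, 2):
    annual_expense = fleet_cost / useful_life_years
    print(f"{useful_life_years}-year schedule: "
          f"${annual_expense / 1e9:.0f}B expensed per year")

# 5-year schedule: $6B expensed per year
# 2-year schedule: $15B expensed per year
# Same hardware, same cash out the door: a $9B-a-year swing in reported
# profit, driven by nothing but an assumption about how fast chips age.
```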
At the same time, the industry continues to predict chip innovation and chip nationalism. Toews expects China's domestic AI chip sector to make "concrete, meaningful progress," even if it does not achieve parity with Nvidia's most advanced hardware. He also anticipates that "many more AI companies will begin building custom chips," citing OpenAI's partnership with Broadcom and the idea that reinforcement learning could speed up chip design. Meanwhile, the predictions from Arm, the British semiconductor and software design company, go even further, forecasting modular "chiplets," advanced packaging and 3D integration, and "secure-by-design silicon" becoming non-negotiable as AI embeds itself in critical infrastructure. Arm also predicts distributed AI compute pushing more intelligence to the edge, cloud-edge-physical convergence, and smaller specialized models gaining ground.
Then there's the infrastructure constraint everyone quietly tiptoes around: energy. It's all but certain that energy and data center limits will shape which AI features scale widely, pushing companies toward lighter-weight features and more selective deployment of "heavy" AI. The Guardian, for example, reports on a growing backlash in Saline Township, Michigan, where residents are fighting a proposed $7 billion, 1.4-gigawatt data center they say could raise electricity bills, threaten groundwater, and erode the area's rural character, while also complicating the state's clean-energy trajectory.
Put it together and 2026's most influential AI arguments might not happen on X or in keynote speeches. They might happen on earnings calls, in data center permitting meetings, and in the footnotes of financial statements.
Work, Jobs, and the Election-Year Fight Over Who Gets Protected
Predictions about AI and labor are getting sharper, less theoretical, and more politically radioactive.
Toews predicts AI will become a central issue in the 2026 US midterm elections, with "AI-driven job loss" turning into the biggest political fault line. He cites an MIT study concluding that AI could replace 11.7 percent of the US workforce, representing more than $1 trillion in wages, and argues that politicians will be forced into messy balancing acts: Republicans navigating a pro-industry posture against populist job protection, Democrats trying to mitigate harm without appearing anti-innovation or weak on national security.
Brynjolfsson's forecast for "AI economic dashboards" slots into this same future. If executives check "AI exposure metrics daily alongside revenue dashboards," as he predicts, then labor displacement becomes visible in near real time, not after the fact in annual reports. In politics, visibility is fuel.
Snowflake's report examines how work will change due to AI, even when jobs do not disappear. Workers must learn "human-AI collaboration and communication," Chris Child, Snowflake's VP of Product, Data Engineering, says, because models can go deep on data, but humans still need judgment. "AI models will have a deep understanding of your data," he says. "But you'll still have to know when to doubt, when to ask deep follow-up questions before taking action." Baris Gultekin, Snowflake's Vice President of AI, describes the skill shift inside software engineering: "Rather than being good at writing code, the engineer has to be good at describing very clearly what they want built," he says.
The same report asks the uncomfortable pipeline question: "You used to train up an intern or entry-level worker. And now you're going to use AI assistants instead," says Dageville, who is also Snowflake's president of product. "AI has arrived so quickly that we don't yet know how the world is going to reorganize itself."
A lot of 2026 predictions read like this: the technology accelerates, the institutions wobble, and everyone realizes that "efficiency" is not a neutral word when it shows up in a community with a shrinking tax base.
Trust, Labels, and the Culture Wars Inside Advertising Tech
Not all 2026 predictions are about enterprises and governments. Some are about the weirdest parts of the internet, and the business models built on top of them.
Expect, for example, more transparent labeling for edited and synthetic media, with apps offering "this was edited" indicators and export options that keep that information attached as content travels.
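Mechanically, "keeping that information attached" means provenance metadata bound to the content itself, so downstream apps can check it. Here is a toy sketch of the idea; real systems, such as the C2PA standard, add cryptographic signatures and a full manifest chain, which this deliberately omits.

```python
import hashlib
import json

def label(content: bytes, edits: list[str]) -> dict:
    """Bind an edit history to the exact bytes of the content."""
    return {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "edits": edits,  # e.g. ["ai_generated", "background_replaced"]
    }

def verify(content: bytes, record: dict) -> bool:
    # If the content changed after labeling, the hash won't match.
    return hashlib.sha256(content).hexdigest() == record["content_sha256"]

photo = b"...image bytes..."
record = label(photo, ["ai_generated"])
print(json.dumps(record, indent=2))
print(verify(photo, record))         # True: label still valid
print(verify(photo + b"!", record))  # False: content was altered
```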
Snowflake's retail predictions echo the same trust crisis from a marketing angle. Rosemary DeAragon, the company's Global Head of Retail & Consumer, says "AI slop" is eroding confidence in what people see online. "Digital natives are starting not to trust what they see on the internet, because so much content is now AI-generated," she says. "They're more likely to trust a human influencer." She predicts privacy approaches will drive the success of shopping agents, and argues that platforms like Apple could benefit from consumer trust. "Apple knows who I talk to, what's on my calendar, where I go, and more, so it can create smart agents that really know me," she says.
Crystal Foote, founder of Digital Culture Group, pushes the story further into cultural strategy. She predicts that "diversity and cultural intelligence" will become AI's strategic edge. She also warns that, without diverse talent guiding development, campaigns risk automating bias. She sees voice as the next frontier for contextual targeting, imagining voice commands as new ad touchpoints, from asking Alexa for recipes to cueing podcasts on smart speakers.
Foote says 2026's ad-tech winners won't be the ones with the biggest datasets. They'll be the ones who can decode what people feel and mean without flattening culture into a template. That forecast comes with a sting: agencies that cut DEI roles may discover they cut the very expertise needed to keep AI from producing expensive, viral cringe.
It's a paradox: the more AI content floods feeds, the more valuable humans become as proof-of-life signals.
Medicine and Biology: The Platform Wars Move into the Body
AI in health care is likely to split into two simultaneous storylines, industry watchers tell us: breakthrough excitement and evaluation fatigue.
Curtis Langlotz, Professor of Radiology, Medicine, and Biomedical Data Science and Senior Associate Vice Provost for Research at Stanford University, forecasts a "ChatGPT moment" for AI in medicine, driven by self-supervised learning and massive high-quality healthcare data that can "boost the accuracy of medical AI systems" and enable tools that diagnose rare diseases.
Russ B. Altman, the Kenneth Fong Professor of Bioengineering, Genetics, Medicine, Biomedical Data Science, and Computer Science at Stanford, predicts a surge in the "archeology" of high-performing neural nets, arguing that science needs insight, not just accurate predictions. "In 2026, I expect more focus on the archeology of the high-performing neural nets," Altman says, describing work that examines attention maps and internal representations to understand why models perform well.
Nigam Shah, Professor of Medicine at Stanford and Chief Data Scientist for Stanford Health Care, predicts that vendors will try to bypass slow health system procurement cycles and go "directly to the user" with free applications, and argues that patients will need to know the basis on which AI help is being provided.
Meanwhile, Toews predicts the business side will get aggressive: "One of the large global pharma companies will acquire one of the leading protein AI startups," he writes, pointing to recent progress in AI-generated antibody therapeutics and listing possible acquirers ranging from AbbVie to Takeda Pharmaceuticals. The prediction is not that AI-designed drugs magically hit the market overnight. It's that the platform and the scarce talent behind it become too essential to rent.
In 2026, medicine's AI fight is not just about model quality. It's about who gets trusted, who gets regulated, and whether the tsunami of noise from startups consolidates into a smaller number of systems that hospitals can actually live with.
The Wild Cards: IPOs, Leaks, CEO Swaps, and Brain Interfaces
Not every prediction is about governance frameworks and protocols. Some are about drama, and some are about bodies.
Toews predicts that Anthropic will go public in 2026, while OpenAI won't, arguing that both companies are uniquely capital-hungry but that OpenAI can likely raise enough private capital to delay public scrutiny. He also predicts details of Ilya Sutskever's secretive Safe Superintelligence lab will leak, and that the idea will be significant enough to force big labs to "recalibrate their own research roadmaps." Sutskever's own hint is practically engineered for speculation. "We've identified a mountain that's different from what I was working on," Sutskever told a reporter last year. "Once you climb to the top of this mountain, the paradigm will change … Everything we know about AI will change once again."
Then comes the biggest boardroom forecast: Toews predicts that Sam Altman will actually step aside as CEO of OpenAI. Why? Public companies live under a microscope. Regulators, investors, and the public expect tight controls, transparent reporting, and steady execution, often more than private companies do. A rough comparison is Uber, which, as it geared up to go public, replaced its big-vision founder, Travis Kalanick, with Dara Khosrowshahi, an experienced executive better suited to running a public-company operation. Toews wonders if OpenAI might be heading for a "Dara era" dynamic.
Whether that happens or not, the prediction is really about maturation. The industry is drifting from founder mythology to corporate governance, because the bill is too large to keep paying on vibes alone.
Finally, there is the prediction that makes the rest feel quaint: brain-computer interfaces go mainstream. Toews expects BCI to shift from the fringe to a "mainstream technology and startup category," with non-invasive approaches gaining momentum and Neuralink's dominance becoming "shakier." He highlights competitor arguments about invasiveness, quoting Precision Neuroscience cofounder Ben Rapaport: "For a medical device, safety often implies minimal invasiveness," Rapaport said in an interview last year, explaining why his team believes it can "extract information-rich data from the brain without damaging the brain."
If 2026 is the year the AI industry grows up, BCI is the reminder that "up" can also mean "into."
The 2026 Meta-Prediction: AI Stops Being a Feature and Starts Being a System
One line from Snowflake's report reads like a mission statement for this whole forecast genre: "By the end of 2026, the central question won't be what AI can do; it will be how people and AI work together."
That is the hidden center of almost every prediction I have received this year.
Agents get better, then get managed. Models get cheaper, then get audited. Chips get faster, then get depreciated. Synthetic media gets better, then gets labeled. The political fight gets louder, then gets quantified. The protocols compete, then someone wins, or nobody does, and enterprises scream.
The future here is not a single breakthrough. It's an accumulation of scaffolding.
If these prognostications are to be believed, in 2026, AI is likely to become less like a demo and more like infrastructure. Which means the next big question is not whether it's intelligent.
It's whether it's governable.