News

Practical AI: Why 2026 Is the Year Intelligence Becomes Infrastructure

At 2:13 a.m., a resident physician rereads the same sentence for the third time: "Patient denies chest pain but feels crushing pressure." The note is full of the kind of contradictions clinicians live with and software hates. The patient’s family speaks two languages at home. The vitals are jittery because the patient keeps shifting under the leads. Somewhere in the hospital’s back end, billing codes must be assigned, orders placed, alerts triaged, and a dozen systems updated, all of which were never designed to agree with one another.

For the last few years, the pitch has been that AI could mop up this mess. In 2026, the prediction wave says the pitch changes. The demo is no longer the product. The benchmark is no longer the finish line. The era of AI as a science project ends when AI touches reality, and reality touches back.

That shift shows up again and again across a pile of 2026 forecasts from executives, analysts, and industry observers: AI moves from impressive to accountable, from centralized to distributed, from assistants to agents, and from "trust us" to "prove it." Even the chip story, usually the most inside baseball corner of tech, starts to read like a political thriller.

My inbox has been inundated with predictions about AI in the coming year. What follows is a tour of those predictions, grouped by the themes that recur, even when the sources come from very different worlds.

Practical AI Replaces "Look What It Can Do" With "Prove What It Did"

Dinakar Munagala, cofounder and CEO of Blaize, a provider of full-stack artificial intelligence solutions for automotive and edge computing, calls the next phase in the evolution of enterprise artificial intelligence "Practical AI," and says we'll be seeing a shift away from experimental systems toward intelligence that "delivers reliable outcomes in the physical world." In his view, 2026 is when organizations feel real pressure to move beyond pilots because "AI that works only in controlled environments will no longer be enough."

This is not just a vibe shift. In its report, "Predictions 2026: The Race to Trust and Value," the industry analysts at Forrester declare: "In 2026, organizations will face a reckoning. AI hype will yield to pragmatism, business buyers will demand hard evidence, and CX fatigue will reach a breaking point. Amid ongoing uncertainty, trust will provide a critical competitive edge." In another report, "2026 outlook for CIOs and CISOs," the industry watchers frame 2026 as a year when the "AI hype period is over" and the margin for error keeps shrinking. The headline is not bigger models; it's measurable results from secure AI initiatives under tighter scrutiny.

Forrester’s specific predictions turn that pressure into corporate physics:

  • A quarter of CIOs will be asked to bail out business-led AI failures as agentic projects stall or compound errors.
  • Enterprises will delay 25% of planned AI spending into 2027 because ROI is not landing, and CFOs get pulled deeper into AI investment decisions.

Those two predictions pair neatly with Munagala’s argument: pilots and prototypes are tolerated when money is cheap and expectations are vague. In 2026, the forecasts say the tolerance ends. The questions become: Where does the value show up? Who owns the downside? Who gets fired when the agent confabulates in production?

Tom Evans, Chief Partner Officer at Cloudflare, a provider of AI infrastructure and security tools, sketches the same reality from the channel side: customers are drowning in digital sprawl, and consolidation becomes the dominant strategy as they demand outcomes over tool clutter.

The common thread is clear: 2026 is going to be the year AI stops being judged on cleverness and starts being judged on consequences.

Accountability Arrives, and Healthcare Drags AI into Court, Culturally and Clinically

If there's one industry that can force a technology to grow up, it's healthcare.

Lars Maaløe, CTO and cofounder of healthcare-focused AI company Corti, predicts that "AI will enter its first era of clinical accountability" in 2026. He reasons that healthcare exposes failure modes that do not show up in benchmarks: noisy inputs, ambiguous phrasing, multilingual shifts, and unknown behaviors that only appear in production.

He also offers a prediction that exemplifies a broader pattern in the AI market: coding AI in healthcare, historically brittle and inaccurate, finally becomes deployable because multi-agent architectures and clinical-specialized models can break the work into auditable reasoning steps rather than "guessing labels." In this framing, getting the tedious infrastructure task right becomes the prerequisite for doing the flashy clinical stuff safely.

Here's the 2026 pattern in miniature:

  • The world stops caring what the model can do in a lab.
  • The world starts to care about what the system does on a Tuesday night when the data is messy and the stakes are high.
  • The "unsexy" scaffolding (coding, auditability, workflow design) becomes the real innovation.

Healthcare is where those demands are hardest to dodge, which is exactly why Maaløe predicts that it will become the standard-setter for reliability.

Agents Stop Being Sidekicks and Start Becoming Coworkers

A surprising number of 2026 predictions are about the same plot twist: the assistant gets promoted.

Aniket Shaligram is VP of Technology at Talentica Software, a product engineering firm partnering with startups and high-growth tech companies. He predicts AI agents will shift from assistants to full team contributors, with organizations formalizing their responsibilities and redesigning workflows into human-AI hybrid models.

Udo Sglavo leads Applied Artificial Intelligence and Modeling Research and Development at SAS. He uses even stronger language: agents are "teammates," not tools, and enterprises will be expected to operate with mixed human-AI teams where agents execute tasks, share context, and learn alongside people.

R Systems echoes this prediction with an HR-flavored forecast. Neeraj Abhyankar, VP of Data and AI at R, expects the AI workforce to evolve toward specialized roles as AI becomes "autonomous agents acting as digital co-workers with defined responsibilities and KPIs." He expects growth in roles like AI integration architects, governance and ethics leads, and operations specialists focused on real-time reliability and compliance.

Kalpak Shah of R Systems pushes the interface angle: he expects enterprise UX to shift from "learn our menus" to "tell the system what needs to happen and supervise it," turning knowledge workers into agent managers.

Abhishek Gupta, Head of Data Science at Talentica, predicts that the underlying architecture will follow: AI systems will move away from model training and serving pipelines toward agent ecosystems, where context routing, memory, tool integration, and distributed decision-making become the platform.

Paul Aubrey, Director of Product Management at NetApp Instaclustr, NetApp's fully managed service for open-source databases and data pipelines (including Cassandra, Kafka, and PostgreSQL), describes the same destination with different plumbing: "composable intelligence" replaces monolithic AI, with reusable micro-agents connected through frameworks like the Model Context Protocol (MCP).

If you stitch these together, you get a 2026 office where "AI" is not a chatbot tab. It's an org chart.

And that is where the mood shifts, because the same forecasts also predict that agent ecosystems will create the next great security and governance headache.

The Agent Era Expands the Attack Surface, and Security Moves to Runtime

Dor Sarig’s essay "The New AI Attack Surface: 3 AI Security Predictions for 2026" reads like a warning label for the future of agents. Sarig, the CEO and Co-Founder of Pilla Security, predicts that breaches will increase in volume and severity as AI use cases grow, sensitive data access expands, and agent-to-agent communication spreads faster than controls can keep pace.

Sarig’s three predictions map directly onto how agent systems work:

  • Indirect injection becomes the silent data poisoner, with malicious instructions embedded in seemingly legitimate data sources that AI systems treat as executable guidance.
  • The coding agent becomes a backdoor factory, not just generating vulnerable code, but enabling supply chain infiltration through poisoned toolchains and trusted integrations.
  • Agent-to-agent attack propagation accelerates through "toxic combinations," in which individually safe tools and agents become dangerous when chained together via implicit trust graphs.

Author and industry insider Ken Underhill’s "5 Cybersecurity Predictions for 2026" (TechRepublic) hits similar notes from a broader security lens, predicting:

  • AI-powered attacks meet AI-driven defense, turning the SOC into an arms race where automation accelerates both sides.
  • Deepfakes force a trust revolution, pushing for provenance, watermarking, and cryptographic signatures to go mainstream.
  • ERP and OT systems become prime targets, with operational systems treated like crown jewels because compromise has immediate physical consequences.
  • A shift to a predictive SOC, where defenders aim to anticipate intent and prevent impact, not just close alerts.
  • The rise of on-device AI malware, where local models generate and adapt malicious behavior without obvious network indicators.

The key insight from Sarig and Underhill: 2026 security cannot focus solely on the model. It must address how the entire system behaves in context, spanning tool calls, permissions, memory, and chain-of-command.
That's why Sarig argues that the security focus must shift "from securing code to securing runtime behavior." In an agentic world, the exploit is often simply a sentence.

Trust Infrastructure Becomes a Product, not a Promise

Once AI systems are making decisions, writing code, touching finance workflows, or influencing healthcare outcomes, "trust" stops being a slogan and becomes an engineering discipline.

AlixPartner's "2026 Enterprise software technology predictions report" explicitly calls this out, labeling it a differentiator: trust infrastructure becomes the critical innovation enabling widespread AI adoption. It frames trust-as-a-product as identity, privacy, safety, audit, and interoperability built into the platform, not bolted on after a breach.

SAS reinforces the compliance side. Iain Brown, Head of Data Science, Northern Europe, at SAS, says, "2026 is the Year of the AI Audit and the fines bite," citing EU Artificial Intelligence Act obligations kicking in from August 2026 and boards demanding provable model lineage, data rights, and oversight as standard. He adds that "explainability theatre" disappears, and techniques such as synthetic data and differential privacy become the default tools for safer model refreshes.

Forrester’s report concludes that, in executive accountability, CIOs and CISOs are expected to lead with precision, resilience, and governance because the "race to trust and business value is on."

Leostream develops connection broker software for virtual desktop infrastructure. Its namesake platform helps organizations provide remote access to high-performance computing resources needed for AI development and deployment. The company supports GPU-enabled environments that are essential for AI/ML to work. The company's 2026 predictions bring the identity and privileged access layer into the story:

  • Passwordless moves from pilot to production in privileged environments.
  • AI-assisted session security becomes proactive, with models summarizing risky activity and enforcing policies like termination or step-up authentication.
  • Browser-based, clientless privileged access rises as organizations ditch thick clients and VPN dependencies.
  • Threat-driven urgency pushes IAM and PAM to the board level, especially for vendor access risk.
  • Hybridization of everything expands identity complexity across employees, vendors, and machine identities.

All of these forecasts are, in different ways, predicting the same institutional pivot: trust shifts from "we tested it" to "we can prove what it did, why it did it, and who approved it."

AI Moves Closer to the Action: Edge Inference, Local Compute, and New Chips

If 2026 is the year of Practical AI, it's also the year of where AI runs.

Munagala’s Practical AI thesis leans hard into proximity: intelligence moves "closer to where data is created," with cities, retailers, and industrial teams relying on localized analytics and on-device decision-making when bandwidth, power, or time are limited.

That same decentralization shows up across infrastructure predictions:

  • Pankaj Mendki, Head of Emerging Tech at Talentica Software, predicts that GPU-as-a-Service will become the default infrastructure for GenAI workloads as organizations shift from managing clusters to on-demand, serverless GPU models.
  • Forrester predicts that neoclouds will generate $20 billion in revenue, with providers such as CoreWeave and Lambda challenging hyperscalers by prioritizing GPU workloads, open-source models, and sovereign AI solutions.
  • Itiel Shwartz, CTO and cofounder of Komodor, maker of an autonomous AI SRE platform for cloud-native Infrastructure, predicts AI workloads will continue shifting from training to massive-scale inference, with Kubernetes scheduling needing a "makeover," job queueing systems like Kueue seeing major uptake, and GPU overprovisioning becoming a pressing problem.
  • Catalin Voicu, Cloud Solutions Engineer at N2W (Not to Worry), a provider of cloud-native backup, disaster recovery, and data lifecycle management for workloads on AWS and Azure, predicts that outages and dependency fatigue will push organizations toward multi-cloud "shadow cloud" backups and "invisible cloud" architectures with predictive failover.

Then there's the chip story, where geopolitics and architecture collide.

Reuters Breakingviews journalist Robyn Mak predicts that open-standard, customizable chips could emerge as AI's dark horse in 2026. She highlights RISC-V as a potential breakout architecture, noting that companies are seeking alternatives to x86 and Arm as customization becomes increasingly essential for AI workloads.

That prediction rhymes with Talentica’s: Mendki predicts that RISC-V will become a powerhouse for edge AI, citing its open, modular flexibility and cost advantages for inference on sensors, embedded systems, and robotics.

It also rhymes with cybersecurity company Venn’s remote work and endpoint view: David Matalon, Founder and CEO at Venn, predicts local AI on the PC will become standard as enterprises adopt AI-powered PCs that process generative AI on-device for privacy and security benefits.

Talentica agrees: Mendki predicts RISC-V will dominate edge AI, thanks to its open architecture, modular design, and cost advantages for sensors, embedded systems, and robotics.

So does Venn (from a different angle): Founder and CEO David Matalon sees on-device AI becoming standard as enterprises embrace AI-powered PCs that run generative AI locally, keeping data private and secure.

Put those together, and you get a 2026 hardware narrative that is not about a single magic chip. It's about a migration:

  • from centralized to distributed compute,
  • from general-purpose to customized architectures,
  • from "cloud-first" to "location matters."

In Practical AI terms, the AI has to show up where the world happens.

The Software Business Model Breaks, and Pricing Becomes a Sensor for Reality

If you want to know whether AI is becoming infrastructure, don't look at model size. Look at invoices.

Several predictions converge on the idea that per-seat pricing, the economic engine of SaaS, cannot survive agentic software.

AlixPartners predicts that hybrid SaaS models featuring usage- and outcomes-based elements will comprise up to 40% of software revenue by 2026. It also expects valuation models to shift away from pure ARR multiples toward hybrid frameworks that incorporate AI leverage ratios and outcome-based metrics.

Talentica makes the same call more directly: Shaligram predicts that token-based pricing will replace GenAI subscription models because businesses demand transparency and flexibility as AI usage scales.

Kalpak Shah, Founder and CEO of Velotio Technologies, a product engineering and digital solutions company, says vendors will be pushed away from per-seat licensing toward usage, token, and outcome-based pricing, with cost drivers shifting from "more users" to "more compute."

Forrester’s ROI pressure and delayed spending prediction give this pricing shift its motive power. If CFOs are gating deals and ROI is questioned, pricing becomes a referendum on whether AI is delivering value.

Even Cloudflare’s partner predictions fit here. Evans predicts that consolidation will become the dominant strategy as customers seek to reduce tool sprawl and costs, and partners are judged on their ability to deploy secure architectures for AI workflows.

This is how "hype is over" becomes a spreadsheet:

  • If AI is a feature, you can bundle it.
  • If AI is labor, you can meter it.
  • If AI is outcomes, you can price it like performance.

A market that moves from seats to tokens is a market admitting that the unit of value has changed.

Interfaces Shift from Documents and Dashboards to Conversation and Orchestration

Several of the prognosticators hitting my inbox recently argue that the front end of software is about to feel alien.

AlixPartners predicts that conversational interfaces will become the default, with 75% of enterprise software companies embedding conversational interaction as the primary way users execute tasks.

Talentica adds a quieter, but radical complement: Mendki predicts that AI-native documents will start replacing PDFs, because PDFs are optimized for visual consistency, not machine comprehension. The forecast is for a new format with semantic structure and embedded metadata built for both humans and models.

Predictions from Komodor’s Shwartz, R Systems’ Gupta, and other infrastructure-focused industry watchers suggest the same for operations: dashboards do not scale to agent systems, and orchestration and automation become the new UI for running the business.

NetApp Instaclustr's Aubrey predicts observability will become table stakes for multi-agent AI, which means new interface expectations: replayable run histories, "explain my output" views, and end-to-end traces across tool calls.

If 2025 was the year everyone added a chat box, 2026 is likely to be the year the chat box grows roots into everything behind it.

Hiring, Jobs, and the Culture Wars Move into the AI Layer

AI does not simply change how work is done. It changes who gets hired, who gets promoted, and who gets pushed out.

Forrester predicts the time to fill developer positions will double, driven by demand for experienced AI developers, reduced junior hiring in some orgs, AI-generated application floods, and slower verification by hiring teams.

R Systems' Abhyankar predicts that new specialized roles will emerge, including AI integration architects and governance and ethics leads focused on real-time compliance and responsible deployment. Komodor’s Schwartz’s platform predictions imply greater demand for Kubernetes and GPU reliability expertise, plus AI SRE as a new catalyst for adoption.

SAS offers the most contentious labor forecast: Manisha Khanna, Global Product Marketing Lead for AI and Analytics at SAS, predicts agentic analytics will eliminate half of all data analyst jobs, as agents generate queries, write code, and recommend insights autonomously. Whether you read that as displacement, reshaping, or marketing bravado, it fits the larger theme: AI pushes work upward into oversight, governance, and domain judgment, while automating the mechanics.

Then there's the cultural layer, where "trust" is not only compliance, but representation.

Crystal Foote, Founder and Head of Partnerships at Digital Culture Group, an "audience-first, performance-engineered" ad tech company, predicts that diversity and cultural intelligence will become AI’s strategic edge in marketing. She argues that without diverse talent guiding development and deployment, campaigns risk automating bias instead of insight, and agencies that cut DEI roles missed the chance to evolve them into AI and innovation leadership.

Foote also predicts that voice will become the next frontier for contextual targeting, with voice-driven ad infrastructure and retargeting becoming a breakout channel as people speak to devices more than they type.

Vikrant Mathur, Co-Founder at Future Today, a streaming technology provider specializing in OTT and CTV ad-supported solutions, says that AI will empower SMBs to compete with giants in advertising through infrastructure-level intelligence, such as real-time optimization and contextual tracking, that work without IDs or cookies, with human insight as the differentiator. David Di Lorenzo, SVP of Kids and Family at Future Today, predicts that kids’ privacy will become the defining issue in brand safety, with advertisers demanding verifiable proof and transparency as the new trust signal.

These predictions are not only about marketing. They are about legitimacy. In 2026, the forecast is that the AI layer will become a cultural interface, and culture has a way of auditing you in public.

Quantum Anxiety Becomes Budget Reality

Although most of the forecasts I'm referencing are about agents and economics, Forrester’s quantum prediction is a reminder that security timelines do not care about hype cycles.

The tech industry research and advisory firm predicts that quantum security spending will exceed 5% of the overall IT security budget as organizations plan cryptographic migrations, track vendor readiness, and invest in discovery and crypto agility. It's a prediction about preparation, not panic; a slow, expensive shift that starts early because the downside is existential.

In a year defined by trust infrastructure, quantum is the ultimate version of the same demand: you cannot hand-wave your way through math.

So, What Is 2026, According to These Predictions?

If you compress this entire bundle of forecasts into one sentence, it's this:

2026 will be the year AI becomes operational infrastructure, and that infrastructure comes with audits, budgets, attackers, and accountability.

Blaize's Munagala calls it Practical AI, intelligence that holds up outside controlled environments. Forrester says we're entering the post-hype era, where CIOs and CISOs are judged on outcomes and trust under volatility. Corti's Maaløe calls it clinical accountability, where messy reality is the test. Sarig and Underhill point to a new attack surface in which language becomes the exploit and the runtime becomes the battlefield. AlixPartners says it's the end of the old software era, where pricing, valuation, and M&A mutate under the pressure of agentic economics. Reuters Breakingviews foresees a chip shakeup in which openness and customization suddenly matter.

Different sources, same drift: the technology becomes non-optional. It starts being responsible.

 

Featured