Practical AI

Overcoming Fear of AI

Anyone who has been involved with computers for any length of time is unlikely ever to have felt anything like fear of a piece of software. And yet, for several years now, many have acknowledged a fear of artificial intelligence (AI) software. Perhaps even you.

You Are Not Alone
If you do find yourself having some level of trepidation about the advent of AI (or plain old fear), you're in incredibly good company.

Tesla CEO Elon Musk, who owns the AI company xAI in addition to SpaceX and X, all technology-based companies, has said, "I'm increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to ensure we don't do something very foolish. I mean, with artificial intelligence, we're summoning the demon."

Prof. Stephen Hawking, the genius who wrote The Theory of Everything, A Brief History of Time, On the Shoulders of Giants, and so much more, was very clear about his concerns surrounding AI. "Success in creating AI could be the biggest event in the history of our civilisation," he said. "But it could also be the last – unless we learn how to avoid the risks. Alongside the benefits, AI will also bring dangers like powerful autonomous weapons or new ways for the few to oppress the many."

He continued, adding, "We cannot predict what we might achieve when our own minds are amplified by AI. Perhaps with the tools of this new technological revolution, we will be able to undo some of the damage done to the natural world by the last one – industrialisation."

Bill Gates, the fabled founder of Microsoft and major philanthropist, has predicted that doctors, teachers, and others could all be replaced within the next 10 years. Gates has often said he shares Stephen Hawking's "end of humanity" concerns. When asked, "Will we still need humans?" he replied, "Not for most things."

Demis Hassabis, Nobel Prize winner and co-founder of Google's DeepMind, has acknowledged that even he does not fully know how AI functions. In a recent interview on CBS 60 Minutes, he said, "We have theories about what kinds of capabilities these systems will have. That's obviously what we try to build into the architectures. But at the end of the day, how it learns what it picks up from the data is part of the training of these systems. We don't program that in. It learns like a human being would learn. So new capabilities or properties can emerge from that training situation."

His other concerns focused more on the other player in the AI relationship. "There's two worries that I worry about," he said. "One is that bad actors, humans you know, users of these systems repurpose these systems for harmful ends. And then the second thing is the AI systems themselves as they become more autonomous and more powerful. Can we make sure that we can keep control of the systems? That they're aligned with our values, they-- they're doing what we want that benefits society. And they stay on guardrails."

Geoffrey Hinton, winner of the Turing Award and Nobel Prize, often called the "Godfather of AI" who served as a Vice President at Google Brain, has recently become very vocal about his concerns. "I have suddenly switched my views on whether these things are going to be more intelligent than us," he has said. "I think they're very close to it now and they will be much more intelligent than us in the future... How do we survive that?"

"Don't think for a moment that Putin wouldn't make hyper-intelligent robots with the goal of killing Ukrainians," Hinton also said. "He wouldn't hesitate. And if you want them to be good at it, you don't want to micromanage them—you want them to figure out how to do it."

He is also quite confident about the outcome of the human/AI competition. Supporting his position that there is a 10% to 20% chance AI will lead to human extinction within three decades, he has said, "There is no question that machines will become smarter than humans—in all domains in which humans are smart—in the future. It's a question of when and how, not a question of if."

Sam Altman, CEO of OpenAI, the creator of ChatGPT, has warned that AI systems may start to think for themselves and even seek to take over or eliminate human civilization. He joined hundreds of top AI scientists in signing a letter that declared, "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

The Changing Role of Humans
Some are signaling a more reasonable change. Especially with the introduction of Agentic AI, many have predicted a paradigm shift in which humans cease being the "doers" and become managers with tens or hundreds of AI Agents serving as their "digital employees" or "digital workers."

Development of this concept is much further along than most may think. In fact, the usually ponderous United States federal government has published a "Digital Worker Identity Playbook," the product of a collaboration between the Identity, Credential, and Access Management Subcommittee of the Federal Chief Information Security Officer (CISO) Council and the General Services Administration Office of Government-wide Policy Identity Assurance and Trusted Access Division.

The Playbook defines a digital worker as being "an automated, software-based tool, application, or agent that performs a business task or process similar to a human user and uses Artificial Intelligence (AI) or other autonomous decision-making capabilities." It then defines a four-step process for digital worker identity management that includes determining the impact, creating an identity, provisioning that identity, and maintaining and ultimately deprovisioning the identity.
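The Playbook's four-step process is, at heart, a lifecycle. A minimal sketch of that lifecycle might look like the following; the class and method names here are illustrative only and do not come from the Playbook itself:

```python
from enum import Enum, auto

class LifecycleStage(Enum):
    """Stages loosely mirroring the Playbook's four-step process."""
    IMPACT_ASSESSED = auto()
    IDENTITY_CREATED = auto()
    PROVISIONED = auto()
    DEPROVISIONED = auto()

class DigitalWorkerIdentity:
    """Hypothetical model of a digital worker's identity lifecycle."""

    def __init__(self, name: str, impact_level: str):
        self.name = name
        self.impact_level = impact_level  # step 1: determine the impact
        self.stage = LifecycleStage.IMPACT_ASSESSED

    def create_identity(self) -> None:
        # step 2: establish a unique, accountable identity record
        self.stage = LifecycleStage.IDENTITY_CREATED

    def provision(self) -> None:
        # step 3: grant only the access the worker's task requires
        self.stage = LifecycleStage.PROVISIONED

    def deprovision(self) -> None:
        # step 4: maintain the identity, then retire it when work ends
        self.stage = LifecycleStage.DEPROVISIONED

worker = DigitalWorkerIdentity("invoice-bot", impact_level="moderate")
worker.create_identity()
worker.provision()
worker.deprovision()
print(worker.stage.name)  # DEPROVISIONED
```

The point of the sketch is that a digital worker's identity is tracked and eventually retired just like a human employee's credentials, which is exactly the parallel the Playbook draws.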

The explanations are substantial and worth reading. The authors even take the time to point out that digital workers may not be entitled to the benefits usually accorded to human workers, a hint at how they expect people to approach the question.

Workday VP of Thought Leadership and Customer Advocacy Michael Brenner said in a blog post, "I believe AI can elevate human potential. But this evolution requires a shift in our thinking. We need to move beyond viewing AI agents as mere tools and start considering them as integral components of a broader workforce—a digital workforce that augments and empowers human employees."  

With the recent introduction of Google's Agent-to-Agent (A2A) protocol, which complements Anthropic's Model Context Protocol (MCP), the development of digital workers is accelerating. A2A enables agents to collaborate by combining their capabilities, while MCP gives each agent standardized access to the tools and data available to the various large and small language models (LLMs and SLMs) it uses. Agents can also seek out and find other agents with the specific capabilities they need to complete their objectives.
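That discovery-and-delegation pattern can be sketched in a few lines. This is a hypothetical illustration of the idea only; the class and method names are invented for this example and are not the real A2A or MCP APIs:

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """A worker agent that advertises a set of capabilities."""
    name: str
    capabilities: set

    def handle(self, task: str) -> str:
        return f"{self.name} completed '{task}'"

@dataclass
class AgentRegistry:
    """Directory an orchestrating agent can query to find helpers."""
    agents: list = field(default_factory=list)

    def register(self, agent: Agent) -> None:
        self.agents.append(agent)

    def find(self, capability: str):
        # discovery: locate an agent advertising the needed capability
        return next((a for a in self.agents if capability in a.capabilities), None)

registry = AgentRegistry()
registry.register(Agent("summarizer", {"summarize"}))
registry.register(Agent("translator", {"translate"}))

# delegation: an orchestrating agent hands off work it cannot do itself
helper = registry.find("translate")
print(helper.handle("translate the quarterly report"))
```

The design choice worth noting is that capability lookup, not a hard-coded org chart, decides which agent gets the work, which is what lets a "digital workforce" grow without reconfiguring every existing agent.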

Many have suggested that instead of replacing humans completely, AI agents will become the digital employees of human managers.

Closing his blog post, Brenner explains, "The rise of AI agents marks a pivotal moment in the evolution of work. As we navigate this new digital workforce, we need to consider how best to "manage" these digital agents. By prioritizing human values and ethical considerations, we can create a future where AI and humans work together to achieve greater outcomes, working together seamlessly by contributing their unique strengths to achieve common goals. This requires a thoughtful and proactive approach to digital workforce management, ensuring responsible development, ethical considerations, and clear governance structures."

Nobody has promised that this would be easy. But there are viable alternatives to AI overrunning, eliminating, or otherwise damaging humankind. Let's hope we're wise enough to take the right path.

About the Author

Technologist, creator of compelling content, and senior "resultant" Howard M. Cohen has been in the information technology industry for more than four decades. He has held senior executive positions in many of the top channel partner organizations and he currently writes for and about IT and the IT channel.
