Tech Leaders from Microsoft, Google, OpenAI Warn About AI Dangers

AI mavens warn about the technology's risks.

Prominent figures associated with current or former positions at Microsoft, Google, and OpenAI have recently raised concerns about the risks associated with advanced artificial intelligence.

Adding to the sense of alarm, more than 27,000 signatories, including industry luminaries such as Elon Musk and Steve Wozniak, have endorsed an open letter urging all AI research laboratories to immediately pause, for at least six months, the training of AI systems more powerful than GPT-4, OpenAI's latest large language model (LLM).

No doubt about it, the backlash against rapidly progressing and unregulated AI shows no signs of easing.

Here's the latest from those corporate execs.

Michael Schwarz, Chief Economist, Microsoft: 'It Will Cause Real Damage'

As Bloomberg reported on Wednesday, May 3, Schwarz sounded the alarm during a World Economic Forum panel in Geneva.

Michael Schwarz (source: Bloomberg).

"I am confident AI will be used by bad actors, and yes it will cause real damage," Schwarz was quoted as saying. "It can do a lot of damage in the hands of spammers with elections and so on."

He believes regulation is needed, but only after evidence of clear harm is found.

"Once we see real harm, we have to ask ourselves the simple question: 'Can we regulate that in a way where the good things that will be prevented by this regulation are less important?'" Schwarz said, according to Bloomberg. "The principles should be the benefits from the regulation to our society should be greater than the cost to our society."

Like other pundits, he sees AI as a double-edged sword that can be used for good or bad. "We, as mankind, ought to be better off because we can produce more stuff with less work," he said. What's more: "I like to say AI changes nothing in the short run and it changes everything in the long run."

Paul Christiano, Former Researcher, OpenAI: 10-20 Percent Chance of 'Most Humans Dead' AI Takeover Scenario

Now heading the non-profit Alignment Research Center, Christiano formerly ran the language model alignment team at OpenAI, creator of ChatGPT. OpenAI provides the advanced, generative AI tech behind that sentient-sounding chatbot and many Microsoft products and services, thanks to a multi-billion-dollar investment from Redmond. During a recent podcast, Christiano predicted "maybe a 10-20 percent chance of AI takeover."

Paul Christiano (bottom) with Ryan Sean Adams and David Hoffman (source: Bankless).

He said that on the Bankless podcast, where hosts Ryan Sean Adams and David Hoffman spent much of the episode discussing an AI "Doomsday" scenario and other warnings of doom with Christiano, an AI safety alignment researcher recommended to the show by Eliezer Yudkowsky, a prominent AI researcher and writer.

"We have a special guest in the episode today," said Adams. "Paul Christiano. This is who Eliezer Yudkowsky told us to go talk to, someone he respects on the AI debate. So, we picked his brain -- this is an AI safety alignment researcher. We asked the question, 'how can we stop the AIS from killing us, can we prevent the AI takeover that others are very concerned about?'"

Adams listed four main questions for discussion:

  • How big is the AI alignment problem?
  • How hard is it to actually solve this problem?
  • What are the technical ways we solve it, and how can we coordinate to solve it?
  • What's a possible optimistic scenario where we live in harmony with the AIs, and they improve our lives and make them quite a bit better?

Adams noted that Yudkowsky had a much more pessimistic view about the possibility of an AI doom scenario than did Christiano.

Here's one exchange about that:

Adams: Why don't we just start by wading into the deep end of the pool here: What is your percentage likelihood of the full-out Eliezer Yudkowsky doom scenario where we're all gonna die from the machines?

Christiano: "I think this question is a little bit complicated, unfortunately, because there are a lot of different ways, we could all die from the machines. So, the thing I most think about -- and I think Elias most talks about -- is this sort of full-blown AI takeover scenario. I take this pretty seriously. I think I have a much higher probability than a typical person working in ML. I think maybe there's something like a 10-20 percent chance of an AI takeover [with] many, most humans dead."

Geoffrey Hinton, Former Google AI Researcher: 'He Worries [AI] Will Cause Serious Harm'

The 75-year-old Hinton, widely recognized as an AI pioneer, recently quit his job at Google and is warning about the technology's dangers, according to the New York Times article "'The Godfather of A.I.' Leaves Google and Warns of Danger Ahead." The article's subhead reads: "For half a century, Geoffrey Hinton nurtured the technology at the heart of chatbots like ChatGPT. Now he worries it will cause serious harm."

Geoffrey Hinton (source: Wikipedia).

"It is hard to see how you can prevent the bad actors from using it for bad things," Hinton was quoted as saying in the article, which noted that Hinton believes advanced AI systems are becoming increasingly dangerous. "Look at how it was five years ago and how it is now. Take the difference and propagate it forwards. That's scary."

Here's another Hinton quote from the article: "The idea that this stuff could actually get smarter than people -- a few people believed that. But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that."

And More...

The backlash against runaway AI advancement continues to grow, with many more industry luminaries weighing in. That includes Musk, who is in a unique position: he co-founded OpenAI but has since criticized the company's switch in direction from research to chasing profits. On Monday, he tweeted, "Even benign dependency on AI/Automation is dangerous to civilization if taken so far that we eventually forget how the machines work."

That might be seen as even more alarming considering that we don't yet even understand how the machines work, much less risk forgetting how they work.


About the Author

David Ramel is an editor and writer at Converge 360.
