
AI 'Godfather' Quits Google to Voice His Concerns About the Technology He Pioneered

Geoffrey Hinton, one of a trio of scientists known as the "Godfathers of AI" for their early pioneering work on artificial neural networks, has left his position as a vice president and engineering fellow at Google so that he can speak openly about the potential dangers of the technology's rapid evolution.

Speaking with Cade Metz in an interview published today in The New York Times, Hinton said he is worried that future versions of the technology may pose a threat to humanity because they often learn unexpected behavior from the vast amounts of data they analyze.

Hinton is only now voicing his concerns publicly. He was not among the signatories of the now famous open letter posted on the Internet in March ("Pause Giant AI Experiments: An Open Letter") urging a slowdown of AI development, whose signatories included Apple co-founder Steve Wozniak; Elon Musk, CEO of SpaceX, Tesla, and Twitter; Andrew Yang, 2020 presidential candidate and entrepreneur; and Rachel Bronson, president of the Bulletin of the Atomic Scientists, which sets the Doomsday Clock.

In a post-interview tweet, Hinton clarified that he left Google not to criticize the company, but so that he could speak freely about the dangers of AI.

In an interview with CNN, Hinton said he planned to "blow the whistle" on AI development and that he was worried about the technology becoming "smarter than us."

Hinton, along with Yoshua Bengio and Yann LeCun, won the 2018 ACM Turing Award, known as the "Nobel Prize of Computing," for their work laying the foundations for the current boom in artificial intelligence. The ACM (Association for Computing Machinery) is the world's largest educational and scientific computing society, bringing together computing educators, researchers, and professionals to share resources and address the field's challenges. The Turing Award is an annual prize generally recognized as the highest distinction in computer science. The award is named for Alan M. Turing, the British mathematician who articulated the mathematical foundations of computing. It comes with a $1 million prize, thanks to financial support provided by Google.

In a 1986 paper ("Learning Internal Representations by Error Propagation"), co-authored with David Rumelhart and Ronald Williams, Hinton demonstrated that the backpropagation algorithm (now the standard method for training feedforward artificial neural networks) allows neural networks to discover their own internal representations of data, making it possible to use neural nets to solve problems that had previously been thought to be beyond their reach.
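To give a concrete sense of what that result means, here is a minimal sketch of the idea, written in Python with NumPy rather than taken from the paper itself; the network shape (2-4-1), learning rate, and iteration count are arbitrary illustrative choices. It trains a tiny feedforward network on the XOR function, which a network with no hidden layer cannot learn, and the hidden units gradually settle on an internal representation that makes the problem solvable.

```python
# A minimal backpropagation sketch (illustrative only, not code from the 1986 paper):
# train a small sigmoid network on XOR by propagating output errors back through
# a hidden layer and updating the weights by gradient descent.
import numpy as np

rng = np.random.default_rng(0)

# XOR inputs and targets
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialized weights for a 2-4-1 network of sigmoid units
W1, b1 = rng.normal(0, 1, (2, 4)), np.zeros((1, 4))
W2, b2 = rng.normal(0, 1, (4, 1)), np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(20000):
    # Forward pass: compute hidden activations and the network output
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: propagate the output error back through the layers
    # (squared-error loss, so the error signal is (out - y) times the sigmoid derivative)
    err_out = (out - y) * out * (1 - out)
    err_h = (err_out @ W2.T) * h * (1 - h)

    # Gradient-descent updates derived from the propagated errors
    W2 -= lr * h.T @ err_out
    b2 -= lr * err_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ err_h
    b1 -= lr * err_h.sum(axis=0, keepdims=True)

# Outputs should approach the XOR targets [0, 1, 1, 0]
print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 3))
```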

About the Author

John K. Waters is the editor in chief of a number of Converge360.com sites, with a focus on high-end development, AI and future tech. He's been writing about cutting-edge technologies and the culture of Silicon Valley for more than two decades, and he's written more than a dozen books. He also co-scripted the documentary film Silicon Valley: A 100 Year Renaissance, which aired on PBS. He can be reached at jwaters@converge360.com.
