A man often referred to as the “Godfather” of artificial intelligence (AI) has quit his job at Google this week over concerns about where AI is headed.
Geoffrey Hinton is best known for his work on deep learning and neural networks, for which he shared the $1 million Turing Award with Yoshua Bengio and Yann LeCun in 2019. At the time, he told CBC that he was pleased to have won because “for a long time, neural networks were regarded as flaky in computer science. This was an acceptance that it’s not actually flaky.”
Since then, with the rise of AI programs like ChatGPT and AI image generators, among many others, the method has gone mainstream. Hinton, who began working at Google in 2013 after the tech giant acquired his company, now believes the technology could pose dangers to society and has quit the company so that he can talk about them openly, the New York Times reports.
In the short term, Hinton is worried that AI-generated text, images, and videos could leave people unable “to know what is true anymore”, and about potential disruption to the job market. In the longer term, he has much bigger concerns about AI eclipsing human intelligence and learning unexpected behaviors.
“Right now, they’re not more intelligent than us, as far as I can tell,” he told the BBC. “But I think they soon may be.”
Hinton, 75, told the BBC that AI is currently behind humans in terms of reasoning, though it does already do “simple” reasoning. In terms of general knowledge, however, he believes it is already far ahead of humans.
One reason, he suggested, is that digital systems can run many copies of the same model, each learning separately while sharing what it learns with the rest. “All these copies can learn separately but share their knowledge instantly,” he told the BBC. “So it’s as if you had 10,000 people and whenever one person learnt something, everybody automatically knew it.”
One concern, he told the New York Times, is that “bad actors” could use AI for their own nefarious ends. Unlike threats such as nuclear weapons, which require infrastructure and monitored substances like uranium, AI research by such actors would be much more difficult to monitor.
“This is just a kind of worst-case scenario, kind of a nightmare scenario,” he told the BBC. “You can imagine, for example, some bad actor like Putin decided to give robots the ability to create their own sub-goals.”
Sub-goals such as “I need to get more power” could lead to problems we haven’t yet imagined, let alone worked out how to deal with. As the philosopher Nick Bostrom, director of the Future of Humanity Institute at the University of Oxford, explained in 2014, even a simple instruction to maximize the number of paper clips in the world could spawn unintended sub-goals with terrible consequences for the AI’s creators.
“The AI will realize quickly that it would be much better if there were no humans because humans might decide to switch it off,” he explained to HuffPost in 2014. “Because if humans do so, there would be fewer paper clips. Also, human bodies contain a lot of atoms that could be made into paper clips. The future that the AI would be trying to gear towards would be one in which there were a lot of paper clips but no humans.”
Hinton, who told the BBC that his age played a part in his decision to retire, believes that the potential upsides of AI are huge, but that the technology needs regulating, especially given the current rate of progress in the field. Though he left in order to speak about AI’s potential pitfalls, he believes Google has so far acted responsibly in the field. With a global race to develop the technology underway, it’s probable that, left unregulated, not everyone developing it will act as responsibly, and humanity will have to deal with the consequences.