
AI Researcher Calls For All AI Experiments To Be Shut Down Immediately

An open letter signed by artificial intelligence (AI) researchers, directors of institutes, and tech CEOs, including Elon Musk, has asked for all AI experiments to be paused immediately, citing the “profound risks to society and humanity” should an advanced AI be created without proper management and planning.

Meanwhile, another researcher writing in Time argues that this isn’t going far enough and that we need to “shut it all down” and prohibit certain tech if humanity is to survive long term.


The open letter says that an “AI summer”, in which all labs pause any work on systems more powerful than OpenAI’s GPT-4, is needed, as in recent months AI researchers have been in an “out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.”

During the pause, the open letter asks AI labs and experts to come together to develop shared safety protocols for designing AI, which would be overseen by independent outside experts. It also suggests AI researchers should work with policymakers to create systems of oversight for AI, as well as smaller practical steps such as watermarking AI-generated images.

“Humanity can enjoy a flourishing future with AI. Having succeeded in creating powerful AI systems, we can now enjoy an ‘AI summer’ in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt,” the letter concludes. 

“Society has hit pause on other technologies with potentially catastrophic effects on society. We can do so here. Let’s enjoy a long AI summer, not rush unprepared into a fall.”


The letter was signed by researchers from Google and DeepMind, as well as Apple co-founder Steve Wozniak.

Some have called the letter a PR exercise, one that talks up the power of the technology and its hypothetical future dangers while failing to address the real-world problems created by current (and near-future) AI systems.

However, for Eliezer Yudkowsky, an American computer scientist and lead researcher at the Machine Intelligence Research Institute, the letter doesn’t go far enough.

“Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die,” he writes in a piece for Time, likening humanity competing with AI to everyone from the 11th Century attempting to fight everyone from the 21st Century.


Yudkowsky believes we are currently far behind where we would need to be to create an AI safely – one that won’t eventually lead to humanity’s demise – and that catching up could take 30 years. He proposes limiting the computing power available to people training AI, then gradually lowering that limit as algorithms become more efficient, to compensate. Essentially, though, his policy is to “shut it all down”.

“Progress in AI capabilities is running vastly, vastly ahead of progress in AI alignment or even progress in understanding what the hell is going on inside those systems,” he writes. “If we actually do this, we are all going to die.”

