Is artificial intelligence (AI) conscious? Would we even be able to tell if it was? These are some of the questions that researchers in AI and neuroscience are grappling with. There is still a great deal we don't know about consciousness, which makes the quest to understand it in AI all the more complex.
A preprint report released a few months ago set out to develop indicators that could be used to establish whether machine learning software has become conscious. There is as yet no single theory of consciousness – of either how it works or how it arises – so the indicators are drawn from several leading theories and treated as useful markers of consciousness, though they are not exhaustive.
The team was made up of neuroscientists, computer scientists, and philosophers. One of the approaches used was Global Workspace Theory, which is less a complete account of consciousness than a description of how humans and animals might think and perform tasks: many specialised processes run independently and in parallel, while a shared "workspace" broadcasts selected information and unifies them into a single stream of thought.
There are also indicators drawn from other hypotheses, such as higher-order theory. According to this idea, for a being to be conscious it must be aware of its own thoughts and mental processes. The team also used predictive processing – the ability to anticipate what happens next, something current AI systems already demonstrate in part – as well as other approaches. They then looked at how current AIs fare against these indicators.
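To make the general idea concrete, here is a minimal, hypothetical sketch in Python of how an indicator-based checklist might be tallied against a system. The indicator names, groupings, and scoring scheme below are illustrative assumptions for this sketch, not the report's actual criteria or method.

```python
# Hypothetical sketch: score a system against indicator properties grouped
# by theory. The indicators listed here are loose paraphrases chosen for
# illustration, not the report's own list.
INDICATORS = {
    "global_workspace": [
        "parallel specialised modules",
        "limited-capacity workspace that broadcasts information",
    ],
    "higher_order": [
        "representations of the system's own internal states",
    ],
    "predictive_processing": [
        "generates predictions of upcoming input and updates on error",
    ],
}


def assess(system_properties: set[str]) -> dict[str, float]:
    """Return, per theory, the fraction of its indicators the system satisfies."""
    scores = {}
    for theory, indicators in INDICATORS.items():
        satisfied = sum(1 for ind in indicators if ind in system_properties)
        scores[theory] = satisfied / len(indicators)
    return scores


# Example: a hypothetical language model that predicts upcoming input
# but has no workspace architecture and no model of its own states.
example_system = {"generates predictions of upcoming input and updates on error"}
print(assess(example_system))
# -> {'global_workspace': 0.0, 'higher_order': 0.0, 'predictive_processing': 1.0}
```

A checklist like this only flags which theory-derived properties a system exhibits; it does not, on its own, settle whether the system is conscious.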
“Our analysis suggests that no current AI systems are conscious, but also suggests that there are no obvious technical barriers to building AI systems which satisfy these indicators,” the team wrote in the report.
Understanding consciousness in AI, or at least trying to pin down the criteria that would tell us whether it has developed consciousness, might even help us understand what consciousness is in the first place.
“It’s a huge question,” co-author Liad Mudrik previously told IFLScience in our podcast, The Big Questions. “I can only tell you that the search is fascinating. People have been trying to experiment with it and have been trying to think about it. As I said before, I think one of the reasons projects that try to arbitrate within theories are so important is because they might also get us closer to developing clear criteria about what is and what is not conscious.”
It is possible that consciousness could emerge as an unintended consequence of complexity. In animals, consciousness has been associated with greater capabilities, so more capable AIs might simply end up being conscious without anyone planning to gift it to them.
The report has been posted as a preprint on arXiv.