Artificial intelligence is often thought of as disembodied: a mind like a program, floating in a digital void. But human minds are deeply intertwined with our bodies, and an experiment with virtual creatures performing tasks in simulated environments suggests that AI may benefit from a mind and body that develop together.
Stanford scientists were curious about the physical-mental interplay in our own evolution from blobs to tool-using apes. Could it be that the brain is influenced by the capabilities of the body, and vice versa? It has been suggested before — over a century ago, in fact — and certainly it’s obvious that with a grasping hand one learns more quickly to manipulate objects than with a less differentiated appendage.
It’s hard to know whether the same could be said for an AI, since its development is far more structured. Yet the questions such a concept raises are compelling: could an AI learn and adapt to the world better if it had evolved to do so from the start?
The experiment they designed is similar in some ways to simulated environments that have been used for decades to test evolutionary algorithms. You set up a virtual space and drop simple simulated creatures into it, just a few connected geometric shapes that move in random ways. Out of a thousand such writhing shapes, you pick the ten that writhed the farthest and make a thousand variations on those, and repeat over and over. Pretty soon you have a handful of polygons doing a pretty passable walk across the virtual surface.
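If you're curious what that loop looks like in practice, here is a minimal sketch in Python of the generic select-and-mutate approach described above. The names and the stand-in fitness function are invented purely for illustration; the Stanford team's actual system runs a full physics simulation with far more sophisticated creatures.

```python
import random

# A toy version of the classic evolutionary loop described above.
# Creature parameters, mutation and fitness are all hypothetical stand-ins.

POPULATION = 1000   # creatures per generation
SURVIVORS = 10      # top performers kept each round

def random_creature():
    """A creature is just a list of joint parameters chosen at random."""
    return [random.uniform(-1.0, 1.0) for _ in range(12)]

def mutate(creature):
    """Copy a parent and jitter its parameters slightly."""
    return [p + random.gauss(0.0, 0.1) for p in creature]

def distance_walked(creature):
    """Placeholder fitness; a real setup would run a physics simulation."""
    return sum(creature)

population = [random_creature() for _ in range(POPULATION)]
for generation in range(50):
    # Keep the creatures that "writhed the farthest"...
    ranked = sorted(population, key=distance_walked, reverse=True)
    parents = ranked[:SURVIVORS]
    # ...and build the next generation from variations on them.
    population = [mutate(random.choice(parents)) for _ in range(POPULATION)]
```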
That’s all old hat, though: as the researchers explain, they needed to make their simulation more robust and variable. They weren’t simply trying to make virtual creatures that walk around, but to investigate how those creatures learned to do what they do, and whether some learn better or faster than others.
To find out, the team created a simulation similar to the older ones and dropped in their sims, which they called “unimals” (for “universal animals”… we’ll see if this terminology takes off), at first just to learn to walk. The simple shapes had a spherical “head” and a few branchlike jointed limbs, with which they developed a number of interesting walks. Some stumbled forward, some developed a lizard-like articulated walk, and others a flailing but effective style reminiscent of an octopus on land.
So far, so similar to older experiments, but there the similarities more or less end.
Some of these unimals grew up on different home planets, as it were, with undulating hills or low barriers for them to clamber over. And in the next phase unimals from these different terrains competed on more complex tasks to see whether, as is often held, adversity is the mother of adaptability.
“Almost all the prior work in this field has evolved agents on a simple flat terrain. Moreover, there is no learning in the sense that the controller and/or behavior of the agent is not learnt via direct sensorimotor interactions with the environment,” explained co-author Agrim Gupta to TechCrunch — in other words, they evolved by surviving but didn’t really learn by doing. “This work for the first time does simultaneous evolution and learning in complex environments like terrains with steps, hills, ridges and move beyond to do manipulation in these complex environments.”
The top 10 unimals from each environment were set loose on tasks ranging from navigating new obstacles to moving a ball to a goal, pushing a box up a hill and patrolling between two points. Here it was that these “gladiators” really showed their virtual mettle. Unimals that had learned to walk on variable terrain learned their new tasks faster and performed them better than their flatlander cousins.
“In essence, we find that evolution rapidly selects morphologies that learn faster, thereby enabling behaviors learned late in the lifetime of early ancestors to be expressed early in the lifetime of their descendants,” write the authors in the paper, published today in the journal Nature.
It’s not just that they learned to learn faster; the evolutionary process selected body types that would allow them to adapt faster and apply lessons more quickly. On flat terrain, an octopus flop might get you to the finish line just as fast, but hills and ridges selected for a body configuration that was fast, stable, and adaptable. Bringing this body into the gladiatorial arena gave those unimals coming from the school of hard knocks a leg up on the competition. Their versatile bodies were better able to apply the lessons their minds were putting to the test — and soon they left their floppier competition in the dust.
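To make that distinction concrete, here is a rough conceptual sketch, again with invented names rather than the researchers' implementation, of how lifetime learning nests inside evolution: each body first trains on its task, and only its performance after learning decides which bodies get to reproduce.

```python
import random

# Conceptual only: an outer evolutionary loop over body plans with an
# inner "lifetime" learning loop. learn_to_walk() and evaluate() are
# hypothetical placeholders for RL training and a physics-based test.

def learn_to_walk(body, environment, steps=1000):
    """Stand-in for reinforcement learning during one creature's lifetime."""
    controller = {"skill": 0.0}
    for _ in range(steps):
        controller["skill"] += 0.001 * random.random()  # gradual improvement
    return controller

def evaluate(body, controller, environment):
    """Stand-in for measuring task performance after learning."""
    return controller["skill"] + body["adaptability"]

def mutate(body):
    return {"adaptability": body["adaptability"] + random.gauss(0.0, 0.05)}

population = [{"adaptability": random.random()} for _ in range(100)]
for generation in range(20):
    scored = []
    for body in population:
        trained = learn_to_walk(body, "hills")           # inner loop: learning
        scored.append((evaluate(body, trained, "hills"), body))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    survivors = [body for _, body in scored[:10]]
    # Outer loop: selection acts on post-learning performance, so bodies
    # that make learning easier win out over generations.
    population = [mutate(random.choice(survivors)) for _ in range(100)]
```

Run for enough generations, that selection pressure is exactly what the paper reports: bodies that make learning easier come to dominate the population.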
What does this all signify, besides providing a few entertaining GIFs of 3D stick figures galloping over virtual terrain? As the paper puts it, the experiment “opens the door to performing large-scale in silico experiments to yield scientific insights into how learning and evolution cooperatively create sophisticated relationships between environmental complexity, morphological intelligence, and the learnability of control tasks.”
Say you have a relatively complicated task you’d like to automate — climbing stairs with a four-legged robot, for instance. You could design the movements manually, or combine custom ones with AI-generated ones, but perhaps the best solution would be to have an agent evolve its own movement from scratch. The experiment shows that there is potentially a real benefit to having the body and the mind controlling it evolve in tandem.
If you’re code-savvy, you can get the whole operation up and running on your own hardware: the research group has made all the code and data freely available on GitHub. And make sure you’ve got your high-end computing cluster or cloud container ready to go, too: “The default parameters assume that you are running the code on 16 machines. Please ensure that each machine has a minimum of 72 CPUs.”