On today’s episode, we travel the full arc from the theoretical, and borderline philosophical, to the applied. Let’s start with the theory: embodied intelligence posits that the body, or physical form, plays an active and significant role in shaping an agent’s mind and cognitive capacities. Human intelligence, for example, is not just a function of our brains, but a combination of our brains, our bodies, and the environments in which we exist. But when it comes to designing artificial intelligence (AI), a physical form and an environment are typically not part of the equation. It’s a disembodied cognition.

Our guests, Li Fei-Fei and Surya Ganguli of the Stanford Institute for Human-Centered AI, set out to build what they call an “evolutionary playground”: a set of in silico experiments for exploring how embodied intelligence develops in AI and how it connects to the environment and to learning. They discuss with a16z general partner Vijay Pande and host Lauren Richardson how they created a suite of virtual environments in which agents evolve through a process that mimics aspects of Darwinian evolution. These agents, called unimals, or universal animals, start off as a single central node, and with each generation can add or subtract limbs and change properties of their physical forms, such as how flexible their joints are. Just as in real evolution, different forms arose based on the particularities of the environment, but what is really exciting is what Fei-Fei, Surya, and their colleagues discovered about the intelligence encoded in some of these forms, such as an increased ability to learn novel tasks.

Which brings us to the applied part of our discussion: these results provide new insights into how we might design robots capable of performing unique tasks, and into the possible limitations of disembodied AI models like GPT-3.
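To make the evolutionary loop described above a bit more concrete, here is a minimal, purely illustrative sketch in Python. The names (Unimal, Limb, mutate, fitness) and all the specifics are our own stand-ins, not the paper’s actual implementation; in particular, fitness here is a random placeholder, whereas in the actual work each morphology is scored by how well the agent learns tasks in a physics-simulated environment via reinforcement learning.

```python
import copy
import random
from dataclasses import dataclass, field


@dataclass
class Limb:
    """One body segment; its joint flexibility is one evolvable property."""
    joint_flexibility: float
    children: list = field(default_factory=list)


@dataclass
class Unimal:
    """A morphology is a tree of limbs growing from a single central node."""
    root: Limb


def new_unimal() -> Unimal:
    # Every unimal starts off as just a central node.
    return Unimal(root=Limb(joint_flexibility=0.5))


def all_limbs(limb: Limb) -> list:
    limbs = [limb]
    for child in limb.children:
        limbs.extend(all_limbs(child))
    return limbs


def mutate(unimal: Unimal) -> Unimal:
    """Apply one random change: add a limb, remove a limb, or tweak a joint."""
    limbs = all_limbs(unimal.root)
    target = random.choice(limbs)
    op = random.choice(["add", "remove", "tweak"])
    if op == "add":
        target.children.append(Limb(joint_flexibility=random.uniform(0.1, 1.0)))
    elif op == "remove" and target is not unimal.root:
        # Detach the chosen limb (and its subtree) from its parent.
        for limb in limbs:
            if target in limb.children:
                limb.children.remove(target)
                break
    else:
        target.joint_flexibility = min(
            1.0, max(0.0, target.joint_flexibility + random.gauss(0.0, 0.1))
        )
    return unimal


def fitness(unimal: Unimal) -> float:
    # Placeholder: the real experiments score each morphology by its
    # performance *after learning* a task (e.g., locomotion) in its environment.
    return random.random()


def evolve(generations: int = 10, population_size: int = 8) -> Unimal:
    population = [new_unimal() for _ in range(population_size)]
    for _ in range(generations):
        ranked = sorted(population, key=fitness, reverse=True)
        survivors = ranked[: population_size // 2]                 # selection
        offspring = [mutate(copy.deepcopy(p)) for p in survivors]  # variation
        population = survivors + offspring
    return max(population, key=fitness)


if __name__ == "__main__":
    best = evolve()
    print(f"best unimal has {len(all_limbs(best.root))} limbs")
```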
The results are described in the pre-print “Embodied Intelligence via Learning and Evolution” posted on arXiv.org.