
In a First, This Robot Learned to Model Its Whole Body from Scratch without Human Assistance


Using funding from the likes of DARPA and Facebook, a team of engineers at Columbia University has created a robot that learned how to model its whole body from scratch without human assistance.


While the claims of Google’s “LaMDA” AI being sentient are probably overblown—although who knows?—machines becoming self-aware is still a possibility we all entertain. But while self-aware bots are probably a long way off (if they ever “arrive” at all), a team of engineers at Columbia University in New York has, for the first time ever, created a robot that’s able to learn a model of its entire body from scratch, without any human assistance. Although—much like a teenager on Snapchat—it does need a lot of external cameras to figure out its place in the world.

SciTech Daily reported on the self-visualizing robot, which looks like a simple robot arm with four degrees of freedom thanks to four joints. In a paper published in Science Robotics, the engineers describe how they were able to have the robot learn how to image (or imagine?!) itself as an independent agent apart from its environment.

To develop the robot’s physical self-concept within a digital program, the engineers began by placing a series of cameras around the machine itself. They then used a novel neural network (a network of algorithms, loosely inspired by the structure of the human brain, that takes in large amounts of data and trains itself to recognize relevant patterns in that data) to learn to associate specific positions in space with specific motor movements and joint angles.
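Neither the press coverage nor this article spells out the network’s exact architecture, but the general idea can be sketched in a few lines of code. The snippet below is an illustrative assumption, not the team’s actual implementation: a small PyTorch network that takes the arm’s four joint angles plus a 3D query point and predicts whether that point is occupied by the robot’s body, trained on occupied/free labels that would come from the external cameras. Every name in it (SelfModel, train_step, and so on) is hypothetical.

```python
# Minimal sketch (not the authors' code): a self-model as an occupancy
# function. Given the arm's joint angles and a 3D query point, the network
# predicts the probability that the point lies inside the robot's body.
# Training labels are assumed to come from the external cameras, which can
# tell whether a sampled point was occupied by the arm at a given pose.
import torch
import torch.nn as nn

class SelfModel(nn.Module):
    def __init__(self, num_joints: int = 4, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_joints + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),  # logit for "this point is inside the body"
        )

    def forward(self, joint_angles: torch.Tensor, points: torch.Tensor) -> torch.Tensor:
        # joint_angles: (batch, num_joints), points: (batch, 3)
        x = torch.cat([joint_angles, points], dim=-1)
        return torch.sigmoid(self.net(x)).squeeze(-1)

# Hypothetical training-loop step, with camera-derived occupancy labels.
model = SelfModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.BCELoss()

def train_step(joint_angles, points, occupied):
    optimizer.zero_grad()
    pred = model(joint_angles, points)
    loss = loss_fn(pred, occupied)
    loss.backward()
    optimizer.step()
    return loss.item()
```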

After the robot learned about its place in space, the engineers were able to “query” it to move to different positions, with the robot first pausing to predict whether the movement would even be possible given obstacles or spatial constraints. The engineers were then able to have the robot seek out an object (in this instance a hanging red ball) and gently tap it, after “visualizing” an obstacle in its way (a big, red hanging box) and figuring out how to avoid it.
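What that pre-movement check might look like is easier to see in code than in prose. The sketch below is, again, an assumption layered on the hypothetical SelfModel above rather than the team’s published planner: before executing a motion, the self-model is queried at points sampled inside the obstacle’s volume for every joint configuration along the planned path, and the plan is rejected if the body is predicted to occupy any of them.

```python
# Minimal sketch (an assumption, not the published method): screening a
# candidate motion with the learned self-model before executing it.
import torch

def motion_is_feasible(model, trajectory, obstacle_points, threshold=0.5):
    """trajectory: (steps, num_joints) joint angles along the planned motion.
    obstacle_points: (n, 3) points sampled inside the obstacle's volume."""
    with torch.no_grad():
        for q in trajectory:
            # Ask the self-model whether the body would occupy any obstacle
            # point at this joint configuration.
            q_batch = q.unsqueeze(0).expand(obstacle_points.shape[0], -1)
            occupancy = model(q_batch, obstacle_points)
            if (occupancy > threshold).any():
                return False  # predicted collision with the obstacle
    return True

# Usage: reject any plan the self-model predicts would sweep through the box.
# feasible = motion_is_feasible(model, planned_trajectory, box_points)
```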

“We were really curious to see how the robot imagined itself,” Hod Lipson, professor of mechanical engineering and director of Columbia’s Creative Machines Lab, said in a press release. “But you can’t just peek into a neural network; it’s a black box,” he added, referring to the fact that with neural networks people can observe the inputs and outputs but cannot see the processes (carried out by “hidden layer” algorithms) in between. In this case, for example, the engineers are certain the robot learned how to visualize itself in space, but they’re not entirely sure how it did so.

Despite the fact that the engineers don’t have complete insight into how their robot learns, they were still able to develop a way to visualize how it predicted where it would move, and end up, after receiving a query. Study author Boyuan Chen, an assistant professor of engineering at Duke University, showed off this visualization in a tweet. “It [is] a sort of gently flickering cloud that [appears] to engulf the robot’s three-dimensional body,” Lipson added in the press release. “As the robot [moves], the flickering cloud gently [follows] it.”
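One plausible way to produce a cloud like that (a guess at the mechanics, not a description of the authors’ visualization code) is to query the hypothetical self-model sketched earlier on a dense grid of 3D points at the robot’s current joint angles and keep only the points the network believes belong to its body; rendered frame by frame, those surviving points would form a cloud that tracks the arm as it moves.

```python
# Minimal sketch (again an assumption): turning the self-model's predictions
# into a point cloud of the robot's "imagined" body at its current pose.
import torch

def predicted_body_cloud(model, joint_angles, grid_size=32, extent=1.0, threshold=0.5):
    # Build a dense grid of 3D query points around the robot.
    axis = torch.linspace(-extent, extent, grid_size)
    xs, ys, zs = torch.meshgrid(axis, axis, axis, indexing="ij")
    points = torch.stack([xs, ys, zs], dim=-1).reshape(-1, 3)
    # Evaluate the self-model at every grid point for the current joint angles.
    q = joint_angles.unsqueeze(0).expand(points.shape[0], -1)
    with torch.no_grad():
        occupancy = model(q, points)
    # Keep only the points the network considers part of the body.
    return points[occupancy > threshold]  # (k, 3) cloud to plot each frame
```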

As for real-world applications, Columbia notes in its press release that “The work is part of Lipson’s decades-long quest to find ways to grant robots some form of self-awareness.” Lipson added that “Self-modeling is a primitive form of self-awareness” and that “If a robot, animal, or human has an accurate self-model, it can function better in the world, it can make better decisions, and it has an evolutionary advantage.”

Image: Lipson, et al. via Columbia University

Incidentally, the Defense Advanced Research Projects Agency (DARPA), Facebook, and Northrop Grumman may have some use-case ideas of their own, as they, along with several other institutions, funded this work.


Feature image: Lipson, et al. via Columbia University

