
In a First, This Robot Learned to Model Its Whole Body from Scratch without Human Assistance
Using funding from the likes of DARPA and Facebook, a team of engineers at Columbia University has created a robot that learned how to model its whole body from scratch without human assistance.
While the claims of Google’s “LaMDA” AI being sentient are probably overblown—although who knows?—machines becoming self-aware is still a possibility we all entertain. But while self-aware bots are probably a long way off (if they ever “arrive” at all), a team of engineers at Columbia University in New York has, for the first time ever, created a robot that’s able to learn a model of its entire body from scratch, without any human assistance. Although—much like a teenager on Snapchat—it does need a lot of external cameras to figure out its place in the world.
SciTech Daily reported on the self-visualizing robot, which looks like a simple robotic arm whose four joints give it four degrees of freedom. In a paper published in Science Robotics, the engineers describe how they were able to have the robot learn to image (or imagine?!) itself as an independent agent apart from its environment.
To develop the robot’s physical self-concept within a digital program, the engineers began by placing a series of cameras around the machine. They then used a novel neural network—a web of algorithms, loosely inspired by the structure of the human brain, that “trains” itself on large amounts of data to recognize relevant patterns—to learn to associate specific positions in space with specific motor movements and joint angles.
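For a sense of how a setup like that might look in code, here is a minimal sketch in PyTorch, assuming the self-model is a small network that takes the arm’s four joint angles plus a 3D query point and predicts whether the body occupies that point, with occupancy labels derived from the external cameras. This is an illustration, not the authors’ code; the class name, architecture, and hyperparameters are all hypothetical.

```python
# Minimal sketch (not the authors' code) of the kind of model described:
# an MLP mapping joint angles plus a 3D query point to an occupancy
# probability, trained on labels derived from the external cameras.
import torch
import torch.nn as nn

class SelfModel(nn.Module):  # hypothetical name
    def __init__(self, n_joints: int = 4, hidden: int = 256):
        super().__init__()
        # Input: four joint angles + an (x, y, z) query point.
        self.net = nn.Sequential(
            nn.Linear(n_joints + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),  # logit: "is this point inside the body?"
        )

    def forward(self, joint_angles, query_xyz):
        return self.net(torch.cat([joint_angles, query_xyz], dim=-1))

# Training pairs each pose with points the cameras label occupied or free.
model = SelfModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.BCEWithLogitsLoss()

def train_step(joint_angles, query_xyz, occupied):
    """One gradient step on a batch of (pose, point, 0/1 label) triples."""
    opt.zero_grad()
    loss = loss_fn(model(joint_angles, query_xyz), occupied)
    loss.backward()
    opt.step()
    return loss.item()
```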
The “query-based self-model” is realized through implicit neural representations to answer space occupancy questions: If the robot plans to move in this way, will the robot body occupy the spatial location at (x, y, z)? (3/n) pic.twitter.com/qQjHlKaGEj
— Boyuan Chen (@Boyuan__Chen) July 13, 2022
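As the tweet describes, the learned self-model answers point-wise occupancy questions. Under the same assumptions as the sketch above (the hypothetical SelfModel network; the pose and point values are made up), a single query might look something like this:

```python
# Hypothetical occupancy query, reusing the SelfModel sketch above:
# "If the robot moves to this pose, will its body occupy point (x, y, z)?"
import torch

planned_pose = torch.tensor([[0.3, -1.1, 0.8, 0.0]])  # four joint angles (radians)
query_point = torch.tensor([[0.25, 0.10, 0.40]])       # (x, y, z) in meters

with torch.no_grad():
    p_occupied = torch.sigmoid(model(planned_pose, query_point)).item()

print(f"P(body occupies point) = {p_occupied:.2f}")
```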
After the robot had learned its place in space, the engineers were able to “query” it to move to different positions, with the robot first pausing to project whether the movement would even be possible given obstacles or spatial constraints. The engineers were then able to have the robot seek out an object (in this instance a hanging red ball) and gently tap it, after “visualizing” an obstacle in its way (a big, red hanging box) and figuring out how to maneuver around it. A sketch of what such a pre-movement check could look like follows below.
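One plausible way to build that kind of feasibility check directly on occupancy queries is sketched below. It reuses the hypothetical SelfModel from above and invents an obstacle region for illustration; this is not the planning method from the paper, just one way such a pose-rejection test could work.

```python
# Hedged sketch of a pre-movement feasibility check using the self-model:
# reject a planned pose if the predicted body overlaps an obstacle region.
import torch

def pose_collides(model, pose, obstacle_min, obstacle_max,
                  n_samples=2048, threshold=0.5):
    """Sample random points inside the obstacle's bounding box and ask the
    self-model whether the body would occupy any of them at `pose`."""
    pts = obstacle_min + torch.rand(n_samples, 3) * (obstacle_max - obstacle_min)
    poses = pose.expand(n_samples, -1)  # same candidate pose for every point
    with torch.no_grad():
        occ = torch.sigmoid(model(poses, pts))
    return bool((occ > threshold).any())

# Example: check a candidate pose against a hanging box obstacle
# (all coordinates here are invented for illustration).
pose = torch.tensor([[0.3, -1.1, 0.8, 0.0]])
if pose_collides(model, pose,
                 obstacle_min=torch.tensor([0.2, 0.0, 0.3]),
                 obstacle_max=torch.tensor([0.4, 0.2, 0.6])):
    print("Plan rejected: predicted body overlaps the obstacle.")
```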
“We were really curious to see how the robot imagined itself,” Hod Lipson, professor of mechanical engineering and director of Columbia’s Creative Machines Lab, said in a press release. “But you can’t just peek into a neural network; it’s a black box,” Lipson added, referencing the fact that people can observe a neural network’s inputs and outputs but cannot see the processes (carried out by the network’s “hidden layers”) in between. In this case, for example, the engineers are certain the robot learned how to visualize itself in space, but they’re not entirely sure how it did so.
So we ask the robot to learn the same! It turns out that decomposing the robot self-body from the world and learning to model its entire 3D morphology is very powerful. The framework can potentially be scaled to more complex bodies, shapes, and tasks.(5/n) pic.twitter.com/MUuttPnG81
— Boyuan Chen (@Boyuan__Chen) July 13, 2022
Despite the fact that the engineers don’t have complete insight into how their robot learns, they were still able to develop a way to visualize how the robot predicted it would move, and where it would end up, after receiving a query. In the first tweet embedded above, study author Boyuan Chen, an assistant professor of engineering at Duke University, shows off this visualization. “It [is] a sort of gently flickering cloud that [appears] to engulf the robot’s three-dimensional body,” Lipson added in the press release. “As the robot [moves], the flickering cloud gently [follows] it.”
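One plausible way to produce that kind of “cloud” rendering, again assuming the hypothetical SelfModel above, is to densely query the model over a 3D grid at the current pose and plot the points predicted to be occupied; the grid bounds and threshold here are made up for illustration.

```python
# Hedged sketch of the "cloud" visualization: densely query the self-model
# over a 3D grid and scatter-plot the points predicted to be occupied.
import torch
import matplotlib.pyplot as plt

def occupancy_cloud(model, pose, lo=-0.5, hi=0.5, n=40, threshold=0.5):
    axes = torch.linspace(lo, hi, n)
    grid = torch.cartesian_prod(axes, axes, axes)  # (n^3, 3) query points
    poses = pose.expand(grid.shape[0], -1)
    with torch.no_grad():
        occ = torch.sigmoid(model(poses, grid)).squeeze(-1)
    return grid[occ > threshold]  # points predicted to be "inside" the body

cloud = occupancy_cloud(model, torch.tensor([[0.3, -1.1, 0.8, 0.0]]))
ax = plt.figure().add_subplot(projection="3d")
ax.scatter(cloud[:, 0], cloud[:, 1], cloud[:, 2], s=1)
plt.show()
```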
As for real-world applications, Columbia notes in its press release that “The work is part of Lipson’s decades-long quest to find ways to grant robots some form of self-awareness.” Lipson added that “Self-modeling is a primitive form of self-awareness” and that “If a robot, animal, or human has an accurate self-model, it can function better in the world, it can make better decisions, and it has an evolutionary advantage.”

Incidentally, the Defense Advanced Research Projects Agency (DARPA), Facebook, and Northrop Grumman may have some use-case ideas of their own, as they, along with several other institutions, funded this work.
Feature image: Lipson, et al. via Columbia University