A new learning method developed by researchers at Carnegie Mellon University (CMU) allows robots to learn directly from videos of human interaction and generalize that information to new tasks, helping them learn to do household chores. The method, called WHIRL (In-the-Wild Human Imitating Robot Learning), lets a robot observe a task being performed, gather video data, and eventually learn to complete the job itself.
The research was presented at the Robotics: Science and Systems conference in New York.
Imitation as a way to learn
Shikhar Bahl is a Ph.D. student at the Robotics Institute (RI) in Carnegie Mellon University's School of Computer Science.
“Imitation is a great way to learn,” Bahl said. “Getting robots to actually learn by directly watching humans remains an unsolved problem in the field, but this work takes an important step to enable that capability.”
Bahl worked alongside Deepak Pathak and Abhinav Gupta, both RI faculty members. The team attached a camera and their software to an off-the-shelf robot, which learned to perform more than 20 tasks, from opening and closing appliances to removing a trash bag from the trash can. Each time, the robot watched a human perform the task before attempting it itself.
Pathak is an assistant professor at RI.
“This work presents a way to bring robots into the home,” Pathak said. “Instead of waiting for robots to be programmed or trained to successfully perform different tasks before deploying them in people’s homes, this technology allows us to deploy the robots and teach them how to perform tasks, while adapting to their surroundings and improving just by looking.”
WHIRL versus current methods
Most current methods for teaching a robot a task rely on imitation or reinforcement learning. With imitation learning, humans manually operate a robot to demonstrate a task, which must be repeated several times before the robot learns it. With reinforcement learning, the robot is usually trained on millions of examples in simulation before that training is adapted to the real world.
Although both approaches are effective for teaching a robot a single task in a structured environment, they are difficult to scale and deploy. With WHIRL, by contrast, a robot can learn from any video of a human performing a task. It scales easily, is not limited to a specific task, and can work in home environments.
WHIRL allows robots to perform tasks in their natural environments. While the robot's first attempts at a task usually ended in failure, it could improve quickly after just a few successes. The robot doesn't always perform a task with the same movements as a human, because its parts move differently, but the end result of completing the task is the same.
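The paper's actual algorithm is more involved than the article describes, but the description above suggests a simple outer loop: extract a prior from the human video, let the robot attempt the task, and keep improving from its own attempts, judging success by the task's end result rather than by copying the human's motion. The sketch below illustrates that loop in Python; every function name, parameter, and the scoring rule are hypothetical placeholders for illustration, not WHIRL's real implementation.

```python
"""Conceptual sketch of a WHIRL-style learning loop, inferred only from this
article's description. All names below are hypothetical placeholders."""

import random


def extract_human_prior(video_frames):
    """Stand-in for perception: distill the human demonstration video into an
    initial guess for the robot's policy (here, a few waypoint parameters)."""
    return [0.0] * 5  # hypothetical placeholder prior


def rollout(policy_params):
    """Stand-in for executing the policy on the robot and scoring the outcome.
    Per the article, success is judged by the end result of the task, not by
    how closely the robot mimics the human's movements."""
    return -sum(p * p for p in policy_params) + random.gauss(0, 0.1)


def whirl_style_loop(video_frames, n_attempts=20):
    """Explore around the video-derived prior, keeping the best-scoring
    attempt as the new policy guess (self-improvement from own attempts)."""
    params = extract_human_prior(video_frames)
    best_score = rollout(params)
    for _ in range(n_attempts):
        candidate = [p + random.gauss(0, 0.2) for p in params]
        score = rollout(candidate)
        if score > best_score:
            params, best_score = candidate, score
    return params


if __name__ == "__main__":
    learned = whirl_style_loop(video_frames=[])
    print("learned policy parameters:", learned)
```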
“To scale robotics in the wild, data must be reliable and stable, and robots must improve in their environments by training on their own,” Pathak said.