Learning is a critical aspect of robotics, and researchers at the Toyota Research Institute (TRI) are making significant advances in this area. At TechCrunch Disrupt’s Hardware Stage, TRI showed off a system that can teach a robot a new skill overnight. The result is significant because traditional machine learning methods need millions of training examples, which is impractical when a physical robot must perform each one. TRI’s system, by contrast, needs only dozens of diverse demonstrations.
The system combines traditional robot learning techniques with diffusion models of the kind used in generative image AI, and TRI has successfully trained robots on 60 skills and counting with this method. Existing models cannot do everything on their own, however. Large language models have been used to give robots a form of cognitive intelligence, letting them interpret high-level commands, but they fall short on fine-grained physical tasks such as plugging in a USB device or picking up a tissue.
TRI aims to bridge this gap by developing a system that can program skills that work in diverse, unstructured environments. This matters because robots often struggle in less-structured settings: navigating a warehouse is easier than navigating a road or a home. TRI’s focus is on creating robots that can adapt to changes in their environment, whether that means moved furniture or unexpected messes.
Teaching the robots starts with teleoperation, in which a human operator remotely guides the robot through demonstrations of the task. The system records all of the available data, including vision and force feedback, to build a complete picture of the task. Force feedback is especially important for tasks that depend on proper handling, such as gripping a tool correctly.
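To make the data-collection step concrete, here is a minimal sketch of how a teleoperated demonstration might be logged as synchronized observation/action pairs. The data structures, field names, and sensor interfaces (`camera`, `force_sensor`, `robot`) are illustrative assumptions, not TRI’s actual format.

```python
# Hypothetical sketch of logging a teleoperated demonstration.
# Field names, shapes, and sensor interfaces are assumptions for
# illustration -- they are not TRI's actual data pipeline.
from dataclasses import dataclass, field
from typing import List
import numpy as np

@dataclass
class DemoStep:
    rgb_image: np.ndarray        # camera frame, e.g. (H, W, 3) uint8
    wrist_force: np.ndarray      # 6-DoF force/torque reading at the wrist
    joint_positions: np.ndarray  # arm joint angles at this timestep
    operator_action: np.ndarray  # command issued by the human teleoperator

@dataclass
class Demonstration:
    task_name: str
    steps: List[DemoStep] = field(default_factory=list)

def record_step(demo: Demonstration, camera, force_sensor, robot, action) -> None:
    """Append one synchronized observation/action pair to the demonstration."""
    demo.steps.append(DemoStep(
        rgb_image=camera.read(),
        wrist_force=force_sensor.read(),
        joint_positions=robot.joint_state(),
        operator_action=np.asarray(action),
    ))
```

A few dozen such demonstrations, each a list of these observation/action pairs, would then serve as the training set for the overnight learning step described next.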
After the initial demonstrations, the system’s neural networks continue learning overnight. This is done with a diffusion policy, which generates robot behavior by representing it as a conditional denoising diffusion process. The approach has shown promising results in experiments: success rates on tasks like flipping pancakes and rolling dough are significantly higher when tactile sensing is included.
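For readers curious what “conditional denoising diffusion” means in practice, the sketch below shows the general sampling loop such a policy uses: start from random noise over a short sequence of actions and repeatedly denoise it, conditioning each step on the robot’s observations. The network `noise_model`, the noise schedule, and the dimensions are illustrative assumptions rather than TRI’s implementation.

```python
# Minimal sketch of generating actions with a conditional denoising
# diffusion process. The trained noise-prediction network, the schedule,
# and the dimensions are assumptions, not TRI's code.
import torch

def sample_actions(noise_model, obs_embedding, action_dim=7, horizon=16, steps=100):
    """Denoise Gaussian noise into a short action trajectory, conditioning
    every denoising step on an embedding of the robot's observations."""
    betas = torch.linspace(1e-4, 0.02, steps)        # assumed linear noise schedule
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)

    actions = torch.randn(1, horizon, action_dim)    # start from pure noise
    for k in reversed(range(steps)):
        # Predict the noise in the current actions, given the diffusion
        # step index and the observation embedding.
        eps = noise_model(actions, torch.tensor([k]), obs_embedding)
        coef = (1 - alphas[k]) / torch.sqrt(1 - alpha_bars[k])
        actions = (actions - coef * eps) / torch.sqrt(alphas[k])
        if k > 0:
            # Add a small amount of noise back in, as in standard DDPM sampling.
            actions = actions + torch.sqrt(betas[k]) * torch.randn_like(actions)
    return actions  # (1, horizon, action_dim) denoised action trajectory
```

The appeal of this formulation is that the same generative machinery used to turn noise into images can turn noise into smooth, multimodal action sequences, which is why only dozens of demonstrations can be enough to fit a usable policy.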
In conclusion, TRI’s advancements in robot learning are pushing the boundaries of what robots can achieve. By teaching robots new skills overnight, researchers are bringing us closer to a future where robots can adapt and learn in real-world scenarios.
Sources:
– TechCrunch article, “Toyota’s TRI shows off advances in robot learning, talks plans to productize research”