Paper Reading: One-Shot Visual Imitation Learning via Meta-Learning

One-Shot Visual Imitation Learning via Meta-Learning

In order for a robot to be a generalist that can perform a wide range of jobs, it must be able to acquire a wide variety of skills quickly and efficiently in complex unstructured environments. High-capacity models such as deep neural networks can enable a robot to represent complex skills, but learning each skill from scratch then becomes infeasible. In this work, we present a meta-imitation learning method that enables a robot to learn how to learn more efficiently, allowing it to acquire new skills from just a single demonstration. Unlike prior methods for one-shot imitation, our method can scale to raw pixel inputs and requires data from significantly fewer prior tasks for effective learning of new skills. Our experiments on both simulated and real robot platforms demonstrate the ability to learn new tasks, end-to-end, from a single visual demonstration.

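The method builds on model-agnostic meta-learning (MAML): the meta-objective trains the policy's initial parameters so that a single gradient step on one demonstration's behavior-cloning loss already yields a policy that performs well on the rest of that task. The sketch below is only a rough illustration of this inner/outer-loop structure, not the paper's deep visual policy: the linear policy, the toy task sampler, and names like `bc_loss_grad` are hypothetical, and the outer update uses the first-order MAML approximation.

```python
import numpy as np

def bc_loss_grad(w, X, a):
    """MSE behavior-cloning loss and its gradient for a linear policy a_pred = X @ w."""
    err = X @ w - a
    return (err @ err) / len(a), 2.0 * X.T @ err / len(a)

def sample_task(rng, dim=5):
    """A toy 'task': an expert linear policy; demonstrations are (state, action) pairs."""
    w_star = rng.normal(size=dim)
    def demos(n):
        X = rng.normal(size=(n, dim))
        return X, X @ w_star
    return demos

rng = np.random.default_rng(0)
dim, inner_lr, outer_lr = 5, 0.1, 0.01
w_meta = np.zeros(dim)

for step in range(2000):
    demos = sample_task(rng, dim)

    # Inner loop: adapt to a SINGLE demonstration with one gradient step.
    X1, a1 = demos(1)
    _, g_inner = bc_loss_grad(w_meta, X1, a1)
    w_adapted = w_meta - inner_lr * g_inner

    # Outer loop: evaluate the adapted policy on held-out demonstrations of the
    # same task, and update the meta-parameters so that the one-step adaptation
    # itself improves (first-order approximation: the gradient through the
    # inner step is dropped).
    Xq, aq = demos(20)
    _, g_outer = bc_loss_grad(w_adapted, Xq, aq)
    w_meta -= outer_lr * g_outer

# At meta-test time, one demonstration from a new task suffices:
test_demos = sample_task(rng, dim)
Xd, ad = test_demos(1)
_, g = bc_loss_grad(w_meta, Xd, ad)
w_new = w_meta - inner_lr * g  # adapted one-shot policy for the new task
```

The full method, like MAML, differentiates through the inner adaptation step (a second-order term this first-order sketch drops) and meta-trains a convolutional policy end-to-end from raw pixels.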

Original post: https://www.cnblogs.com/feifanrensheng/p/14036828.html