Modular Adaptive Policy Selection for Multi-Task Imitation Learning through Task Division

International Conference on Robotics and Automation (ICRA) 2022

Abstract: Deep imitation learning requires many expert demonstrations, which can be hard to obtain, especially when many tasks are involved. However, different tasks often share similarities, so learning them jointly can benefit all of them and alleviate the need for many demonstrations. Joint multi-task learning, though, often suffers from negative transfer, where information that should remain task-specific is shared across tasks. In this work, we introduce a method that performs multi-task imitation while still allowing for task-specific features. It uses proto-policies as modules to divide the tasks into simple sub-behaviours that can be shared. The proto-policies operate in parallel and are adaptively chosen by a selector mechanism that is trained jointly with the modules. Experiments on different sets of tasks show that our method improves upon the accuracy of single agents, task-conditioned and multi-headed multi-task agents, as well as state-of-the-art meta-learning agents. We also demonstrate its ability to autonomously divide the tasks into both shared and task-specific sub-behaviours.
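
Architecturally, this corresponds to a set of parallel policy modules whose outputs are combined by a selector that is trained together with them on the imitation objective. Below is a minimal, hypothetical PyTorch sketch of that idea, not the paper's implementation: the module count, network sizes, the soft (weighted-sum) combination, and the behaviour-cloning loss are all assumptions made for the example.

import torch
import torch.nn as nn

class ProtoPolicy(nn.Module):
    # One proto-policy module: maps an observation to an action.
    def __init__(self, obs_dim, act_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, act_dim),
        )
    def forward(self, obs):
        return self.net(obs)

class Selector(nn.Module):
    # Selector: outputs a distribution over the K modules for each observation.
    def __init__(self, obs_dim, num_modules, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, num_modules),
        )
    def forward(self, obs):
        return torch.softmax(self.net(obs), dim=-1)

class ModularPolicy(nn.Module):
    # Parallel proto-policies whose actions are mixed by the selector weights.
    def __init__(self, obs_dim, act_dim, num_modules=10):
        super().__init__()
        self.protos = nn.ModuleList(ProtoPolicy(obs_dim, act_dim) for _ in range(num_modules))
        self.selector = Selector(obs_dim, num_modules)
    def forward(self, obs):
        w = self.selector(obs)                                     # (B, K) module weights
        acts = torch.stack([p(obs) for p in self.protos], dim=1)   # (B, K, A) module actions
        return (w.unsqueeze(-1) * acts).sum(dim=1)                 # (B, A) mixed action

# Joint behaviour-cloning step: selector and modules are updated together.
policy = ModularPolicy(obs_dim=39, act_dim=4)       # dimensions here are only placeholders
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
obs, expert_act = torch.randn(32, 39), torch.randn(32, 4)
loss = nn.functional.mse_loss(policy(obs), expert_act)
opt.zero_grad(); loss.backward(); opt.step()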

BibTeX
@inproceedings{antotsiou2022maps,
	title={Modular Adaptive Policy Selection for Multi-Task Imitation Learning through Task Division},
	author={Antotsiou, Dafni and Ciliberto, Carlo and Kim, Tae-Kyun},
	booktitle={Proceedings of the 2022 IEEE International Conference on Robotics and Automation (ICRA)}, 
	year={2022}
	}

Demo Videos

Demo of the MAPS network on the MT10 task set. It shows how the network automatically divided the tasks window-open, drawer-close, and drawer-open among 3 modules. In the video, module 9 has learnt the behaviour "reach and grab the handle", which is shared by all 3 tasks. Similarly, module 5 has learnt the behaviour "reach towards target moving forward"; it is shared between window-open and drawer-close, but it is not appropriate for drawer-open. To avoid negative transfer, drawer-open instead uses the task-specific module 7, which has learnt the behaviour "reach target moving backwards".
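
The task division shown in the video can also be inspected quantitatively by looking at the selector's output. As a hypothetical example building on the sketch above (module_usage and its arguments are illustrative names, not part of the released code), averaging the selector weights over the states visited in each task reveals which modules that task relies on:

import torch

# Hypothetical helper: average the selector weights over the states visited
# while solving one task, to see which modules that task relies on.
def module_usage(policy, task_obs):
    # task_obs: (N, obs_dim) tensor of observations from one task's rollouts
    with torch.no_grad():
        weights = policy.selector(task_obs)   # (N, K) weights per state
    return weights.mean(dim=0)                # (K,) average usage per module

# Comparing these profiles across window-open, drawer-close and drawer-open
# would show modules shared by several tasks (like module 9 in the demo)
# and task-specific ones (like module 7, used only by drawer-open).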