Publications of Freek Stulp


Learning Motion Primitive Goals for Robust Manipulation
Freek Stulp, Evangelos Theodorou, Mrinal Kalakrishnan, Peter Pastor, Ludovic Righetti, and Stefan Schaal. Learning Motion Primitive Goals for Robust Manipulation. In International Conference on Intelligent Robots and Systems (IROS), 2011.
Download
[PDF] (1.6 MB)
Abstract
Applying model-free reinforcement learning to manipulation remains challenging for several reasons. First, manipulation involves physical contact, which causes discontinuous cost functions. Second, in manipulation, the end-point of the movement must be chosen carefully, as it represents a grasp which must be adapted to the pose and shape of the object. Finally, there is uncertainty in the object pose, and even the most carefully planned movement may fail if the object is not at the expected position. To address these challenges we 1) present a simplified, computationally more efficient version of our model-free reinforcement learning algorithm PI2; 2) extend PI2 so that it simultaneously learns shape parameters and goal parameters of motion primitives; 3) use shape and goal learning to acquire motion primitives that are robust to object pose uncertainty. We evaluate these contributions on a manipulation platform consisting of a 7-DOF arm with a 4-DOF hand.
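Illustrative Sketch
The second contribution, simultaneously learning shape and goal parameters with the simplified PI2 update, boils down to reward-weighted averaging of exploration noise applied to both parameter sets. The sketch below illustrates this idea only; the rollout execution, cost function, and the parameters sigma and h are placeholders and not the authors' implementation.

import numpy as np

# Minimal sketch of a simplified PI2-style update (reward-weighted averaging),
# applied jointly to DMP shape parameters and goal parameters.
# rollout_cost(theta_k, goal_k) -> scalar cost of executing the motion
# primitive with perturbed parameters (placeholder supplied by the caller).
def pi2_update(theta, goal, rollout_cost, n_rollouts=10, sigma=0.05, h=10.0):
    # Sample exploration noise for shape and goal parameters.
    eps_theta = sigma * np.random.randn(n_rollouts, theta.size)
    eps_goal = sigma * np.random.randn(n_rollouts, goal.size)

    # Execute one rollout per noise sample and record its cost.
    costs = np.array([rollout_cost(theta + eps_theta[k], goal + eps_goal[k])
                      for k in range(n_rollouts)])

    # Map costs to probabilities: low-cost rollouts receive high weight.
    c_min, c_max = costs.min(), costs.max()
    P = np.exp(-h * (costs - c_min) / (c_max - c_min + 1e-10))
    P /= P.sum()

    # Reward-weighted averaging of the exploration noise updates both
    # the shape parameters and the goal in the same way.
    return theta + P @ eps_theta, goal + P @ eps_goal

In this simplified form the update needs no value-function estimate or gradient, which is what makes it computationally cheap; the goal update is what lets the learned primitive compensate for object pose uncertainty.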
BibTeX
@InProceedings{stulp11learningmotion,
  title                    = {Learning Motion Primitive Goals for Robust Manipulation},
  author                   = {Freek Stulp and Evangelos Theodorou and Mrinal Kalakrishnan and Peter Pastor and Ludovic Righetti and Stefan Schaal},
  booktitle                = {International Conference on Intelligent Robots and Systems (IROS)},
  year                     = {2011},
  abstract                 = {Applying model-free reinforcement learning to manipulation remains challenging for several reasons. First, manipulation involves physical contact, which causes discontinuous cost functions. Second, in manipulation, the end-point of the movement must be chosen carefully, as it represents a grasp which must be adapted to the pose and shape of the object. Finally, there is uncertainty in the object pose, and even the most carefully planned movement may fail if the object is not at the expected position. To address these challenges we 1)~present a simplified, computationally more efficient version of our model-free reinforcement learning algorithm PI2; 2)~extend PI2 so that it simultaneously learns shape parameters \emph{and} goal parameters of motion primitives; 3)~use shape and goal learning to acquire motion primitives that are robust to object pose uncertainty. We evaluate these contributions on a manipulation platform consisting of a 7-DOF arm with a 4-DOF hand.},
  bib2html_accrate         = {32\%},
  bib2html_pubtype         = {Refereed Conference Paper},
  bib2html_rescat          = {Reinforcement Learning of Robot Skills}
}

This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright.


Generated by bib2html.pl (written by Patrick Riley) on Mon Jul 20, 2015 21:50:11