# Publications Freek Stulp

## Reinforcement Learning with Sequences of Motion Primitives for Robust Manipulation

Freek Stulp, Evangelos Theodorou, and Stefan Schaal. Reinforcement Learning with Sequences of Motion Primitives for Robust Manipulation. *IEEE Transactions on Robotics*, 28(6):1360–1370, 2012.

**King-Sun Fu Best Paper Award of the IEEE Transactions on Robotics for the year 2012**

Download: [PDF] (2.3 MB)

### Abstract

Physical contact events often allow a natural decomposition of manipulation tasks into action phases and subgoals. Within the motion primitive paradigm, each action phase corresponds to a motion primitive, and the subgoals correspond to the goal parameters of these primitives. Current state-of-the-art reinforcement learning algorithms are able to efficiently and robustly optimize the parameters of motion primitives in very high-dimensional problems. These algorithms often consider only shape parameters, which determine the trajectory between the start- and end-point of the movement. In manipulation, however, it is also crucial to optimize the goal parameters, which represent the subgoals between the motion primitives. We therefore extend the policy improvement with path integrals (PI$^2$) algorithm to simultaneously optimize shape and goal parameters. Applying simultaneous shape and goal learning to sequences of motion primitives leads to the novel algorithm PI$^2$-Seq. We use our methods to address a fundamental challenge in manipulation: improving the robustness of everyday pick-and-place tasks.

### BibTeX
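The abstract describes optimizing motion-primitive parameters with PI$^2$, which at its core performs probability-weighted averaging of exploration noise: rollouts with lower cost receive exponentially higher weight. The sketch below illustrates that reward-weighted-averaging idea on a black-box cost function; it is a minimal illustration, not the authors' implementation, and all names (`pi2_style_update`, `cost_fn`, the toy target) are assumptions for this example.

```python
import numpy as np

def pi2_style_update(theta, cost_fn, n_rollouts=10, sigma=0.1, h=10.0, seed=0):
    """One reward-weighted-averaging update over policy parameters theta.

    Perturbs theta with Gaussian exploration noise, evaluates each rollout's
    cost, maps costs to probabilities with a soft-max (low cost -> high
    probability), and takes the probability-weighted average of the
    perturbations -- the core update of PI^2-style black-box improvement.
    """
    rng = np.random.default_rng(seed)
    eps = rng.normal(0.0, sigma, size=(n_rollouts, theta.size))  # exploration noise
    costs = np.array([cost_fn(theta + e) for e in eps])
    c_min, c_max = costs.min(), costs.max()
    if c_max > c_min:
        # Normalize costs to [0, 1] before exponentiating, as is common
        # in PI^2-style implementations, to make h scale-independent.
        probs = np.exp(-h * (costs - c_min) / (c_max - c_min))
    else:
        probs = np.ones(n_rollouts)
    probs /= probs.sum()
    return theta + probs @ eps  # weighted average of the perturbations

# Toy usage: treat a 2-D "goal parameter" as the policy and minimize its
# squared distance to a (hypothetical) target subgoal.
target = np.array([1.0, -0.5])
theta = np.zeros(2)
for i in range(100):
    theta = pi2_style_update(theta, lambda t: float(np.sum((t - target) ** 2)),
                             seed=i)
```

In the paper this weighting is applied per time step to the shape parameters of dynamic movement primitives, and PI$^2$-Seq additionally optimizes the goal parameters linking consecutive primitives; the sketch only shows the episodic weighting step.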
```bibtex
@Article{stulp12reinforcement,
  title    = {Reinforcement Learning with Sequences of Motion Primitives for Robust Manipulation},
  author   = {Freek Stulp and Evangelos Theodorou and Stefan Schaal},
  journal  = {IEEE Transactions on Robotics},
  year     = {2012},
  note     = {{\bf King-Sun Fu Best Paper Award of the IEEE Transactions on Robotics for the year 2012}},
  number   = {6},
  pages    = {1360-1370},
  volume   = {28},
  abstract = {Physical contact events often allow a natural decomposition of manipulation tasks into action phases and subgoals. Within the motion primitive paradigm, each action phase corresponds to a motion primitive, and the subgoals correspond to the goal parameters of these primitives. Current state-of-the-art reinforcement learning algorithms are able to efficiently and robustly optimize the parameters of motion primitives in very high-dimensional problems. These algorithms often consider only shape parameters, which determine the trajectory between the start- and end-point of the movement. In manipulation, however, it is also crucial to optimize the goal parameters, which represent the subgoals between the motion primitives. We therefore extend the policy improvement with path integrals (PI$^2$) algorithm to simultaneously optimize shape and goal parameters. Applying simultaneous shape and goal learning to sequences of motion primitives leads to the novel algorithm PI$^2$-Seq. We use our methods to address a fundamental challenge in manipulation: improving the robustness of everyday pick-and-place tasks.},
  bib2html_pubtype = {Journal,Awards},
  bib2html_rescat  = {Reinforcement Learning of Robot Skills}
}
```

This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by the authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each copyright holder.

Generated by bib2html.pl (written by Patrick Riley) on Mon Jul 20, 2015 21:50:11