Matthew E. Taylor's Publications


Autonomous Transfer for Reinforcement Learning

Matthew E. Taylor, Gregory Kuhlmann, and Peter Stone. Autonomous Transfer for Reinforcement Learning. In The Seventh International Joint Conference on Autonomous Agents and Multiagent Systems, pp. 283–290, May 2008.
AAMAS-2008

Download

[PDF] 233.7kB  [postscript] 409.6kB

Abstract

Recent work in transfer learning has succeeded in making reinforcement learning algorithms more efficient by incorporating knowledge from previous tasks. However, such methods typically must be provided either a full model of the tasks or an explicit relation mapping one task into the other. An autonomous agent may not have access to such high-level information, but would be able to analyze its experience to find similarities between tasks. In this paper we introduce Modeling Approximate State Transitions by Exploiting Regression (MASTER), a method for automatically learning a mapping from one task to another through an agent's experience. We empirically demonstrate that such learned relationships can significantly improve the speed of a reinforcement learning algorithm in a series of Mountain Car tasks. Additionally, we demonstrate that our method may also assist with the difficult problem of task selection for transfer.
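The abstract's core idea — learn an approximate transition model by regression, then score candidate inter-task mappings by how well mapped experience is predicted by that model — can be sketched roughly as follows. Everything in this sketch (linear toy dynamics, two state variables, a permutation family of candidate mappings) is a synthetic illustration for intuition, not the paper's actual MASTER algorithm or its Mountain Car setup.

```python
# Rough sketch of the regression-based mapping idea: fit a transition
# model to target-task experience, then rank candidate source-to-target
# state mappings by prediction error. Toy linear dynamics, not the paper's.
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Toy "target task" dynamics used only to generate synthetic experience:
# next_state = state @ A (a linear stand-in for real task dynamics).
A = np.array([[0.9, 0.1],
              [0.2, 0.8]])

# Step 1: collect target-task transitions and fit an approximate
# transition model W by least-squares regression.
S_tgt = rng.normal(size=(200, 2))
S_tgt_next = S_tgt @ A
W, *_ = np.linalg.lstsq(S_tgt, S_tgt_next, rcond=None)

# Step 2: collect source-task transitions. Here the "source task" is the
# target task with its two state variables swapped (a self-inverse
# permutation), so the true mapping is known for this illustration.
true_map = (1, 0)
S_src = rng.normal(size=(200, 2))
S_src_next = (S_src[:, true_map] @ A)[:, true_map]

# Step 3: score each candidate mapping by how well the mapped source
# experience is predicted by the learned target model.
def score(mapping):
    mapping = list(mapping)
    pred = S_src[:, mapping] @ W
    return float(np.mean((pred - S_src_next[:, mapping]) ** 2))

best_map = min(itertools.permutations(range(2)), key=score)
print(best_map)  # the candidate mapping with the lowest prediction error
```

With noise-free linear data the correct permutation yields near-zero prediction error, so the mapping is recovered exactly; real tasks would require richer function approximators and noisy experience, which is the harder setting the paper addresses.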

BibTeX Entry

@InProceedings{AAMAS08-taylor,
        author="Matthew E.\ Taylor and Gregory Kuhlmann and Peter Stone",
        title="Autonomous Transfer for Reinforcement Learning",
        booktitle="The Seventh International Joint Conference on
        Autonomous Agents and Multiagent Systems",
        month="May",
        year="2008", 
        pages="283--290",
        abstract={Recent work in transfer learning has succeeded in
                  making reinforcement learning algorithms more
                  efficient by incorporating knowledge from previous
                  tasks. However, such methods typically must be
                  provided either a full model of the tasks or an
                  explicit relation mapping one task into the
                  other. An autonomous agent may not have access to
                  such high-level information, but would be able to
                  analyze its experience to find similarities between
                  tasks. In this paper we introduce Modeling
                  Approximate State Transitions by Exploiting
                  Regression (MASTER), a method for automatically
                  learning a mapping from one task to another through
                  an agent's experience. We empirically demonstrate
                  that such learned relationships can significantly
                  improve the speed of a reinforcement learning
                  algorithm in a series of Mountain Car
                  tasks. Additionally, we demonstrate that our method
                  may also assist with the difficult problem of task
                  selection for transfer.},
        wwwnote={<a href="http://gaips.inesc-id.pt/aamas2008/">AAMAS-2008</a>},
}

Generated by bib2html.pl (written by Patrick Riley) on Mon Apr 19, 2010 14:12:46