ID: 461771.0, MPI für biologische Kybernetik / Biologische Kybernetik
An Expectation Maximization Algorithm for Continuous Markov Decision Processes with Arbitrary Reward
Authors: Hoffman, M.; de Freitas, N.; Doucet, A.; Peters, J.
Editors: van Dyk, D.; Welling, M.
Date of Publication (YYYY-MM-DD): 2009-04
Title of Proceedings: Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics (AIStats 2009)
Start Page: 232
End Page: 239
Physical Description: 8
Audience: Not Specified
Intended Educational Use: No
Abstract / Description: We derive a new expectation maximization algorithm for policy optimization in linear Gaussian Markov decision processes, where the reward function is parameterised as a flexible mixture of Gaussians. This approach exploits both analytical tractability and numerical optimization. Consequently, on the one hand, it is more flexible and general than closed-form solutions, such as the widely used linear quadratic Gaussian (LQG) controllers. On the other hand, it is more accurate and faster than optimization methods that rely on approximation and simulation. Partial analytical solutions (though costly) eliminate the need for simulation and, hence, avoid approximation error. The experiments show that, for the same computational cost, policy optimization methods that exploit analytical tractability achieve higher value than those that rely on simulation.
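The analytical tractability the abstract refers to rests on a standard Gaussian identity: when the state is Gaussian distributed and the reward is a mixture of Gaussian bumps, the expected reward has a closed form, so no Monte Carlo simulation is needed. A minimal one-dimensional sketch (illustrative only; the function names and the Monte Carlo comparison are not from the paper):

```python
import numpy as np

def gaussian_pdf(x, mean, var):
    # Scalar Gaussian density N(x; mean, var); works elementwise on arrays.
    return np.exp(-0.5 * (x - mean) ** 2 / var) / np.sqrt(2.0 * np.pi * var)

def expected_mixture_reward(m, s, weights, means, variances):
    """Closed-form E[r(x)] for x ~ N(m, s) and
    r(x) = sum_i w_i N(x; mu_i, sigma_i^2).

    The product of two Gaussians integrates to another Gaussian, so
    E[r(x)] = sum_i w_i N(m; mu_i, sigma_i^2 + s)  -- no sampling required.
    """
    return sum(w * gaussian_pdf(m, mu, v + s)
               for w, mu, v in zip(weights, means, variances))

# Illustrative two-component mixture-of-Gaussians reward.
weights = [0.7, 0.3]
means = [0.0, 2.0]
variances = [1.0, 0.5]

# Analytical expectation under a Gaussian state distribution N(0.5, 0.8).
exact = expected_mixture_reward(0.5, 0.8, weights, means, variances)

# Monte Carlo estimate of the same expectation (the "simulation" route
# the abstract contrasts against): sample states, average the reward.
rng = np.random.default_rng(0)
xs = rng.normal(0.5, np.sqrt(0.8), size=200_000)
r = sum(w * gaussian_pdf(xs, mu, v)
        for w, mu, v in zip(weights, means, variances))
mc = r.mean()
```

With enough samples `mc` converges to `exact`, but the closed form is both cheaper and free of sampling error, which is the trade-off the experiments quantify.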
External Publication Status: published
Document Type: Conference-Paper
Communicated by: Holger Fischer
Affiliations: MPI für biologische Kybernetik / Empirical Inference (Dept. Schölkopf)
Identifiers: LOCALID:5658
URL: http://www.ics.uci.edu/~aistats/index.html