
2 editions of Exploiting structure to efficiently solve large scale partially observable Markov decision processes found in the catalog.

Exploiting structure to efficiently solve large scale partially observable Markov decision processes.

by Pascal Poupart


Published .
Written in English


About the Edition

Partially observable Markov decision processes (POMDPs) provide a natural and principled framework to model a wide range of sequential decision making problems under uncertainty. To date, the use of POMDPs in real-world problems has been limited by the poor scalability of existing solution algorithms, which can only solve problems with up to ten thousand states. In fact, the complexity of finding an optimal policy for a finite-horizon discrete POMDP is PSPACE-complete. In practice, two important sources of intractability plague most solution algorithms: large policy spaces and large state spaces.

On the other hand, for many real-world POMDPs it is possible to define effective policies with simple rules of thumb. This suggests that we may be able to find small policies that are near optimal. This thesis first presents a Bounded Policy Iteration (BPI) algorithm to robustly find a good policy represented by a small finite state controller. Real-world POMDPs also tend to exhibit structural properties that can be exploited to mitigate the effect of large state spaces. To that effect, a value-directed compression (VDC) technique is also presented to reduce POMDP models to lower dimensional representations.

In practice, it is critical to simultaneously mitigate the impact of complex policy representations and large state spaces. Hence, this thesis describes three approaches that combine techniques capable of dealing with each source of intractability: VDC with BPI, VDC with Perseus (a randomized point-based value iteration algorithm by Spaan and Vlassis [136]), and state abstraction with Perseus. The scalability of those approaches is demonstrated on two problems with more than 33 million states: synthetic network management and a real-world system designed to assist elderly persons with cognitive deficiencies to carry out simple daily tasks such as hand-washing. This represents an important step towards the deployment of POMDP techniques in ever larger, real-world, sequential decision making problems.
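For readers new to the setting, here is a minimal illustrative sketch (not the thesis implementation, and not Spaan and Vlassis's actual Perseus code) of two ideas the abstract relies on: Bayesian belief updating in a POMDP and a simplified Perseus-style randomized point-based value-iteration sweep over a fixed set of belief points. The tiny randomly generated two-state model, the discount factor, and the function names belief_update and point_based_backup are assumptions made purely for illustration.

```python
# Minimal POMDP sketch: belief updating and a simplified Perseus-style
# randomized point-based value-iteration sweep.  The model below is a
# randomly generated toy problem, not one of the thesis benchmarks.
import numpy as np

rng = np.random.default_rng(0)

# Illustrative POMDP: 2 states, 2 actions, 2 observations (all assumed).
n_states, n_actions, n_obs = 2, 2, 2
gamma = 0.95                                                       # discount (assumed)
T = rng.dirichlet(np.ones(n_states), size=(n_actions, n_states))   # T[a, s, s']
Z = rng.dirichlet(np.ones(n_obs), size=(n_actions, n_states))      # Z[a, s', o]
R = rng.normal(size=(n_states, n_actions))                         # R[s, a]


def belief_update(b, a, o):
    """Bayes filter: new belief over states after taking action a and seeing o."""
    b_pred = b @ T[a]             # predict: sum_s b(s) T(s'|s,a)
    b_new = b_pred * Z[a][:, o]   # correct: weight by P(o|s',a)
    return b_new / b_new.sum()    # normalize


def point_based_backup(b, alphas):
    """One point-based backup at belief b given the current alpha-vector set."""
    best_alpha, best_value = None, -np.inf
    for a in range(n_actions):
        alpha_a = R[:, a].copy()
        for o in range(n_obs):
            # Project each future alpha vector back through T and Z, then keep
            # the one that is best at belief b for this action/observation pair.
            g = np.array([gamma * (T[a] * Z[a][:, o][None, :]) @ al
                          for al in alphas])
            alpha_a += g[np.argmax(g @ b)]
        value = alpha_a @ b
        if value > best_value:
            best_alpha, best_value = alpha_a, value
    return best_alpha


# Perseus-style sweep: back up beliefs in random order, but only those whose
# value has not yet improved, so one backup can improve many belief points.
beliefs = [np.array([p, 1.0 - p]) for p in np.linspace(0.0, 1.0, 11)]
alphas = [np.min(R) / (1.0 - gamma) * np.ones(n_states)]   # pessimistic init
for _ in range(50):
    todo = list(beliefs)
    new_alphas = []
    while todo:
        b = todo.pop(rng.integers(len(todo)))
        new_alphas.append(point_based_backup(b, alphas))
        # Keep only the beliefs whose value the new vector set has not improved.
        todo = [bb for bb in todo
                if max(a @ bb for a in new_alphas) < max(a @ bb for a in alphas)]
    alphas = new_alphas

print("value at uniform belief:", max(a @ np.array([0.5, 0.5]) for a in alphas))
```

The point of the randomized sweep is that a single backup tends to raise the value at many belief points at once, so each iteration performs far fewer backups than there are beliefs. The thesis pushes scalability further by combining such point-based methods with value-directed compression and state abstraction, so that beliefs and alpha vectors live in a much lower-dimensional space than the raw state space.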

The Physical Object
Pagination: 144 leaves.
Number of Pages: 144
ID Numbers
Open Library: OL19475491M
ISBN 10: 0494027274


You might also like
The Tempest

Boundary value problems of applied mathematics

Teaching and learning in diverse and inclusive classrooms

Performance related pay.

Assessment of rural nonpoint source pollution

Divorce proceedings

Guidance services for adults.

Hajime

Use of non-human primates in biomedical research

H.R. 4541--the Commodity Futures Modernization Act

Christian councillor

Grotesques and other reflections
