Kneebone, M. L. (2010)
Ph.D. thesis, University of Birmingham.
Planning under uncertainty is a common requirement of robot navigation. Probabilistic roadmaps are an efficient method for generating motion graphs through the robot's configuration space, but they do not inherently represent any uncertainty in the environment. In this thesis, the physical domain is abstracted into a graph search problem in which the states of some edges are unknown. This is modelled as a decision-theoretic planning problem described by a partially observable Markov decision process (POMDP). It is shown that the optimal policy can depend on accounting for the value of information gained from observations. The scalability of the model, and hence the graph size that can be handled, is then extended by converting the problem to a belief-state Markov decision process (MDP). Approximations to both the model and the planning algorithm are demonstrated that further extend the scalability of the techniques for static graphs. Experiments verify the viability of these approximations, producing near-optimal plans in greatly reduced time compared with recent POMDP solvers. Belief-state approximation in the planner reduces planning time significantly while producing plans of equal quality to those produced without it. This approach is shown to be superior to other techniques, such as heuristic weighting, which is not found to give any significant benefit to the planner.
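The idea of graph search with unknown edge states can be illustrated with a toy sketch. The following is a hypothetical model in the spirit of the abstract, not the thesis's actual formulation: edge costs are known, but some edges' traversability is only revealed when the robot stands at an adjacent node. The belief state is the pair (current node, edge statuses revealed so far), and the expected-cost recursion below is a minimal belief-state MDP solved by expectation over observations and minimisation over travel decisions. The graph, priors, and all names are invented for illustration.

```python
import heapq
from functools import lru_cache

# Undirected edges with known costs; some edges' status is uncertain.
COSTS = {("A", "B"): 1.0, ("B", "G"): 1.0, ("A", "G"): 10.0}
UNCERTAIN = {("B", "G"): 0.5}  # prior probability the edge is open
GOAL = "G"

def open_edges(known):
    """Edges currently usable: certain edges plus uncertain ones revealed open."""
    resolved = dict(known)
    return {e: c for e, c in COSTS.items()
            if e not in UNCERTAIN or resolved.get(e) is True}

def dijkstra(src, known):
    """Shortest distance from src to every node over edges known to be open."""
    adj = {}
    for (u, v), c in open_edges(known).items():
        adj.setdefault(u, []).append((v, c))
        adj.setdefault(v, []).append((u, c))
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, c in adj.get(u, []):
            if d + c < dist.get(v, float("inf")):
                dist[v] = d + c
                heapq.heappush(pq, (d + c, v))
    return dist

@lru_cache(maxsize=None)
def expected_cost(node, known):
    """Minimum expected travel cost from node to GOAL under belief `known`."""
    if node == GOAL:
        return 0.0
    resolved = dict(known)
    # Observation step: standing at a node reveals each incident uncertain edge,
    # so the value is the expectation over the two possible revelations.
    for e, p in UNCERTAIN.items():
        if node in e and e not in resolved:
            return (p * expected_cost(node, known | {(e, True)})
                    + (1 - p) * expected_cost(node, known | {(e, False)}))
    # Decision step: travel on edges known open, either straight to the goal
    # or to a node where an unresolved edge can be observed. Knowledge only
    # grows, so the recursion terminates.
    dist = dijkstra(node, known)
    best = dist.get(GOAL, float("inf"))
    for e in UNCERTAIN:
        if e in resolved:
            continue
        for u in e:
            if u != node and u in dist:
                best = min(best, dist[u] + expected_cost(u, known))
    return best
```

For this graph, `expected_cost("A", frozenset())` is 7.0: detouring through B to observe the uncertain edge (expected cost 1 + 0.5·1 + 0.5·11 = 7) beats the safe direct route of cost 10, which is exactly the value-of-information effect the abstract describes.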
This unpublished thesis/dissertation is copyright of the author and/or third parties. The intellectual property rights of the author or third parties in respect of this work are as defined by the Copyright, Designs and Patents Act 1988 or as modified by any successor legislation. Any use made of information contained in this thesis/dissertation must be in accordance with that legislation and must be properly acknowledged. Further distribution or reproduction in any format is prohibited without the permission of the copyright holder.