Algorithms for reachability problems on stochastic Markov reward models

Muhammad, Irfan (2021). Algorithms for reachability problems on stochastic Markov reward models. University of Birmingham. Ph.D.



Probabilistic model-checking is a field which seeks to automate the formal analysis of probabilistic models such as Markov chains. In this thesis, we study and develop the stochastic Markov reward model (sMRM), which extends the Markov chain with rewards as random variables. As the model has only recently been introduced, few techniques and algorithms exist for its analysis. The purpose of this study is to derive algorithms that are both scalable and accurate.

Additionally, we derive the necessary theory for probabilistic model-checking of sMRMs against existing temporal logics such as PRCTL. We present the equations for computing first-passage reward densities, expected value problems, and other reachability problems. Our focus, however, is on finding strictly numerical solutions for first-passage reward densities. We solve for these by first adapting known direct linear algebra algorithms such as Gaussian elimination, and iterative methods such as the power method, Jacobi and Gauss-Seidel. We provide solutions both for discrete-reward sMRMs, where all rewards are discrete (lattice) random variables, and for continuous-reward sMRMs, where all rewards are strictly continuous random variables whose probability density functions (pdfs) need not themselves be continuous. Our solutions use the fast Fourier transform (FFT) for faster computation, and we adapt existing quadrature rules for convolution, such as the trapezoid rule, Simpson's rule and Romberg's method, to obtain more accurate solutions.
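To illustrate the FFT-based convolution mentioned above, the following is a minimal sketch (not taken from the thesis) of convolving two discrete reward probability mass functions on a common lattice, as one would when combining per-transition reward distributions; the example pmfs are purely illustrative:

```python
import numpy as np

def fft_convolve(p, q):
    """Convolve two pmfs on a common lattice via the FFT.

    Zero-pads to the next power of two so the circular convolution
    computed by the FFT equals the linear convolution.
    """
    n = len(p) + len(q) - 1
    size = 1 << (n - 1).bit_length()       # next power of two >= n
    P = np.fft.rfft(p, size)
    Q = np.fft.rfft(q, size)
    r = np.fft.irfft(P * Q, size)[:n]
    return np.clip(r, 0.0, None)           # clear tiny negative noise

p = np.array([0.5, 0.5])                   # reward 0 or 1, equally likely
q = np.array([0.25, 0.5, 0.25])            # reward 0, 1, or 2

r = fft_convolve(p, q)                     # pmf of the summed reward
```

The FFT route costs O(n log n) rather than the O(n^2) of direct convolution, which is what makes repeated convolutions over large lattices tractable.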

In the discrete-reward setting, existing solutions are either derived by hand, or obtained through a combination of graph-reduction algorithms and symbolic solving via computer algebra systems. The symbolic approach is not scalable, and we therefore present strictly numerical but relatively more scalable algorithms. We found both classes of method, direct and iterative, capable of solving problems with larger state spaces. The best performer was the power method, owed partially to its simplicity, which made its implementation easier to vectorize. While the Gauss-Seidel method converged in fewer iterations, it was slower due to the cost of deconvolution. The Gaussian elimination algorithm performed poorly relative to these.
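The flavour of the power-method-style iteration can be sketched on a toy chain. The two-state example below (a single state with a 0.5 self-loop and a 0.5 transition to the target, each transition earning reward exactly 1) is an illustrative assumption, not a model from the thesis; the iteration repeatedly convolves the per-transition reward pmf with the current first-passage estimate:

```python
import numpy as np

def conv(a, b, n):
    """Linear convolution truncated to the first n lattice points."""
    return np.convolve(a, b)[:n]

n = 20                                  # truncate the reward lattice
r = np.array([0.0, 1.0])                # per-transition reward pmf: reward = 1
delta0 = np.zeros(n); delta0[0] = 1.0   # at the target, no further reward

# Power-method-style fixed-point iteration for the first-passage
# reward pmf from state 0:
#   f <- 0.5 * (r * delta0) + 0.5 * (r * f),  '*' denoting convolution.
f = np.zeros(n)
for _ in range(100):
    f = 0.5 * conv(r, delta0, n) + 0.5 * conv(r, f, n)

# For this chain the exact answer is geometric: P(reward = k) = 0.5**k, k >= 1.
```

Each sweep is just convolutions and vector sums, which is why the power method vectorizes so readily.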

In the continuous-reward setting, existing solutions can be adapted from the literature on semi-Markov processes. However, further algorithms should still be researched for cases where rewards have discontinuous pdfs. The algorithm we have developed can resolve such cases, although it does not appear as scalable as the solutions for the discrete-reward setting.

Type of Work: Thesis (Doctorates > Ph.D.)
Award Type: Doctorates > Ph.D.
Licence: Creative Commons: Attribution-Share Alike 4.0
College/Faculty: Colleges (2008 onwards) > College of Engineering & Physical Sciences
School or Department: School of Computer Science
Funders: Other
Other Funders: DARPA BRASS programme
Subjects: Q Science > QA Mathematics > QA75 Electronic computers. Computer science



