We consider infinite-state discrete Markov chains that are eager: the probability of avoiding a defined set of final states for more than n steps decreases exponentially in n. We study the problem of computing the expected reward (or cost) of runs until reaching the final states, where rewards are assigned to individual runs by computable reward functions. We present a path exploration scheme, based on forward reachability analysis, that approximates the expected reward up to an arbitrarily small error, and show that the scheme is guaranteed to terminate for eager Markov chains. We show that eager Markov chains include those induced by Probabilistic Vector Addition Systems with States, Noisy Turing Machines, and Probabilistic Lossy Channel Systems.
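The idea behind the path exploration scheme can be illustrated with a small sketch. The following is only a toy illustration under stated assumptions, not the paper's algorithm: it uses a tiny finite chain (the paper targets infinite-state chains), an assumed reward function (number of steps until reaching a final state), and hypothetical names (`P`, `FINAL`, `approx_expected_reward`). Paths are explored forward up to a bounded length n; the probability mass still avoiding the final states after n steps bounds the approximation error, and for eager chains that mass decays exponentially in n.

```python
# Toy eager Markov chain (illustrative assumption): transition
# probabilities from each state; "final" is absorbing.
P = {
    "init": {"mid": 0.5, "final": 0.5},
    "mid":  {"init": 0.3, "final": 0.7},
    "final": {},
}
FINAL = {"final"}

def reward(path):
    # Assumed reward function: number of steps until the final state.
    return len(path) - 1

def approx_expected_reward(start, n):
    """Explore all paths of length <= n from `start`, accumulating
    prob * reward for paths that reach a final state. The returned
    leftover mass (paths still avoiding FINAL after n steps) bounds
    the truncation error; for eager chains it shrinks exponentially
    in n, so the loop can stop once the desired precision is met."""
    total = 0.0
    frontier = [((start,), 1.0)]  # (path so far, its probability)
    for _ in range(n):
        next_frontier = []
        for path, p in frontier:
            for succ, q in P[path[-1]].items():
                new_path, new_p = path + (succ,), p * q
                if succ in FINAL:
                    total += new_p * reward(new_path)
                else:
                    next_frontier.append((new_path, new_p))
        frontier = next_frontier
    mass_left = sum(p for _, p in frontier)
    return total, mass_left

est, err_mass = approx_expected_reward("init", 20)
```

For this toy chain the exact expected hitting time solves E[init] = 1 + 0.5 E[mid] and E[mid] = 1 + 0.3 E[init], giving E[init] = 1.5/0.85; with n = 20 the leftover mass is already below 1e-8, so the estimate is accurate to many digits.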
Available as compressed PostScript (156 kB, no cover) and PostScript (375 kB, no cover)
Download BibTeX entry.