Books

@book{krishnamurthy2016partially,
title={Partially Observed Markov Decision Processes},
author={Krishnamurthy, Vikram},
year={2016},
publisher={Cambridge University Press}
}

Covering formulation, algorithms, and structural results, and linking theory to real-world applications in controlled sensing (including social learning, adaptive radars and sequential detection), this book focuses on the conceptual foundations of partially observed Markov decision processes (POMDPs). It emphasizes structural results in stochastic dynamic programming, enabling graduate students and researchers in engineering, operations research, and economics to understand the underlying unifying themes without getting weighed down by mathematical technicalities. Bringing together research from across the literature, the book provides an introduction to nonlinear filtering followed by a systematic development of stochastic dynamic programming, lattice programming and reinforcement learning for POMDPs. Questions addressed in the book include: When does a POMDP have a threshold optimal policy? When are myopic policies optimal? How do local and global decision makers interact in adaptive decision making in multi-agent social learning where there is herding and data incest? And how can sophisticated radars and sensors adapt their sensing in real time?
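
To make the recurring notion of a belief state concrete, here is a minimal Python sketch (not taken from the book; the matrices and numbers are purely illustrative) of the HMM filter update that underlies the nonlinear filtering and dynamic programming developed in the text:

import numpy as np

def belief_update(belief, P, B, y):
    # HMM filter: one-step Bayesian update of the belief over hidden states.
    # belief : (S,) current belief (probability vector over S states)
    # P      : (S, S) transition matrix, P[i, j] = Prob(next state j | state i)
    # B      : (S, Y) observation likelihoods, B[i, y] = Prob(observe y | state i)
    unnormalized = B[:, y] * (P.T @ belief)   # predict with P, correct with the likelihood of y
    return unnormalized / unnormalized.sum()  # normalize to a probability vector

# Illustrative 2-state example with noisy binary observations
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
B = np.array([[0.8, 0.2],
              [0.3, 0.7]])
belief = np.array([0.5, 0.5])
print(belief_update(belief, P, B, y=1))

In a POMDP this belief is the information state: the controller's policy maps the updated belief to an action, and structural results such as threshold optimality are stated in terms of this belief.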

Problem Sets and Internet Supplement (Click here to download)

This internet supplement contains exercises, examples and case studies. They are mainly mini-research-type exercises rather than simplistic drill problems. Some of the exercises extend the material in the book, and the exercises are suitable as term projects for a graduate-level course on POMDPs.

You can also download the internet supplement from the Cambridge University Press website or from arXiv.

