The value function in ergodic control of diffusion processes with partial observations II
Applicationes Mathematicae 27 (2000), 455-464
DOI: 10.4064/am-27-4-455-464
Abstract
The problem of minimizing the ergodic or time-averaged cost for a controlled diffusion with partial observations can be recast as an equivalent control problem for the associated nonlinear filter. In analogy with the completely observed case, one may seek the value function for this problem as the vanishing discount limit of value functions for the associated discounted cost problems. This passage to the limit is justified here for the scalar case under a stability hypothesis, leading in particular to a "martingale" formulation of the dynamic programming principle.
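For orientation, the following is a schematic of the standard vanishing discount scheme in the filter coordinates; the notation is illustrative and need not match the paper's. Writing $\pi_t$ for the nonlinear filter (the conditional law of the state given past observations), $c$ for the running cost, and $U$ for the admissible controls, the ergodic and discounted criteria are
\[
  J(\pi_0,u) \;=\; \limsup_{T\to\infty}\,\frac{1}{T}\,E\!\left[\int_0^T \langle \pi_t,\, c(\cdot,u_t)\rangle\,dt\right],
  \qquad
  V_\alpha(\pi_0) \;=\; \inf_{u\in U}\, E\!\left[\int_0^\infty e^{-\alpha t}\,\langle \pi_t,\, c(\cdot,u_t)\rangle\,dt\right],
\]
and the vanishing discount limit referred to in the abstract is the passage
\[
  \beta \;=\; \lim_{\alpha\downarrow 0}\,\alpha\,V_\alpha(\pi_0),
  \qquad
  V(\pi_0) \;=\; \lim_{\alpha\downarrow 0}\,\bigl(V_\alpha(\pi_0)-V_\alpha(\hat\pi)\bigr),
\]
with $\hat\pi$ a fixed reference measure, $\beta$ the optimal ergodic cost, and $V$ the relative value function entering the dynamic programming principle.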