Optimal stationary policies in risk-sensitive dynamic programs with finite state space and nonnegative rewards
Volume 27 / 2000
Abstract
This work concerns controlled Markov chains with finite state space and nonnegative rewards; the controller is assumed to have constant risk-sensitivity, and the performance of a control policy is measured by a risk-sensitive expected total-reward criterion. The existence of optimal stationary policies is studied within this context, and the main result establishes the optimality of a stationary policy achieving the supremum in the corresponding optimality equation, whenever the associated Markov chain has a unique positive recurrent class. Two explicit examples show that, if this additional condition fails, an optimal stationary policy cannot be guaranteed in general. The results of this note, which cover both the risk-seeking and the risk-averse cases, answer an extended version of a question recently posed in Puterman (1994).
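For concreteness, the display below sketches the usual form of a constant-risk-sensitivity (exponential utility) total-reward criterion and the associated optimality equation; this formulation is not taken from the abstract itself, and the symbols $\lambda$, $r$, and $p$ are generic placeholders, so the paper's precise definitions may differ.

% Hedged sketch (assumed standard form, not quoted from the paper):
% exponential-utility total-reward criterion for a policy \pi with
% constant risk-sensitivity \lambda \neq 0, reward r \ge 0, and
% transition law p.
\[
  V_\lambda^{\pi}(x)
    \;=\;
  \frac{1}{\lambda}\,
  \log \mathbb{E}^{\pi}_{x}\!\left[
      \exp\!\Bigl(\lambda \sum_{t=0}^{\infty} r(X_t, A_t)\Bigr)
    \right],
  \qquad
  V_\lambda^{*}(x) \;=\; \sup_{\pi} V_\lambda^{\pi}(x).
\]
% A stationary policy attaining the supremum in the corresponding
% optimality equation (written schematically)
\[
  V_\lambda^{*}(x)
    \;=\;
  \sup_{a \in A(x)}
  \frac{1}{\lambda}\,
  \log\!\left[
      e^{\lambda r(x,a)}
      \sum_{y} p(y \mid x, a)\, e^{\lambda V_\lambda^{*}(y)}
    \right]
\]
% is the object whose optimality the main result addresses, under the
% unique-positive-recurrent-class condition stated in the abstract.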