Show simple item record

dc.contributor.author: Ye, Juan Juan. (en_US)
dc.date.accessioned: 2014-10-21T12:34:53Z
dc.date.available: 1990
dc.date.issued: 1990 (en_US)
dc.identifier.other: AAINN64513 (en_US)
dc.identifier.uri: http://hdl.handle.net/10222/55204
dc.description: This thesis describes a complete theory of optimal control of piecewise deterministic Markov processes under weak assumptions. The theory consists of a description of the processes, a nonsmooth stochastic maximum principle as a necessary optimality condition, a generalized Bellman-Hamilton-Jacobi necessary and sufficient optimality condition involving the Clarke generalized gradient, existence results, and regularity properties of the value function. The impulse control problem is transformed into an equivalent optimal dynamic control problem. Cost functions are subject only to growth conditions. (en_US)
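For orientation, the Bellman-Hamilton-Jacobi condition for PDPs in the smooth case, as it appears in the literature on these processes, takes roughly the form

\[
\min_{u \in U}\Big\{ f(x,u)\cdot\nabla V(x) \;+\; \lambda(x,u)\int_{E}\big(V(y)-V(x)\big)\,Q(dy;x,u) \;+\; \ell(x,u) \Big\} \;=\; 0,
\]

where f is the deterministic flow, \lambda the jump rate, Q the post-jump distribution, \ell the running cost, and V the value function on the state space E. This notation is assumed for illustration rather than taken from the thesis; the generalization referred to above replaces the gradient \nabla V(x), which need not exist for a nonsmooth value function, by elements of the Clarke generalized gradient \partial V(x).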
dc.description: Piecewise deterministic Markov processes, termed PDPs for short, are continuous-time homogeneous Markov processes consisting of a mixture of deterministic motion and random jumps. PDPs, with stochastic jump processes and deterministic dynamical systems as special cases, include virtually all of the stochastic models of applied probability except diffusions. Their impulse control extends their applicability to discrete-event problems such as stochastic scheduling. The processes are controlled by an open-loop control depending on the post-jump state and the time elapsed since the last jump in the interior of the state space, a feedback control on the boundary of the state space, and impulse controls on the entire state space. The expected value of a performance functional of integral type, with additional boundary and impulse costs, is to be minimized. (en_US)
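To make the structure of such a process concrete, the following sketch simulates one path of a hypothetical one-dimensional PDP: deterministic drift between jumps, a state-dependent jump rate, a forced jump when the flow reaches the boundary, and an open-loop control that depends on the post-jump state and the time elapsed since the last jump, as described above. All concrete dynamics, rates, and the control law are illustrative assumptions, not taken from the thesis.

```python
import math
import random

# Hypothetical one-dimensional PDP on the state space [0, 1):
#   - deterministic motion dx/dt = f(x, u) between jumps,
#   - spontaneous jumps with state-dependent rate jump_rate(x, u),
#   - a forced jump whenever the flow reaches the boundary x = 1,
#   - post-jump states drawn from a kernel post_jump_state(x).
# Every concrete choice below is illustrative, not from the thesis.

def f(x, u):
    """Controlled drift of the deterministic motion."""
    return 1.0 - u * x

def jump_rate(x, u):
    """State-dependent intensity of the spontaneous jumps."""
    return 0.5 + 2.0 * x

def post_jump_state(x, rng):
    """Jump kernel: restart uniformly below the pre-jump state."""
    return 0.5 * x * rng.random()

def open_loop_control(z, s):
    """Interior control: a function of the post-jump state z and the
    time s elapsed since the last jump, as in the abstract above."""
    return min(1.0, z + 0.1 * s)

def simulate_pdp(x0, horizon, dt=1e-3, seed=0):
    """Euler-discretized simulation of one PDP path; spontaneous jump
    times come from integrating the rate against an Exp(1) threshold
    (inverse-CDF method)."""
    rng = random.Random(seed)
    t, x = 0.0, x0
    z, s = x0, 0.0                            # last post-jump state, time since jump
    threshold = -math.log(1.0 - rng.random()) # Exp(1) level for the next jump
    accumulated = 0.0                         # integral of the jump rate so far
    path = [(t, x)]
    while t < horizon:
        u = open_loop_control(z, s)
        accumulated += jump_rate(x, u) * dt
        if accumulated >= threshold:          # spontaneous jump
            x = post_jump_state(x, rng)
            z, s = x, 0.0
            accumulated, threshold = 0.0, -math.log(1.0 - rng.random())
        else:                                 # deterministic motion
            x += f(x, u) * dt
            s += dt
            if x >= 1.0:                      # boundary reached: forced jump
                x = post_jump_state(1.0, rng)
                z, s = x, 0.0
                accumulated, threshold = 0.0, -math.log(1.0 - rng.random())
        t += dt
        path.append((t, x))
    return path

if __name__ == "__main__":
    for t, x in simulate_pdp(x0=0.2, horizon=2.0)[::500]:
        print(f"t = {t:5.2f}   x = {x:.4f}")
```

The interior control here uses only the pair (z, s), matching the open-loop structure described in the abstract; the feedback control on the boundary is collapsed into the forced jump for brevity.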
dc.description: The PDP optimal control problem is converted to an infinite-horizon, discrete-time stochastic optimal control problem, and it is shown that the optimal strategy for control of a PDP is to choose, after each jump, a control function that is optimal in a corresponding deterministic control problem in which the state of the system is required to stop at the boundary. This deterministic control problem is, however, non-standard in that the terminal time is not fixed but is instead either infinity or the first time the trajectory reaches the boundary of the state space. As preliminary results, we obtain a nonsmooth maximum principle as a necessary optimality condition and a necessary and sufficient optimality condition, in terms of a generalized Bellman-Hamilton-Jacobi equation involving the Clarke generalized gradient, for the deterministic problem. The desired results then follow in a straightforward manner. (en_US)
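Schematically, the conversion rests on a one-step dynamic programming relation of the following kind; the notation is assumed for illustration, and the thesis's exact formulation, which also carries the boundary and impulse costs, is more elaborate:

\[
V(x) \;=\; \inf_{u(\cdot)} \, \mathbb{E}_{x}^{u}\!\left[ \int_{0}^{T_{1}} \ell\big(x_{t},u_{t}\big)\,dt \;+\; V\big(x_{T_{1}}\big) \right],
\qquad
T_{1} \;=\; \sigma_{1} \wedge t^{*}(x,u),
\]

where \sigma_1 is the first spontaneous jump time and t^*(x,u) \in (0,\infty] is the first time the deterministic trajectory reaches the boundary. Since the motion up to T_1 is deterministic, minimizing over u(\cdot) is precisely a deterministic control problem with the non-standard free terminal time t^*(x,u), which is the role of the deterministic subproblem described above.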
dc.description: Thesis (Ph.D.)--Dalhousie University (Canada), 1990. (en_US)
dc.language: eng (en_US)
dc.publisher: Dalhousie University (en_US)
dc.subject: Operations Research. (en_US)
dc.title: Optimal control of piecewise deterministic Markov processes. (en_US)
dc.type: text (en_US)
dc.contributor.degree: Ph.D. (en_US)