Abstract
In the present work, we employ backward stochastic differential equations (BSDEs) to study the optimal control problem of semi-Markov processes on a finite horizon, with general state and action spaces. More precisely, we prove that the value function and the optimal control law can be represented by means of the solution of a class of BSDEs driven by a semi-Markov process or, equivalently, by the associated random measure. We also introduce a suitable Hamilton–Jacobi–Bellman (HJB) equation. Compared with the pure jump Markov framework, the HJB equation in the semi-Markov case is characterized by an additional differential term \(\partial_a\). Taking into account the particular structure of semi-Markov processes, we rewrite the HJB equation in a suitable integral form which involves a directional derivative operator \(D\) related to \(\partial_a\). Then, using a formula of Itô type tailor-made for semi-Markov processes and the operator \(D\), we are able to prove that a BSDE of the above-mentioned type provides the unique classical solution to the HJB equation, which identifies the value function of our control problem.