Stochastic MPC with Dynamic Feedback Gain Selection and Discounted Probabilistic Constraints

Shuhao Yan, P. J. Goulart and Mark Cannon

IEEE Transactions on Automatic Control, vol. 67, no. 11, pp. 5885-5899, November 2022.

@article{yan2022stochastic,
  author  = {Shuhao Yan and P. J. Goulart and Mark Cannon},
  title   = {Stochastic MPC with Dynamic Feedback Gain Selection and Discounted Probabilistic Constraints},
  journal = {IEEE Transactions on Automatic Control},
  year    = {2022},
  volume  = {67},
  number  = {11},
  pages   = {5885-5899},
  doi     = {10.1109/TAC.2021.3128466}
}

This paper considers linear discrete-time systems with additive disturbances and designs a Model Predictive Control (MPC) law incorporating a dynamic feedback gain to minimise a quadratic cost function subject to a single chance constraint. The feedback gain is selected from a set of candidates generated by solving multiobjective optimisation problems via Dynamic Programming (DP), and we provide two selection methods based on minimising upper bounds on predicted costs. The chance constraint is defined as a discounted sum of violation probabilities over an infinite horizon. By penalising violation probabilities close to the initial time and ignoring those in the far future, this form of constraint allows for an MPC law with guarantees of recursive feasibility without assuming bounded disturbances. A computationally convenient MPC optimisation problem is formulated using Chebyshev's inequality, and an online constraint-tightening technique ensures recursive feasibility. The closed-loop system is guaranteed to satisfy the chance constraint and a quadratic stability condition. With dynamic feedback gain selection, the conservativeness of Chebyshev's inequality is mitigated, the closed-loop cost is reduced, and the set of feasible initial conditions is enlarged. A numerical example illustrates these properties.
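To illustrate the flavour of the discounted probabilistic constraint, the sketch below bounds a discounted sum of violation probabilities using the one-sided Chebyshev (Cantelli) inequality for a scalar system. This is a simplified illustration only, not the paper's formulation: the system, horizon truncation, parameter names, and the absence of feedback-gain selection are all assumptions made for this example.

```python
# Sketch (hypothetical scalar example, not the paper's exact formulation):
# for x_{k+1} = a*x_k + w_k with zero-mean noise of variance sw2, bound
#   sum_{k=1}^{N} gamma^k * Pr(x_k > h)
# using the one-sided Chebyshev (Cantelli) inequality:
#   Pr(X - mu >= t) <= s2 / (s2 + t^2)   for t > 0.

def discounted_violation_bound(x0, a, sw2, h, gamma, N):
    """Upper-bound the discounted sum of violation probabilities.

    Mean and variance propagate as mu_{k+1} = a*mu_k and
    s2_{k+1} = a^2 * s2_k + sw2. The paper works with an infinite horizon
    and feedback policies; here the sum is truncated to N steps.
    """
    mu, s2 = x0, 0.0
    total = 0.0
    for k in range(1, N + 1):
        mu = a * mu
        s2 = a * a * s2 + sw2
        t = h - mu
        # If the predicted mean already exceeds h, fall back to the
        # trivial bound Pr <= 1; otherwise apply Cantelli's inequality.
        p_bound = 1.0 if t <= 0 else s2 / (s2 + t * t)
        total += gamma**k * p_bound
    return total
```

The discount factor `gamma` caps the total budget at `gamma / (1 - gamma)`, so violation probabilities near the initial time dominate the constraint while those in the far future contribute negligibly, which is what makes recursive feasibility attainable without bounded disturbances.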