On 18 Nov 2020, Prof Mark Cannon from the University of Oxford gave a talk titled “Stochastic Model Predictive Control: Discounting, Output Feedback with Intermittent Observations and Convergence”. The abstract of the talk and a brief introduction of Prof Cannon are given as follows.

Abstract

Control algorithms that regulate the risk of constraint violation at a non-zero but pre-specified probability are important in diverse applications subject to stochastic model uncertainty, because probabilistic (chance) constraints avoid the conservativeness of robust control strategies that assume worst-case disturbance values. In particular, for the practically important case of model uncertainty distributions that are not supported on a finite interval, chance constraints are a necessary replacement for robust constraints. This motivates the development of stochastic Model Predictive Control (MPC). Discounted costs and constraints in optimal control problems allow performance in the near future to be prioritised over long-term behaviour. This shift of emphasis is vital for ensuring recursive feasibility of chance-constrained control problems involving possibly unbounded disturbances. In this talk we consider constrained systems with stochastic additive disturbances and noisy measurements transmitted over lossy communication channels. We discuss MPC strategies that minimise discounted costs subject to discounted expectation constraints. Sensor data is assumed to be lost with known probability, and data losses are accounted for by expressing the predicted control policy as an affine function of future observations, which results in a convex optimal control problem. An online constraint-tightening technique ensures recursive feasibility of the online optimisation problem and satisfaction of the expectation constraint without the need for prior bounds on the distributions of the noise and disturbance inputs.
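To give a concrete (and purely illustrative) picture of what "discounted costs subject to discounted expectation constraints" can look like, a generic formulation along these lines, written in our own notation rather than necessarily the speaker's, is:

```latex
% Illustrative discounted stochastic MPC problem (notation is ours):
% x_k: state, u_k: input, w_k: stochastic additive disturbance,
% gamma in (0,1): discount factor, e: allowed constraint-violation budget
\min_{u_0, u_1, \dots}\;
  \mathbb{E}\!\left[\sum_{k=0}^{\infty} \gamma^{k}\, \ell(x_k, u_k)\right]
\quad \text{subject to} \quad
  \sum_{k=0}^{\infty} \gamma^{k}\, \Pr\!\left[ F x_k + G u_k > \mathbf{1} \right] \le e,
\qquad x_{k+1} = A x_k + B u_k + w_k .
```

The point of the discount factor gamma < 1 is that both sums remain finite even when the disturbance w_k is unbounded, which, as the abstract notes, is what makes recursive feasibility attainable in the chance-constrained setting.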
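The constraint-tightening idea mentioned in the abstract can be sketched in a deliberately simplified one-step setting (Gaussian noise with known variance and a single scalar constraint). This is a textbook illustration of tightening a chance constraint into a deterministic one, not the talk's online technique, which avoids such distributional assumptions:

```python
from statistics import NormalDist

def tightened_bound(b: float, sigma: float, p: float) -> float:
    """Deterministic bound enforcing the chance constraint
    Pr[x + w <= b] >= 1 - p for Gaussian noise w ~ N(0, sigma^2).

    Requiring x <= b - sigma * Phi^{-1}(1 - p) guarantees the
    chance constraint, where Phi^{-1} is the standard normal quantile.
    """
    return b - sigma * NormalDist().inv_cdf(1 - p)

# Example: nominal bound b = 1.0, noise std 0.2, allowed violation prob 5%
bt = tightened_bound(1.0, 0.2, 0.05)
print(round(bt, 3))  # -> 0.671
```

The nominal state must then satisfy the tighter bound, so the remaining margin absorbs the noise with the specified probability; more noise (larger sigma) or a stricter risk level (smaller p) both shrink the feasible set.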
We discuss closed-loop properties of stochastic MPC algorithms with both discounted and non-discounted constraints. Using an input-to-state stability property, we find conditions that imply convergence with probability 1 of the state of a disturbed nonlinear system to a minimal robust positively invariant set. We discuss implications for the convergence of the state and control laws of stochastic MPC formulations, and we demonstrate convergence for several existing stochastic MPC formulations for linear and nonlinear systems.

Bio sketch

Mark Cannon studied engineering as an undergraduate (MEng in Engineering Science) and completed a doctorate (DPhil) at the University of Oxford, graduating in 1993 and 1998 respectively. Between these, he completed a master's degree (SM) at the Massachusetts Institute of Technology, graduating in 1995. He is currently an Associate Professor of engineering science at the University of Oxford and a fellow of St John's College, Oxford. His research interests are in robust and optimal control for constrained and uncertain systems, optimisation for receding horizon control with robust constraints and stochastic uncertainty, and stochastic model predictive control. He is a member of the Oxford Control Group.