Predicting Chaotic Systems with Sparse Data
In nonlinear and stochastic models, even small uncertainties in the knowledge of the current state can lead to large uncertainties in predictions of future states. As the model evolves, one can hope to reduce this uncertainty by blending the model with observational data, with the intention of 'steering' the evolution towards the data. In the applied sciences, the methodology for achieving this is known as data assimilation. Applications are vast and include rocket guidance, robotics, GPS, controlled chemical reactions and econometric forecasting.

Many data assimilation techniques were developed for numerical weather prediction, where knowledge of the ocean-atmosphere state is highly uncertain at any given time, and the evolutionary model is high dimensional (of order 10^9 variables at present resolutions), turbulent and stochastic. Observational data come from a variety of sources (ground-based weather stations, satellites, GPS-enabled drifting buoys, etc.), but relative to the dimensionality of the model the data are sparse. The central problem of data assimilation can be formulated mathematically in a Bayesian framework, but for high-dimensional nonlinear models the exact Bayesian solution is computationally out of reach. To circumvent this, applied scientists (often at weather agencies) have developed algorithms that approximate some aspect of the Bayesian problem to make it computationally tractable.

In this talk, we will introduce several of these algorithms, commonly applied in engineering and geoscience problems, including the Kalman filter, the ensemble Kalman filter and the particle filter. We will illustrate their use, and explain the advantages and disadvantages of each, by focusing on a family of toy forecast models, starting with simple linear models and building up to toy models for atmospheric dynamics.
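As a taste of the forecast/update cycle described above, here is a minimal sketch of the simplest of these algorithms, the Kalman filter, applied to a scalar linear-Gaussian toy model. The model and all parameter values (`a`, `Q`, `R`, the number of steps) are illustrative assumptions, not taken from the talk; they are chosen only to show how the filter blends a model forecast with noisy observations.

```python
import numpy as np

# Illustrative toy linear-Gaussian model (parameters are assumed, not from the talk):
#   truth:       x_{k+1} = a * x_k + process noise,  noise variance Q
#   observation: y_k     = x_k + observation noise,  noise variance R
rng = np.random.default_rng(0)
a, Q, R = 0.95, 0.1, 1.0
n_steps = 500

# Simulate the "truth" and sparse, noisy observations of it
x = np.zeros(n_steps)
for k in range(1, n_steps):
    x[k] = a * x[k - 1] + rng.normal(scale=np.sqrt(Q))
y = x + rng.normal(scale=np.sqrt(R), size=n_steps)

# Kalman filter: alternate a model forecast with a data update
m, P = 0.0, 1.0  # initial state estimate (mean and variance)
est = np.zeros(n_steps)
for k in range(n_steps):
    # Forecast step: push the mean and variance through the linear model
    m, P = a * m, a**2 * P + Q
    # Update step: the Kalman gain K weights the model against the data
    K = P / (P + R)
    m = m + K * (y[k] - m)
    P = (1 - K) * P
    est[k] = m

rmse_obs = np.sqrt(np.mean((y - x) ** 2))
rmse_filt = np.sqrt(np.mean((est - x) ** 2))
print(f"obs RMSE {rmse_obs:.3f}, filtered RMSE {rmse_filt:.3f}")
```

Because the model here is linear with Gaussian noise, the filtered estimate has lower error than the raw observations; the talk's later algorithms (the ensemble Kalman filter and the particle filter) extend this blending idea to the nonlinear, high-dimensional settings where the exact Bayesian solution is out of reach.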