Lead CI: Ian Turner
The aim of the project is to develop a computationally efficient modelling approach for simulating multiphase transport in heterogeneous porous media, using homogenisation methods together with large-scale Bayesian computations to estimate key model parameters from observed data. The model will be applied to predicting the gas composition in a coal seam gas field, simulating groundwater flows in coastal aquifer systems, drying fibrous media, and investigating anomalous diffusion in biological tissue.
Our modelling framework offers insights beyond experimental measurements alone because, once calibrated, cost-effective numerical simulations can be performed to investigate the evolution of different transport phenomena under a wide range of external conditions. This insight will prove invaluable for designing new industrial technologies and optimising current operations.
The macroscopic approach for modelling heat and mass transport in porous media is now a well-developed science, with the method of volume averaging (Whitaker, 1998) typically used to overcome the difficulties associated with the complex geometry of the underlying pore structure. Highly oscillatory physical quantities, such as the phase densities, are smoothed via an averaging volume containing on the order of hundreds or thousands of pores, and this smoothing is used to derive conservation laws that resemble those of a continuum, except that volume-averaged quantities and effective parameters appear in their definitions (Whitaker, 1998). A drawback of this approach is the requirement to specify the nonlinear effective coefficients for use in the model. Homogenisation theory is often used to predict these effective coefficients from the microstructure (Hornung, 1997) by assuming that the underlying pore structure consists of a periodic arrangement of cells. Steady-state, uncoupled problems are then solved on the unit cell prior to the simulation to determine the effective parameters. However, these parameters also need to be calibrated using macro-scale observed data.
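To illustrate the cell-problem calculation in the simplest possible setting, the sketch below (our own illustration, not code from the cited works) evaluates the one-dimensional periodic cell problem for a two-phase laminate, where the homogenised diffusivity reduces to the harmonic mean of the phase values:

```python
import numpy as np

def effective_diffusivity_1d(D):
    """Homogenised diffusivity for a 1-D periodic microstructure.

    The cell problem d/dy [ D(y) (1 + dw/dy) ] = 0 with a periodic
    corrector w implies a constant flux q = D(y)(1 + dw/dy), and
    periodicity of w forces q to equal the harmonic mean of D(y),
    which is the effective coefficient used in the macroscopic model.
    """
    D = np.asarray(D, dtype=float)
    return 1.0 / np.mean(1.0 / D)

# Two-phase laminate with equal volume fractions of D = 1.0 and D = 0.01.
D_cell = np.where(np.arange(1000) < 500, 1.0, 0.01)
D_eff = effective_diffusivity_1d(D_cell)
```

The harmonic mean here is roughly 0.02, far below the arithmetic mean of about 0.5, which illustrates why the microstructure, rather than a naive average of the phase properties, determines the effective transport coefficient.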
Commonly, data assimilation is treated as an application of Bayesian sequential learning, or recursive estimation, based on a state-space formulation; see, e.g., Wikle and Berliner (2007). The focus is to make accurate predictions of the state variable or of future observations. In data assimilation the state-space evolution generally follows the laws of physics determined by the context of the application. The standard Bayesian approach has model parameters that evolve according to a Markov state equation, while an observation equation depends upon the current value of the state. In this context Bayesian parameter estimation and prediction have been studied extensively. The state model parameters are assigned an appropriate prior probability distribution, which is updated with the current observations using Bayesian updating to give the current posterior distribution for the state model parameters and predictions of future observations. If the state-space evolution equations are linear, the probability distributions for the state evolution and the observation process are Gaussian, and conjugate priors are used for the parameters, then the updating process can be represented by a Kalman filter. For the types of applications planned here the challenge comes from the large number of state parameters, the complex non-linear state system evolution representing the underlying physics of the various applications, and the non-standard observational processes. Current developments involve Monte Carlo algorithms for quite general non-parametric state-space equations (Frigola et al., 2014), and extensive use of particle filters or sequential Monte Carlo. For these Monte Carlo implementations, the number of state parameters that can be handled tends to be small, and the computational time is large even for a small number of observations.
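In the linear-Gaussian case the recursion just described can be written in a few lines. The following is a minimal sketch of one predict/update cycle (a standard textbook construction with illustrative variable names of our own, not code from the cited works):

```python
import numpy as np

def kalman_update(m, P, y, F, Q, H, R):
    """One predict/update cycle of the Kalman filter for the
    linear-Gaussian state-space model
        x_t = F x_{t-1} + w_t,  w_t ~ N(0, Q)   (state evolution)
        y_t = H x_t + v_t,      v_t ~ N(0, R)   (observation)
    where m, P are the posterior mean/covariance from the previous step."""
    # Predict: push the previous posterior through the state equation.
    m_pred = F @ m
    P_pred = F @ P @ F.T + Q
    # Update: condition on the new observation y.
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    m_new = m_pred + K @ (y - H @ m_pred)
    P_new = (np.eye(len(m)) - K @ H) @ P_pred
    return m_new, P_new

# Scalar random walk observed with noise: one update contracts uncertainty.
m, P = np.zeros(1), np.eye(1)
F, H = np.eye(1), np.eye(1)
Q, R = 0.01 * np.eye(1), 0.5 * np.eye(1)
m, P = kalman_update(m, P, np.array([1.2]), F, Q, H, R)
```

The point of the sketch is the cost structure: each step requires only matrix products and one solve of the innovation system, which is exactly what breaks down when the state dimension becomes very large or the evolution becomes non-linear.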
Thus, the challenge is to find computational approaches that allow the models developed in data assimilation and state-space modelling to scale to the complexity and non-linearity of the state-space evolution determined by the physics of the various planned applications.
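For the non-linear, non-Gaussian settings described above, a bootstrap particle filter replaces the Kalman recursion. The sketch below (a generic textbook construction, with a toy non-linear model of our own choosing) shows one propagate-weight-resample step and makes the scaling issue concrete: every observation requires propagating and reweighting all N particles.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_filter_step(particles, y, propagate, loglik):
    """One step of a bootstrap particle filter: propagate each particle
    through the (possibly non-linear) state equation, weight by the
    observation likelihood, and resample.  The O(N) cost per observation,
    with N growing rapidly in the state dimension, is what limits these
    methods for large state spaces."""
    particles = propagate(particles)
    logw = loglik(y, particles)
    w = np.exp(logw - logw.max())   # subtract max for numerical stability
    w /= w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx]

# Toy non-linear model: x_t = sin(x_{t-1}) + N(0, 0.1), y_t = x_t + N(0, 0.2).
N = 500
particles = rng.normal(0.0, 1.0, N)
propagate = lambda x: np.sin(x) + rng.normal(0.0, np.sqrt(0.1), x.shape)
loglik = lambda y, x: -0.5 * (y - x) ** 2 / 0.2
particles = bootstrap_filter_step(particles, 0.8, propagate, loglik)
```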
In order to calibrate and validate our models, we will seek permission from our industrial collaborators to use their measured ‘large and noisy’ experimental datasets for the chosen application areas.
Owing to the complexity of the underlying computational algorithms, the use of advanced computing infrastructure will be mandatory. We plan to use a combination of parallel processing and GPU technology to implement our algorithms on the HPC system at QUT, with a particular focus on Krylov subspace methods (Strickland et al., 2011). The complexity of these algorithms necessitates careful design and planning to ensure an efficient overall solution. We will also develop specialised visualisation software for the large, complex datasets generated from the chosen industrial applications. An important contribution will be the generation of a suite of visualisation procedures that highlight the spatial and temporal trends of the different flow fields, and reveal structure in data with multiple inputs from different sources.
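As an indication of why Krylov subspace methods suit this setting, the sketch below (a generic matrix-free conjugate gradient, our own illustration rather than code from Strickland et al.) solves a discretised one-dimensional diffusion operator using only matrix-vector products, the operation that parallelises well on HPC and GPU hardware:

```python
import numpy as np

def conjugate_gradient(A_mul, b, tol=1e-8, maxiter=500):
    """Matrix-free conjugate gradient for SPD systems: the operator A
    enters only through the product A_mul(v), so no matrix is stored or
    factorised -- the key property exploited by Krylov subspace methods
    on parallel architectures."""
    x = np.zeros_like(b)
    r = b.copy()            # residual b - A x for x = 0
    p = r.copy()            # initial search direction
    rs = r @ r
    for _ in range(maxiter):
        Ap = A_mul(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# 1-D Laplacian (tridiagonal, SPD) as a stand-in for a discretised
# transport operator, applied matrix-free.
n = 100
def lap_mul(v):
    out = 2.0 * v
    out[:-1] -= v[1:]
    out[1:] -= v[:-1]
    return out

b = np.ones(n)
x = conjugate_gradient(lap_mul, b)
```

In the planned applications the matrix-vector product would be replaced by the action of the discretised multiphase transport operator, which is where the parallel and GPU implementation effort is concentrated.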
Frigola, R., Chen, Y. and Rasmussen, C.E. (2014). Variational Gaussian Process State-Space Models. arXiv:1406.4905.
Hornung, U. (Editor): Homogenization and Porous Media, Springer-Verlag, New York (1997).
Strickland C.M., Simpson D.P., Turner I.W., Denham R., Mengersen K.L. (2011) Fast Bayesian Analysis of Spatial Dynamic Factor Models for Multi-temporal Remotely Sensed Imagery, Journal of the Royal Statistical Society Series C, Volume 60, No. 1, pp. 109-124.
Whitaker, S. (1998). Coupled transport in multiphase systems: a theory of drying. In: Advances in Heat Transfer (J.P. Hartnett, T.F. Irvine, Y.I. Cho and G.A. Greene, eds.), vol. 31, Elsevier, 1-104.
Wikle, C.K. and Berliner, L.M. (2007). A Bayesian tutorial for data assimilation. Physica D: Nonlinear Phenomena, 230, 1-16.