Applied Statistics

Research

varrank: a variable selection approach

Gilles Kratzer

A common challenge encountered when working with high-dimensional datasets is variable selection. All relevant confounders must be taken into account to allow unbiased estimation of model parameters, while balancing the need for parsimony and interpretable models. This is known to be one of the most controversial and difficult tasks in epidemiological analysis.

Variable selection approaches can be categorized into three broad classes: filter-based methods, wrapper-based methods, and embedded methods. They differ in how the selection step and the model inference are combined. An appealing filter approach is the minimum redundancy maximum relevance (mRMRe) algorithm. The purpose of this heuristic approach is to select the most relevant variables from a set by penalising each candidate according to the amount of redundancy it shares with previously selected variables. In epidemiology, the most frequently used model-based approaches to variable selection rely on goodness-of-fit metrics. The underlying paradigm is that variables important for modeling are causally connected, and that predictive power is a proxy for causal links. The mRMRe algorithm, on the other hand, measures the importance of variables through relevance penalized by redundancy, which makes it appealing for epidemiological modeling.

varrank provides a flexible implementation of the mRMRe algorithm which performs variable ranking based on mutual information. The package is particularly suitable for exploring multivariate datasets requiring a holistic analysis. The two main problems that can be addressed by this package are the selection of the most representative variables for modeling a collection of variables of interest, i.e., dimension reduction, and variable ranking with respect to a set of variables of interest.
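
As a rough illustration of the underlying heuristic (not the varrank implementation itself, whose estimators and weighting schemes are more general), the following base-R sketch ranks candidate variables by their mutual information with a target, penalized by the mean mutual information shared with already selected variables; all variables are assumed to be discrete or discretized.

  ## Minimal mRMR-style forward ranking for discrete (factor) data.
  ## A toy sketch of the heuristic, not the varrank implementation.
  mi <- function(x, y) {                   # mutual information from a contingency table
    p <- table(x, y) / length(x)
    px <- rowSums(p); py <- colSums(p)
    sum(ifelse(p > 0, p * log(p / outer(px, py)), 0))
  }
  mrmr_rank <- function(data, target) {
    candidates <- setdiff(names(data), target)
    selected <- character(0)
    while (length(candidates) > 0) {
      score <- sapply(candidates, function(v) {
        relevance  <- mi(data[[v]], data[[target]])
        redundancy <- if (length(selected) == 0) 0 else
          mean(sapply(selected, function(s) mi(data[[v]], data[[s]])))
        relevance - redundancy              # relevance penalized by mean redundancy
      })
      best <- names(which.max(score))
      selected   <- c(selected, best)
      candidates <- setdiff(candidates, best)
    }
    selected                                # variables ordered by decreasing importance
  }
  ## Example with a discretized version of the iris data:
  d <- data.frame(lapply(iris[1:4], cut, breaks = 3), Species = iris$Species)
  mrmr_rank(d, target = "Species")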

Publications:
https://arxiv.org/abs/1804.07134
Software:
https://cran.r-project.org/package=varrank

Spatial fusion modeling

Craig Wang, in collaboration with Milo Puhan, Epidemiology, Biostatistics and Prevention Institute, UZH.

The availability of data has increased dramatically in recent years. Multivariate remote sensing data and highly detailed socio-economic data are readily available to address a range of research interests. Moreover, linkage between diverse datasets can be established more easily as database hosts and organizations move towards open data.

We construct spatial fusion models within the Bayesian framework to jointly analyze both individual point data and area data. A single source of data may be incomplete or not suitable for parameter inference in statistical models. The cost of data collection, especially in large population studies, may cause useful variables to be omitted, limiting the scope of research questions. In addition, appropriate statistical models can be complex and hence require a large amount of data to make precise inference on weakly identified parameters. In such situations it therefore becomes crucial to utilize multiple data sources, in order to reduce bias, widen research possibilities, and apply appropriate statistical models.
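
A toy simulation (purely illustrative, not the project's actual model) shows the data structure that fusion models target: a latent Gaussian spatial field observed directly at point locations with noise, and indirectly as averages over areal units.

  ## Toy illustration of the point/area data structure handled by fusion models.
  set.seed(1)
  n_grid <- 30                                    # latent field on a 30 x 30 grid
  grid <- expand.grid(x = seq(0, 1, length.out = n_grid),
                      y = seq(0, 1, length.out = n_grid))
  D <- as.matrix(dist(grid))
  Sigma <- exp(-D / 0.2)                          # exponential covariance, range 0.2
  latent <- as.vector(t(chol(Sigma)) %*% rnorm(nrow(grid)))
  ## Point data: noisy observations of the latent field at 50 random locations.
  idx_pts <- sample(nrow(grid), 50)
  point_data <- data.frame(grid[idx_pts, ],
                           z = latent[idx_pts] + rnorm(50, sd = 0.1))
  ## Area data: averages of the latent field over a coarse 3 x 3 partition.
  block <- interaction(cut(grid$x, 3), cut(grid$y, 3))
  area_data <- data.frame(area = levels(block),
                          z = tapply(latent, block, mean))
  ## A fusion model treats both data sets as observations of the same latent
  ## field: point data at exact locations, area data through block averages.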

Area data (left), point data (middle) and output of fusion model (right) from a simulation.

Time series extension to Additive Bayesian Network

Gilles Kratzer.

In recent years, Additive Bayesian Network (ABN) analysis has been successfully used in many fields, from sociology to veterinary epidemiology. This approach has proven very efficient in embracing the correlated multivariate nature of high-dimensional datasets and in producing data-driven models instead of expert-based models. ABN is a multidimensional regression model analogous to generalised linear modelling but with all variables as potential predictors. The final goal is to construct the Bayesian network that best supports the data. When applying ABN to time series datasets, it is essential to cope with the autocorrelation structure of the variance-covariance matrix, as the structural learning process relies on estimating the relative quality of the model for the given dataset. This is done using common model selection scores such as AIC or BIC. We adapt the ABN framework so that it can handle time series and longitudinal datasets, and thus generalize time series regression for a given set of data. We implement an iterative Cochrane-Orcutt procedure in the fitting algorithm to deal with serially correlated errors, and cope with between- and within-cluster effects by regressing centred responses on centred covariates. tsabn is distributed as an R package.
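
The core of the fitting step can be sketched as an iterative Cochrane-Orcutt update; the following single-response illustration (not the tsabn implementation) estimates the AR(1) coefficient from the residuals of a linear fit and refits on quasi-differenced data until the estimate stabilizes.

  ## Minimal Cochrane-Orcutt iteration for AR(1)-correlated errors
  ## (single-response illustration, not the tsabn implementation).
  cochrane_orcutt <- function(y, X, tol = 1e-6, max_iter = 50) {
    X <- cbind(1, X)                             # add intercept column
    beta <- qr.solve(X, y)                       # initial OLS fit
    rho_old <- 0
    for (i in seq_len(max_iter)) {
      e <- y - X %*% beta
      n <- length(e)
      rho <- sum(e[-1] * e[-n]) / sum(e[-n]^2)   # AR(1) coefficient of the residuals
      y_star <- y[-1] - rho * y[-n]              # quasi-differenced response
      X_star <- X[-1, ] - rho * X[-n, ]          # quasi-differenced covariates
      beta <- qr.solve(X_star, y_star)
      if (abs(rho - rho_old) < tol) break
      rho_old <- rho
    }
    list(beta = beta, rho = rho)
  }
  ## Example with simulated AR(1) errors:
  set.seed(1)
  n <- 200; x <- rnorm(n)
  y <- 1 + 2 * x + as.vector(arima.sim(list(ar = 0.6), n))
  cochrane_orcutt(y, x)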

Software: git.math.uzh.ch/reinhard.furrer/tsabn

64-bit sparse matrices in R

Florian Gerber

Software packages for spatial data often implement a hybrid approach of interpreted and compiled programming languages. The compiled parts are usually written in C, C++, or Fortran, and are efficient in terms of computational speed and memory usage. Conversely, the interpreted part serves as a convenient user interface and calls the compiled code for computationally demanding operations. The price paid for the user friendliness of the interpreted component is, besides performance, the limited access to low-level and optimized code. An example of such a restriction is the 64-bit vector support of the widely used statistical language R. On the R side, users do not need to change existing code and may not even notice the extension. On the other hand, interfacing 64-bit compiled code efficiently is challenging. Since many R packages for spatial data could benefit from 64-bit vectors, we investigated how to extend existing R packages via the foreign function interface so that they seamlessly support 64-bit vectors. This extension is demonstrated with the sparse matrix algebra R package spam. The new capabilities are illustrated with an example of GIMMS NDVI3g data featuring a parametric modeling approach for a non-stationary covariance matrix.
A key part of the 64-bit extension is the R package dotCall64, which provides an enhanced foreign function interface to call compiled code from R. The interface provides functionality to do the required double to 64-bit integer type conversions. In addition, options to control copying of R objects are available.
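
A hedged sketch of the calling pattern follows; the C routine add_one is hypothetical and would first need to be compiled and loaded (e.g. via R CMD SHLIB and dyn.load), and the argument names reflect the dotCall64 documentation as we recall it.

  ## Sketch of the dotCall64 calling pattern. The C routine "add_one" is
  ## hypothetical and must be compiled and loaded beforehand, e.g.:
  ## void add_one(double *x, int *n) { for (int i = 0; i < *n; i++) x[i] += 1.0; }
  library(dotCall64)
  x <- rnorm(10)
  out <- .C64("add_one",
              SIGNATURE = c("double", "integer"),  # types of the compiled arguments
              x = x, n = length(x),
              INTENT = c("rw", "r"))               # only x is read and written back
  out$x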

Developing Bayesian Networks as a tool for Epidemiological Systems Analysis

Gilles Kratzer, in collaboration with the Section of Epidemiology, VetSuisse Faculty, UZH.

The study of the causes and effects of health and disease conditions is a cornerstone of epidemiology. Classical approaches, such as regression techniques, have been successfully used to model the impact of health determinants over the whole population. However, there is growing recognition of biological and behavioural factors, at multiple levels, that can impact health conditions. These epidemiological data are, by nature, highly complex and correlated. The classical regression framework has shown limited ability to embrace the correlated multivariate nature of high-dimensional epidemiological variables. On the other hand, models driven by expert knowledge often fail to efficiently manage the complexity and correlation of epidemiological data. Additive Bayesian Networks (ABN) address these challenges by producing a data-selected set of multivariate models presented as Directed Acyclic Graphs (DAGs). ABN is a machine learning approach to empirically identifying associations in complex and high-dimensional datasets. It is distributed as an R package available on CRAN.
A very natural extension of the abn R package is to implement a frequentist approach using classical GLMs together with classical scores such as AIC or BIC. This extension has many side benefits: different scores can be combined (boosted) to find the best-supported BN, data separation is easier to handle in a GLM setting, multiple levels of clustering can be tackled with mixed models, and highly efficient estimation methods exist for fitting GLMs. More generally, if the main interest lies in the score and not in the shape of the posterior density, a frequentist approach can be a good alternative. Surprisingly, few resources are available to display and analyse epidemiological data in an ABN framework, and a comprehensive approach to displaying abn outputs is needed. Since the ABN framework is aimed at non-statisticians analysing complex data, one major challenge is to provide simple graphical tools for analysing epidemiological data. In addition, there is a lack of guidance on which classes of problems can be tackled with the ABN method, in terms of sample size, number of variables, and expected density of the learned network.
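
The idea of a frequentist node score can be sketched in a few lines (a toy illustration, not the abn implementation): each node is scored by the BIC of a GLM regressing it on a candidate parent set, and the network score is the sum over the nodes.

  ## Toy sketch of frequentist scoring for BN structure learning
  ## (not the abn implementation): BIC of a GLM of each node on its parents.
  node_score <- function(node, parents, data, family = gaussian()) {
    rhs <- if (length(parents) == 0) "1" else parents
    fit <- glm(reformulate(rhs, response = node), family = family, data = data)
    BIC(fit)                                       # lower is better
  }
  ## Network score = sum of node scores for a given DAG (a list of parent sets):
  dag <- list(mpg = c("wt", "hp"), wt = character(0), hp = "wt")
  sum(mapply(node_score, node = names(dag), parents = dag,
             MoreArgs = list(data = mtcars)))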

Software:
https://CRAN.R-project.org/package=abn
http://www.r-bayesian-networks.org


Bayesian hierarchical modeling of anthelmintic resistance

Craig Wang, in collaboration with Paul Torgerson, Section of Epidemiology, VetSuisse Faculty, UZH.

The prevalence of anthelmintic resistance has increased in recent years as a result of the extensive use of anthelmintic drugs to reduce the infection of parasitic worms in livestock. In order to detect resistance, the number of parasite eggs in animal faeces is counted. The widely used faecal egg count reduction test (FECRT) was established in the early 1990s. We develop Bayesian hierarchical models to estimate the reduction in faecal egg counts. Our models have lower estimation bias and provide accurate posterior credible intervals. An R package is available on CRAN, and we have also implemented a user-friendly interface in R Shiny.
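
For illustration, the basic quantity of interest, the percentage reduction in mean egg counts before and after treatment, can be computed as follows; the eggCounts package replaces this naive point estimate with estimates from Bayesian hierarchical models (the fecr_stan() call in the comment reflects the interface as we recall it and should be checked against the package documentation).

  ## Classical FECRT point estimate from pre- and post-treatment egg counts;
  ## eggCounts replaces this with estimates from Bayesian hierarchical models.
  set.seed(1)
  pre  <- rnbinom(20, mu = 500, size = 0.7)    # simulated pre-treatment counts
  post <- rnbinom(20, mu = 100, size = 0.7)    # simulated post-treatment counts
  100 * (1 - mean(post) / mean(pre))           # percentage reduction
  ## Bayesian analogue (function name as recalled from the eggCounts interface):
  ## library(eggCounts); fecr_stan(pre, post, paired = TRUE)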

Publications:
https://doi.org/10.1016/j.vetpar.2016.12.007
Software:
https://CRAN.R-project.org/package=eggCounts
http://shiny.math.uzh.ch/user/furrer/shinyas/shiny-eggCounts/

Predicting missing values in spatio-temporal satellite data

Florian Gerber, in collaboration with Rogier de Jong, Michael E. Schaepman, Gabriela Schaepman-Strub

Remotely sensed data are sparse, which means that data have missing values, for instance due to cloud cover. This is problematic for applications and signal processing algorithms that require complete data sets. To address the sparse data issue, we worked on a new gap-fill algorithm. The proposed method predicts each missing value separately based on data points in a spatio-temporal neighborhood around the missing data point. The computational workload can be distributed among several computers, making the method suitable for large datasets. The prediction of the missing values and the estimation of the corresponding prediction uncertainties are based on sorting procedures and quantile regression. The algorithm was applied to MODIS NDVI data from Alaska and tested with realistic cloud cover scenarios featuring up to 50% missing data. Validation against established software showed that the proposed method has a good performance in terms of the root mean squared prediction error. We demonstrate the software performance with a real data example and show how it can be tailored to specific data. Due to the flexible software design, users can control and redesign major parts of the procedure with little effort. This makes it an interesting tool for gap-filling satellite data and for the future development of gap-fill procedures.
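
Basic usage of the gapfill package looks as follows; the function, data, and element names are quoted from the package documentation as we recall it, so they should be checked against the current version.

  ## Basic usage of the gapfill package (names as recalled from its documentation).
  library(gapfill)
  data(ndvi)            # small example array with two spatial and two temporal dimensions
  out <- Gapfill(ndvi)  # predict all missing values
  str(out$fill)         # completed array with the same dimensions as the input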

Publications:
https://doi.org/10.1109/TGRS.2017.2785240
Software:
https://cran.r-project.org/package=gapfill

Statistical Ice Shelf Modeling

Towards uncertainty quantification of ice shelf simulations as key components of the global cryosphere
David Masson, in collaboration with Nina Kirchner (Stockholm University)

This project aims at building a nucleus for “Statistical Glaciology” by cross-fertilizing the previously virtually unrelated fields of numerical ice modeling and statistics. While the atmospheric and oceanic modeling communities have long integrated statistical tools into the analysis of uncertainties, the ice sheet modeling community still struggles to provide a systematic quantification of the uncertainties of its simulations. Indeed, intricate fluid-dynamics equations often require large amounts of computation, which in turn severely limits the estimation of uncertainty by multiple simulation runs.

More specifically, we focus on ice shelves. These 'floating glaciers' are several hundred meters thick and play a major role in global climate through their interaction with the polar oceans. Here, the modeling of ice shelves is split into two components: first, a deterministic component provides a gross approximation of the ice-shelf dynamics via a simple 2D advection of ice into the ocean. The second component consists of stochastic processes that correct the errors of this approximation. The melting processes beneath the ice shelf, as well as the shelf's disaggregation into smaller icebergs at the calving front, are simulated by statistical models. For instance, ice melting is generated by random Gaussian processes, and ice calving at the shelf's front is modeled by Poisson and extreme value processes. Once this technique can reproduce steady-state ice shelves in Antarctica, we propose to simulate several thousand ice shelves to explore the parameter space. All simulations leading to steady states agreeing with present-day observations contain valuable information about plausible parameter ranges. Hence, this study will enable (i) the quantification of uncertainties in ice shelf simulations and (ii) the identification of their driving factors, and (iii) will inform numerical modelers about crucial, yet typically unobservable, parameters (e.g. the melting rate at the underside of an ice shelf).
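
A toy illustration of the stochastic ingredients (purely illustrative, with arbitrary parameter values, not the project's model): basal melt as Gaussian noise around a mean thinning rate, and calving as a Poisson number of events per year with heavy-tailed event sizes.

  ## Toy illustration of the stochastic components (not the project's model).
  set.seed(1)
  years <- 100
  thickness <- numeric(years + 1)
  thickness[1] <- 400                              # initial ice thickness [m]
  advection <- 2                                   # ice supply by advection [m/yr]
  for (t in seq_len(years)) {
    melt <- rnorm(1, mean = 1.5, sd = 0.5)         # Gaussian basal melt [m/yr]
    n_calv <- rpois(1, lambda = 0.3)               # number of calving events this year
    calved <- if (n_calv > 0) sum(runif(n_calv)^(-1/2)) else 0  # heavy-tailed (Pareto) losses
    thickness[t + 1] <- max(0, thickness[t] + advection - melt - calved)
  }
  plot(0:years, thickness, type = "l", xlab = "year", ylab = "ice thickness [m]")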

Observed ice thickness for the Ross ice shelf, Antarctica. Grey area: grounded ice sheet. Black area: ocean.

Global Change and Biodiversity

Assessing uncertainty in global change–biodiversity research using multiscale Bayesian modelling
Florian Gerber, in collaboration with Gabriela Schaepman-Strub (IEU, UZH)

We aim to improve the understanding of ecosystem-environment interactions by linking local, sparse field data with coarse satellite data and data from Earth System Models (ESMs). Spatio-temporal hierarchical Bayesian models will be used to describe the correlation between biodiversity, biophysical feedbacks, and climate. Due to the large number of data points and model parameters, estimation is challenging and brute-force Markov chain Monte Carlo (MCMC) simulations are likely to fail. This motivates the development of suitable approximation techniques. The statistical framework will allow us to quantify, e.g., biodiversity–albedo relations and predict their future behavior. Uncertainty estimates associated with model parameters and predictions will also be derived.

Computer experiments

Evaluation Strategies for Computer Experiments with Medium-sized Datasets

Clément Chevalier, in collaboration with David Ginsbourger (University of Berne)

The design and analysis of computer experiments is a growing topic with important applications in engineering domains such as nuclear safety, meteorology, finance, telecommunications, oil exploration, and vehicle crash-worthiness. In the literature, Gaussian processes are often used as a surrogate model to construct response surfaces of the output of some complex black-box simulator. These models also have the ability to quantify uncertainties and to guide parsimonious evaluation strategies of the simulator.


Typical applications, however, are limited to a small number n of observations (typically less than 1000) due to the n x n matrix storage and inversion involved in the computations. This research project aims at facilitating the use of computer experiment techniques for datasets of intermediate size (roughly 50,000 to 100,000 observations) through the use of well-chosen covariance functions; see the sketch after the following list. In particular, we would like to deal with two major issues:

  1. (Theoretical) Quantify the discrepancy between our covariance models and other typical models.
  2. (Practical) Adapt some typical sequential evaluation strategies in the computer experiment literature to settings with larger datasets and/or design new strategies.
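
A minimal kriging sketch (a zero-mean Gaussian process prediction with an exponential covariance) makes the n x n bottleneck explicit; compactly supported covariance functions replace the dense matrix below by a sparse one.

  ## Minimal Gaussian process prediction sketch showing the n x n bottleneck:
  ## Sigma is dense and the solve() step costs O(n^3) time and O(n^2) memory.
  set.seed(1)
  n <- 500
  x <- matrix(runif(2 * n), ncol = 2)              # design points in [0, 1]^2
  y <- sin(6 * x[, 1]) * cos(6 * x[, 2]) + rnorm(n, sd = 0.05)
  D <- as.matrix(dist(x))
  Sigma <- exp(-D / 0.3) + diag(0.05^2, n)         # exponential covariance plus nugget
  xnew <- c(0.5, 0.5)
  sigma0 <- exp(-sqrt(colSums((t(x) - xnew)^2)) / 0.3)
  sum(sigma0 * solve(Sigma, y))                    # predicted value at xnew
  ## With a compactly supported covariance, Sigma becomes sparse and the solve
  ## can exploit sparsity (e.g. via the spam package), allowing much larger n.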

Climate data analysis

Multivariate Modeling of Large Non-stationary Spatial and Spatio-temporal Climate Fields

Mattia Molinaro, in collaboration with R. Knutti (ETH)

Climate data analysis represents a great challenge not only from a statistical point of view, but also for its implications for other disciplines, such as economics and public health. A number of AOGCMs (atmosphere-ocean general circulation models) are currently being used by scientists to model and possibly predict the extremely complex interactions governing the global climate; it therefore appears crucial to develop a statistical model able to meaningfully combine the obtained results. From this point of view, the project focuses on two main aspects: quantifying the uncertainty resulting from the considered AOGCMs and attributing it to different factors. Accomplishing this task will rely on advanced and tailored techniques, such as finding a proper parametrization of specific GMRFs (Gaussian Markov random fields) that model the climate fields related to the aforementioned AOGCMs. In order to take advantage of available prior knowledge, we develop a hierarchical Bayesian model whose different levels statistically describe the observations and the prior distributions of the parameters of interest. Via Bayes' theorem, the corresponding posterior distribution can be found. As often happens in Bayesian modeling, a closed-form expression does not exist. As a consequence, particular attention must be paid to the computational aspects of the problem: we expect to use both MCMC techniques and integrated nested Laplace approximations via the package INLA. After developing the framework outlined above, we plan to establish a multivariate ANOVA model for a specific AOGCM. The aim is to take into consideration the uncertainties inherent in future projections and climate change, in order to provide a possible answer to the interdisciplinary questions mentioned above.
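
As a purely illustrative example of the MCMC route (the project's actual models involve spatial GMRFs and INLA), a toy Gibbs sampler for a two-level normal hierarchy combining the output of several models might look as follows.

  ## Toy Gibbs sampler for a two-level hierarchical normal model (illustrative only).
  ## Level 1: ybar_m ~ N(theta_m, s2_m); level 2: theta_m ~ N(mu, tau2); flat prior on mu.
  set.seed(1)
  M <- 8                                           # number of models
  s2 <- rep(0.2^2, M)                              # known observation variances
  tau2 <- 0.5^2                                    # between-model variance, assumed known
  ybar <- rnorm(M, mean = 2, sd = sqrt(tau2 + s2)) # simulated model summaries
  n_iter <- 5000
  mu <- 0
  draws <- matrix(NA, n_iter, M + 1)
  for (i in seq_len(n_iter)) {
    v <- 1 / (1 / s2 + 1 / tau2)                   # full conditional of each theta_m
    theta <- rnorm(M, v * (ybar / s2 + mu / tau2), sqrt(v))
    mu <- rnorm(1, mean(theta), sqrt(tau2 / M))    # full conditional of mu
    draws[i, ] <- c(mu, theta)
  }
  colMeans(draws[-(1:1000), ])                     # posterior means after burn-in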

Bayesian Network

Developing Bayesian Networks as a tool for Epidemiological Systems Analysis

Marta Pittavino, in collaboration with F.I. Lewis and P. Torgerson (VetSuisse).

Bayesian network (BN) analysis is a form of probabilistic modeling which derives empirically from observed data a directed acyclic graph (DAG) describing the dependency structure between random variables.

BNs are increasingly finding applications in areas such as computational and systems biology and, more recently, in epidemiological analysis.

These approaches are relatively new within the discipline of veterinary epidemiology, but they promise a quantum leap in our ability to untangle the complexity of disease occurrence because of their capability to formulate and test multivariate model structures. The multivariate approach considers all dependencies between all variables observed in an epidemiological study, without the need to declare one outcome variable as a function of one or more “predictor” variables (the multivariable approach).

Three key challenges exist in dealing with additive Bayesian networks and are going to be tackled within this research project: one theoretical, one epidemiological, and one computational.

Firstly, to develop likelihood-equivalent parameter priors for non-conjugate BNs. This will produce a score-equivalent metric used to evaluate and discriminate between Bayesian network structures.

Secondly, to demonstrate and promote the use of this methodology in epidemiology, relevant and high-quality exemplar case studies will be developed, showing objectively in which situations BN models can offer the most added value relative to existing standard statistical and epidemiological methods.

Thirdly, no appropriate software exists for fitting the types of BN models necessary for analysing epidemiological data, where complexities such as grouped/overdispersed/correlated observations are ubiquitous. This project will develop easy-to-use software to give epidemiological practitioners ready access to BN modelling, which is essential in order to make the crucial transition from a merely technically attractive methodology to an approach that is actually used in practice.

Publications:
PhD thesis: https://doi.org/10.5167/uzh-126930
https://doi.org/10.1016/j.dib.2017.05.053
https://doi.org/10.1016/j.actatropica.2017.04.034

Statistical evaluation of climate model output

Spatial Hierarchical Bayes Model for AOGCM Climate Projections
Steve Geinitz, in collaboration with R. Knutti, ETHZ, D. Nychka, NCAR, and S. R. Sain, NCAR.

Numerical experiments based on atmosphere-ocean general circulation models (AOGCMs) are one of the primary tools for deriving projections of future climate change. However, each model has its strengths and weaknesses within local and global scales. This motivates synthesizing climate projections from the output of several AOGCMs, combining present-day observations with present-day and future climate projections in a single hierarchical Bayes model, for which the posterior climate change distributions are obtained with computer-intensive MCMC simulations.

We propose to extend the above idea and develop a Bayesian statistical model serving two purposes: quantifying uncertainty and attributing it to different factors. We use spatial statistical models that borrow strength across adjacent spatial regions of the globe in order to provide a statistically accurate assessment of (climate) model bias and inter-model variability. An additional feature of the methodology is the ability to synthesize climate change projections across the different models and then to downscale them to almost arbitrary regions, providing a coherent uncertainty estimate.

Towards intelligent continuous compaction control

Multivariate non-stationary spatial mixed-effects models for roller measurement values
Daniel Heersink, in collaboration with M. Mooney, CSM, R. Anderegg, FHNW.

In this project we attempt to develop complex state-space models for spatial data that are (i) observed indirectly through non-linear measurement operators, (ii) highly non-stationary, (iii) inherently multivariate with dynamic, complex correlation structures, (iv) subject to spatial misalignment and non-linear change-of support characteristics, and are (v) collected in huge quantities.

Such data are, for example, collected by modern earthwork compaction rollers which are equipped with sensors allowing a virtually continuous flow of soil property measurements. The statistical methodology that will be developed in this project can subsequently be used for the design of an intelligent continuous compaction control system.

Statistics for large datasets

Fitting Large-Scale Spatial Models with Applications to Microarray Data Analysis
In collaboration with S. R. Sain, NCAR.

A single microarray includes over 400,000 individual observations, too much data for classical analysis techniques. We apply covariance tapering to a very general type of mixed model that has a random spatial component. Then, taking advantage of the sparse nature of such tapered covariance matrices, backfitting is used to estimate the fixed and random model parameters. Results are demonstrated on an experiment using microarrays to build a profile of differentially expressed genes relating to cerebral vascular malformations, an important cause of hemorrhagic stroke and seizures.
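
The tapering idea can be sketched in a few lines (a toy illustration, not the analysis code): the dense covariance is multiplied elementwise by a compactly supported Wendland-type correlation, producing exact zeros that sparse-matrix routines can exploit.

  ## Toy illustration of covariance tapering: elementwise product of an
  ## exponential covariance with a compactly supported Wendland-type taper.
  set.seed(1)
  n <- 1000
  loc <- matrix(runif(2 * n), ncol = 2)
  D <- as.matrix(dist(loc))
  Sigma <- exp(-D / 0.3)                           # dense exponential covariance
  theta <- 0.1                                     # taper range
  taper <- ifelse(D < theta, (1 - D / theta)^4 * (1 + 4 * D / theta), 0)
  Sigma_tap <- Sigma * taper                       # exact zeros beyond the taper range
  mean(Sigma_tap == 0)                             # proportion of zero entries
  ## Sparse-matrix routines (e.g. the spam package) exploit these zeros, so that
  ## the mixed-model computations scale to hundreds of thousands of observations.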

The taper technique is of a general nature and can be applied to many other problems in the environmental and biological sciences. This requires more flexibility in the tapering technique. A potential approach is to taper the Cholesky factor directly instead of the covariance matrix itself.