Applied Statistics

Research

The range of research topics of the Applied Statistics group is broad. A few typical projects are listed here. Selected published work is available via ZORA.

For possible future Master's theses in Mathematics or in Biostatistics, see here. We often have shorter projects as well; see here.

See also our Declaration of Reproducibility Policy.

Identification of Dominant Features in Spatial Data

Roman Flury

Dominant features of spatial data are connected structures or patterns that emerge from location-based variation and manifest at specific scales or resolutions. To identify dominant features, we propose a sequential application of multiresolution decomposition and variogram function estimation. Multiresolution decomposition separates data into additive components and in this way enables the recognition of their dominant features. The data are separated into their components by smoothing on different scales, such that larger scales have longer spatial correlation ranges. Variogram functions are estimated for each component to determine its effective range, assessing the width-extent of the dominant feature. Finally, Bayesian analysis enables inference on the identified dominant features and a judgment of whether they are credibly different. The efficient implementation of the method relies mainly on a sparse-matrix data structure and algorithms. In disciplines that use spatial data, this method can lead to new insights, as we exemplify by identifying the dominant features in a forest dataset. In that application, the width-extents of the dominant features have an ecological interpretation, namely the species interaction range, and their estimates support the derivation of ecosystem properties such as biodiversity indices.
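The two-step idea (smooth at successive scales to obtain additive components, then estimate a variogram and effective range per component) can be sketched in a few lines. This is a minimal 1-D Python illustration using moving-average smoothing and the classical Matheron variogram estimator, not the group's sparse-matrix implementation; all function names are hypothetical.

```python
import numpy as np

def multiresolution_components(z, scales):
    """Split a 1-D signal into additive components by successive
    moving-average smoothing; wider windows give coarser scales."""
    components = []
    residual = z.astype(float)
    for w in scales:
        kernel = np.ones(w) / w
        smooth = np.convolve(residual, kernel, mode="same")
        components.append(residual - smooth)   # detail at this scale
        residual = smooth
    components.append(residual)                # coarsest remaining trend
    return components

def empirical_variogram(z, max_lag):
    """Classical (Matheron) variogram estimator on a regular 1-D grid."""
    gamma = np.empty(max_lag)
    for h in range(1, max_lag + 1):
        diffs = z[h:] - z[:-h]
        gamma[h - 1] = 0.5 * np.mean(diffs ** 2)
    return gamma

def effective_range(gamma, tol=0.95):
    """Smallest lag at which the empirical variogram reaches
    tol times its sill (approximated here by its maximum)."""
    sill = gamma.max()
    return int(np.argmax(gamma >= tol * sill) + 1)
```

Because each step subtracts a smoothed field and passes the smooth on, the components telescope and sum back exactly to the original data, mirroring the additive decomposition described above.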

Publication:  https://doi.org/10.1016/j.spasta.2020.100483
Supplementary material https://git.math.uzh.ch/roflur/spatialfeatureidentification.

Spatio-temporally consistent postprocessing of precipitation over complex topography

Stephan Hemri in collaboration with MeteoSwiss

Over the last decade, statistical postprocessing has become a standard tool to reduce biases and dispersion errors of probabilistic numerical weather prediction (NWP) ensemble forecasts. Most established postprocessing approaches train a regression-type model using raw ensemble statistics as predictors on a typically small set of stations. With high-resolution fields of observed weather data becoming increasingly available, our goal is to assess the potential of spatio-temporally multivariate postprocessing approaches that can incorporate the spatio-temporal information from these fields. While we mostly worked with quantile-regression-based approaches in the beginning, we have moved towards conditional generative adversarial networks (cGAN) in order to generate postprocessed and yet realistic scenarios of forecast precipitation. The figure below shows example forecast scenarios for daily precipitation with a forecast horizon of up to five days, issued on 1 June 2019 00 UTC. We show the control run of the COSMO-E NWP ensemble along with the corresponding ensemble mean, followed by two forecast scenarios sampled from the cGAN distribution and the pixel-wise mean of 21 cGAN samples. Overall, the two cGAN scenarios and the cGAN mean forecast look quite realistic. However, a closer look at the cGAN forecasts still reveals some visual artifacts as well as deficiencies in the pixel-wise univariate forecast skill. Currently, we are working on further improving these types of cGAN-based postprocessing methods.

Gaussian Process-based Spatially Varying Coefficient Models

Jakob Dambon, in collaboration with Fabio Sigrist (HSLU)

Spatial modelling usually assumes the marginal effects of covariates to be constant and only incorporates a spatial structure on the residuals. However, there exist cases where such an assumption is too strong and we assume non-stationarity for the effects, i.e., the coefficients. Spatially varying coefficient (SVC) models account for non-stationarity of the coefficients, whose effect sizes depend on the observation location. We present a new methodology where the coefficients are defined by Gaussian processes (GPs), so-called GP-based SVC models. These models are highly flexible yet simple to interpret. We use maximum likelihood estimation (MLE), optimized for large data sets where the number of observations exceeds 10^5 and the number of SVCs is moderate. Further, a variable selection methodology for GP-based SVC models is presented. A software implementation of the described methods is given in the R package varycoef.
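To make the data-generating model concrete, the following Python sketch simulates from a GP-based SVC model with an intercept and one covariate, each coefficient drawn from a Gaussian process over location. The actual implementation is the R package varycoef; the function names and parameter values here are hypothetical.

```python
import numpy as np

def exp_cov(locs, range_, var):
    """Exponential covariance matrix for 1-D locations."""
    d = np.abs(locs[:, None] - locs[None, :])
    return var * np.exp(-d / range_)

def simulate_svc(n=200, seed=1):
    """Simulate y_i = beta0(s_i) + beta1(s_i) * x_i + eps_i,
    where each coefficient is a Gaussian process over location s."""
    rng = np.random.default_rng(seed)
    s = np.sort(rng.uniform(0, 10, n))         # observation locations
    x = rng.standard_normal(n)                 # covariate
    jitter = 1e-8 * np.eye(n)                  # numerical stabilisation
    beta0 = rng.multivariate_normal(np.full(n, 2.0),
                                    exp_cov(s, 3.0, 0.5) + jitter)
    beta1 = rng.multivariate_normal(np.full(n, -1.0),
                                    exp_cov(s, 1.0, 0.25) + jitter)
    y = beta0 + beta1 * x + 0.1 * rng.standard_normal(n)
    return s, x, y, beta0, beta1
```

Each coefficient fluctuates smoothly around its mean effect, which is exactly the non-stationarity of effect sizes that SVC models are designed to capture.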

Publications:

Unveiling the tapering approach

Federico Blasi as well as Roman Flury and Michael Hediger

Over the last twenty years, dramatic increases in data sizes for multivariate spatial and spatio-temporal data have been recorded and simulated (e.g., remotely sensed data, bathymetric data, weather and climate simulations, inter alia), introducing an exciting new era of data analysis, yet unveiling computational limitations of classical statistical procedures. As an example, maximum likelihood estimation involves solving linear systems based on the covariance matrix, requiring O(n^3) operations, which can be prohibitive when dealing with very large datasets.

A vast variety of strategies has been introduced to overcome this, either through (i) "simpler" models (e.g., low-rank models, composite likelihood methods, predictive process models) or (ii) model approximations (e.g., Gaussian Markov random fields, compactly supported covariance functions). In many cases, however, the literature on practical recommendations is sparse.

In this project, we fill that gap for the tapering approach. Along with a concise review of the subject, we provide an extensive simulation study that introduces and contrasts available implementations of the tapering approach with classical implementations and, together with good statistical practice, yields a well-covered summary of the approach.
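The core of the tapering approach is an element-wise (Schur) product of the covariance matrix with a compactly supported correlation function: entries beyond the taper range become exactly zero while positive definiteness is preserved. The Python sketch below illustrates this with a Wendland-1 taper on a 1-D grid; the actual study works with R implementations such as the spam package, and all names here are illustrative.

```python
import numpy as np

def exponential_cov(d, range_=1.0, sill=1.0):
    """Dense exponential covariance from a distance matrix."""
    return sill * np.exp(-d / range_)

def wendland1_taper(d, theta):
    """Wendland-1 correlation: compact support, exactly zero for d >= theta."""
    r = np.clip(d / theta, 0.0, 1.0)
    return (1.0 - r) ** 4 * (4.0 * r + 1.0)

n = 500
locs = np.linspace(0.0, 50.0, n)
d = np.abs(locs[:, None] - locs[None, :])

sigma = exponential_cov(d, range_=5.0)
# Schur product with the taper: long-range entries become exactly zero,
# so a sparse storage scheme keeps only the nonzeros and sparse Cholesky
# factorizations replace the O(n^3) dense solve.
sigma_tap = sigma * wendland1_taper(d, theta=3.0)
zero_fraction = np.mean(sigma_tap == 0.0)
```

With these settings the tapered matrix is overwhelmingly sparse, which is precisely where the computational savings of the approach come from.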


Software:

Detection and Space-Time Modeling of Biome Transition Zones

Leila Schuh in collaboration with Maria Santos and  others, URPP Global Change and Biodiversity

Shifting climatic zones result in a spatial reconfiguration of potential biome ranges. Ecotones in particular are expected to respond to novel conditions. While temporal trends of individual pixel values have been studied extensively, we analyze dynamics in spatial configuration over time. Landscape heterogeneity is an important variable in biodiversity research and can refer to between- and within-habitat characteristics. Tuanmu and Jetz (2015) distinguish between topography-based, land-cover-based, and 1st- and 2nd-order heterogeneity measures. 1st-order metrics characterize single pixel values, while 2nd-order texture metrics provide information about the spatial relationship between pixels in an area. We utilize and further develop such texture metrics to advance our understanding of landscape heterogeneity as an indicator of large-scale ecosystem transformations.
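A minimal example of a 2nd-order texture metric of the kind referred to above is the contrast of a grey-level co-occurrence matrix (GLCM). The Python sketch below computes it for horizontal pixel neighbours; it is a simplified, hypothetical illustration of the metric class, not the metrics developed in the project.

```python
import numpy as np

def glcm_contrast(img, levels=8):
    """2nd-order texture: contrast of the horizontal grey-level
    co-occurrence matrix (GLCM) of a 2-D image."""
    # Quantize the image into `levels` grey levels (0 .. levels-1).
    edges = np.linspace(img.min(), img.max(), levels + 1)[1:-1]
    q = np.digitize(img, edges)
    # Count co-occurrences of horizontally adjacent grey levels.
    glcm = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[a, b] += 1
    glcm /= glcm.sum()
    # Contrast weights co-occurrence probabilities by squared level distance.
    i, j = np.indices(glcm.shape)
    return float(np.sum(glcm * (i - j) ** 2))
```

Unlike a 1st-order summary (e.g., the pixel-value variance), this metric distinguishes a smooth gradient from spatially uncorrelated noise even when both have similar marginal distributions.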

Spatial fusion modeling

Craig Wang, in collaboration with Milo Puhan, Epidemiology, Biostatistics and Prevention Institute, UZH.

The availability of data has increased dramatically in recent years. Multivariate remote sensing data and highly detailed socio-economic data are readily available to address different research questions. Moreover, linkage between diverse datasets can be established more easily thanks to the trend towards openness among database hosts and organizations.

We construct spatial fusion models within the Bayesian framework to jointly analyze individual point data and areal data. A single source of data may be incomplete or unsuitable for parameter inference in statistical models. The cost of data collection, especially in large population studies, may lead to useful variables being omitted, hence limiting the scope of research questions. In addition, appropriate statistical models can be complex and hence require large amounts of data to make precise inference on weakly identified parameters. It therefore becomes crucial in those situations to utilize multiple data sources in order to reduce bias, widen research possibilities, and apply appropriate statistical models.
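To make the two data types concrete, the Python sketch below simulates a latent Gaussian process observed both at points (with measurement noise) and as noisy block averages over areas; this is exactly the structure a spatial fusion model links in a joint likelihood. It is a hypothetical 1-D illustration, not the project's Bayesian implementation.

```python
import numpy as np

def simulate_fusion(n_grid=100, block=10, seed=3):
    """Simulate the two data types a spatial fusion model combines:
    point observations of a latent Gaussian process, and areal
    observations formed by block-averaging the same process."""
    rng = np.random.default_rng(seed)
    s = np.linspace(0.0, 10.0, n_grid)
    d = np.abs(s[:, None] - s[None, :])
    cov = np.exp(-d / 2.0) + 1e-8 * np.eye(n_grid)   # exponential GP covariance
    latent = rng.multivariate_normal(np.zeros(n_grid), cov)
    # Point data: latent process plus measurement noise.
    points = latent + 0.2 * rng.standard_normal(n_grid)
    # Areal data: block averages of the latent process plus noise.
    areas = (latent.reshape(-1, block).mean(axis=1)
             + 0.1 * rng.standard_normal(n_grid // block))
    return latent, points, areas
```

Because both data types are noisy views of the same latent field, modelling them jointly shares information across sources, which is the motivation for fusion given above.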

64-bit sparse matrices in R

Florian Gerber

Software packages for spatial data often implement a hybrid approach of interpreted and compiled programming languages. The compiled parts are usually written in C, C++, or Fortran, and are efficient in terms of computational speed and memory usage. Conversely, the interpreted part serves as a convenient user interface and calls the compiled code for computationally demanding operations. The price paid for the user friendliness of the interpreted component is—besides performance—the limited access to low-level and optimized code. An example of such a restriction is the 64-bit vector support of the widely used statistical language R. Since many R packages for spatial data could benefit from 64-bit vectors, we investigated how to extend existing R packages via the foreign function interface to seamlessly support them. On the R side, users do not need to change existing code and may not even notice the extension; interfacing 64-bit compiled code efficiently, on the other hand, is challenging. The extension is demonstrated with the sparse matrix algebra R package spam, and the new capabilities are illustrated with an example of GIMMS NDVI3g data featuring a parametric modeling approach for a non-stationary covariance matrix.
A key part of the 64-bit extension is the R package dotCall64, which provides an enhanced foreign function interface to call compiled code from R. The interface performs the required double-to-64-bit-integer type conversions, and options to control the copying of R objects are also available.

Developing Bayesian Networks as a tool for Epidemiological Systems Analysis

Gilles Kratzer, in collaboration with the Section of Epidemiology, VetSuisse Faculty, UZH.

The study of the causes and effects of health and disease conditions is a cornerstone of epidemiology. Classical approaches, such as regression techniques, have been successfully used to model the impact of health determinants over a whole population. Recently, however, there has been growing recognition of biological and behavioural factors at multiple levels that can impact health conditions. These epidemiological data are, by nature, highly complex and correlated. Classical regression frameworks have shown limited ability to embrace the correlated, multivariate nature of high-dimensional epidemiological variables. On the other hand, models driven by expert knowledge often fail to efficiently manage the complexity and correlation of epidemiological data. Additive Bayesian Networks (ABN) address these challenges by producing a data-driven set of multivariate models presented as Directed Acyclic Graphs (DAGs). ABN is a machine learning approach to empirically identifying associations in complex and high-dimensional datasets. It is distributed as an R package available on CRAN.
A natural extension of the abn R package is to implement a frequentist approach using classical GLMs, together with classical scores such as AIC and BIC. This extension could have many side benefits: one can imagine boosting different scores to find the best-supported BN; data separation is easier to deal with in a GLM setting; multiple levels of clustering can be tackled within a mixed-model setting; and highly efficient estimation methods exist for fitting GLMs. More generally, if the main interest lies in the score rather than in the shape of the posterior density, a frequentist approach can be a good alternative. Surprisingly, few resources are available for displaying and analysing epidemiological data in an ABN framework, and there is a need for a comprehensive approach to displaying abn outputs. Indeed, as the ABN framework is aimed at non-statisticians analysing complex data, one major challenge is to provide simple graphical tools for analysing epidemiological data. Besides that, there is a lack of resources addressing which classes of problems can be tackled using the ABN method, in terms of sample size, number of variables, and expected density of the learned network.
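The node-wise scoring idea behind this frequentist extension can be sketched in a few lines: each node is fitted by a GLM on its parent set (here a Gaussian linear model for simplicity), and the network score is the sum of the per-node BICs. This is a hypothetical Python illustration of the principle, not the abn implementation.

```python
import numpy as np

def node_bic(y, X):
    """BIC of a Gaussian linear model of y on the columns of X
    (an intercept is always included)."""
    n = len(y)
    Xd = np.column_stack([np.ones(n), X]) if X.shape[1] else np.ones((n, 1))
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    rss = np.sum((y - Xd @ beta) ** 2)
    k = Xd.shape[1] + 1                     # coefficients + error variance
    return n * np.log(rss / n) + k * np.log(n)

def dag_bic(data, parents):
    """Network score: sum of node-wise BICs, one GLM per node given
    its parent set. `parents` maps column index -> tuple of parents."""
    return sum(node_bic(data[:, child], data[:, list(pa)])
               for child, pa in parents.items())
```

Structure learning then amounts to searching over parent sets for the DAG with the lowest total score; because the score decomposes over nodes, candidate networks can be compared cheaply.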

Software:
https://CRAN.R-project.org/package=abn
http://www.r-bayesian-networks.org

Publications:
See also the publications of this project.