Applied Statistics

Research

The range of research topics of the Applied Statistics group is broad. A few typical projects are listed here. Selected published work is available via ZORA.

For possible and future Master's theses in Mathematics or in Biostatistics, see here. We often have shorter projects as well; see here.

See also our Declaration of Reproducibility Policy.

Novel Machine Learning Algorithms for Large Spatial Data

Tim Gyger, in collaboration with HSLU

The Gaussian process regression model provides a flexible method of interpolation by identifying the underlying spatial structure in the data, commonly referred to as Kriging. A recent proposal advocates incorporating tree-boosting algorithms to model the fixed effects in the regression model. This approach has the advantage that complex functions of the predictor variables can be learned. The model parameters are typically estimated by repeatedly evaluating the negative log marginal likelihood and its derivatives with respect to the hyperparameters. However, this calculation becomes computationally infeasible for large datasets.

To address this issue, we propose a novel approach that scales to larger datasets by combining a full-scale covariance approximation (FSA) with conjugate gradient algorithms for the linear solves and stochastic trace estimation for the log-determinant and its derivatives.

In first simulations, we found that a simple preconditioner derived from the structure of the FSA has a highly positive impact on the convergence of the conjugate gradient method and on the variance reduction in the trace estimations. This leads to improvements in runtime and in the accuracy of the log-likelihood estimates. Moreover, we investigate a novel approach for a low-rank covariance approximation based on variants of the Lanczos algorithm. Furthermore, we will apply our model to a real-world dataset describing specific structural characteristics of the Laegeren mountain, evaluate the predictive accuracy of our approach, and compare it with state-of-the-art methods.
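To illustrate the computational building blocks, the following R sketch combines Hutchinson-type stochastic trace estimation with conjugate gradient solves for a term of the form tr(Sigma^{-1} dSigma/dtheta); the exponential covariance, its range derivative, and all parameter values are illustrative placeholders and not the FSA or tree-boosting implementation of the project.

    # Stochastic (Hutchinson) trace estimation of tr(Sigma^{-1} dSigma) with CG solves.
    # Sigma, dSigma and all parameters are illustrative stand-ins, not the project's FSA.
    set.seed(1)
    n      <- 500
    locs   <- cbind(runif(n), runif(n))
    D      <- as.matrix(dist(locs))
    range  <- 0.2
    Sigma  <- exp(-D / range) + diag(1e-4, n)      # exponential covariance plus small nugget
    dSigma <- (D / range^2) * exp(-D / range)      # derivative w.r.t. the range parameter

    cg_solve <- function(A, b, tol = 1e-8, maxit = 500) {
      # plain conjugate gradient for A x = b (A symmetric positive definite)
      x <- numeric(length(b)); r <- b; p <- r; rs <- sum(r * r)
      for (i in seq_len(maxit)) {
        Ap     <- drop(A %*% p)
        alpha  <- rs / sum(p * Ap)
        x      <- x + alpha * p
        r      <- r - alpha * Ap
        rs_new <- sum(r * r)
        if (sqrt(rs_new) < tol) break
        p  <- r + (rs_new / rs) * p
        rs <- rs_new
      }
      x
    }

    m <- 30                                        # number of Rademacher probe vectors
    trace_est <- mean(replicate(m, {
      z <- sample(c(-1, 1), n, replace = TRUE)
      sum(z * cg_solve(Sigma, drop(dSigma %*% z))) # z' Sigma^{-1} dSigma z
    }))
    trace_est                                      # approximates tr(Sigma^{-1} dSigma)

A preconditioner, as discussed above, would enter by replacing the plain conjugate gradient iteration with its preconditioned variant and by using the same structure to reduce the variance of the probes.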

Interpretable decision support model estimation for intracranial aneurysms

Matteo Delucchi, in collaboration with ZHAW

A better understanding of the interplay of rupture risk factors for intracranial aneurysms (IAs) can be highly beneficial in making clinical decisions about the treatment of unruptured IAs. IAs are bulges in cerebral arteries present in about 3% of the population. While most of these IAs are asymptomatic, sudden rupture can lead to a type of hemorrhagic stroke, which often has poor functional outcomes. Clinicians base their treatment decisions on a combination of expertise, guidelines, and risk prediction scores. However, weighing treatment risks and benefits remains inherently difficult due to unknown disease mechanisms and interdependent rupture risk factors.

This project aims to create a more accurate and practical clinical decision support system by developing a probabilistic graphical model that accounts for the interdependence among rupture risk factors and incorporates clinical expertise. For analyzing such complex, high-dimensional datasets, additive Bayesian network modelling (R package abn) is a powerful machine-learning approach for identifying associations. Moreover, it allows for the flexible modelling of continuous and discrete variables of various distribution types and from multiple data sources.
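As a minimal, hypothetical sketch of this modelling workflow with the abn package (the variable names, distributions, toy data, and parent limit below are invented placeholders, not the actual rupture-risk data):

    library(abn)

    # hypothetical toy data standing in for IA rupture risk factors
    set.seed(1)
    df <- data.frame(
      ruptured      = factor(rbinom(200, 1, 0.3)),
      hypertension  = factor(rbinom(200, 1, 0.4)),
      aneurysm_size = rnorm(200, mean = 7, sd = 2),
      age           = rnorm(200, mean = 55, sd = 10)
    )
    dists <- list(ruptured = "binomial", hypertension = "binomial",
                  aneurysm_size = "gaussian", age = "gaussian")

    # pre-compute scores of all allowed parent sets, search for the best-scoring DAG, fit it
    cache <- buildScoreCache(data.df = df, data.dists = dists, max.parents = 2)
    dag   <- mostProbable(score.cache = cache)
    fit   <- fitAbn(object = dag)
    plot(fit)   # visualize the estimated network structure (requires Rgraphviz)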

On a single-centre dataset, we were able to show that Bayesian networks facilitate knowledge discovery about the IA disease compared to standard statistical analyses; for example, only a few factors are directly associated with IA rupture. To improve generalisability, we are collecting a new, multicentre dataset of IA rupture risk factors and expanding the R package abn to account for more complex data-generating processes.

Pruning Sparse Dataframes of Linguistic Data

Marc Lischka, in collaboration with Anna Graff and other NCCR EvolvingLanguage members

With the increasing availability of linguistic data in large-scale databases, it becomes attractive to merge data from such databases via language identifiers (such as glottocodes) and to perform large-scale analyses on the aggregated datasets. This is especially attractive for analyses that test hypotheses at global scales, for which collecting data on large numbers of variables may not be feasible.
Some of these databases have near-complete variable coding density for all languages present (e.g. PHOIBLE, Grambank), while the variables of others are coded for very different sets of languages (e.g. AUTOTYP, Lexibank, WALS), resulting in sparse variable-language matrices. Combining data from various databases introduces further sparsity. Our initial dataset, including original, modified, and merged variables, has an overall coding density of about 15%. Many languages and variables in such an initial dataset have coding densities so low that including them in the analysis is not sufficiently beneficial, e.g. because of the extra computational time they incur.

We develop an iterative procedure to prune the aggregated matrix, following criteria chosen primarily to optimise the language-variable matrix in terms of coding density and taxonomic diversity. The criteria we employ relate to a) the taxonomic importance of each language given the current language sample (we could call this a "vertical" criterion) and b) the weighted density of data for both languages and variables (a criterion that is both "vertical" and "horizontal"). Criterion a) means that we make the removal of a language representing a family present with only few languages in the current sample more expensive than the removal of a language from a family present in the matrix with many languages.
In addition to the raw coding densities of languages and variables, we base the decision as to which of them to prune in a given step on weighted densities. It makes sense to keep languages that are coded for variables of high density and, vice versa, to keep variables coded for languages of high density. Hence, weighted language densities are computed using the variable densities as weights, and vice versa. This approach can be iterated.
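A schematic R sketch of one such pruning step based on weighted densities; the matrix M is a random stand-in for the aggregated language-variable matrix, and the taxonomic ("vertical") criterion is omitted for brevity:

    # One pruning step based on weighted coding densities; M is a random stand-in for the
    # aggregated language-by-variable matrix (NA = not coded), taxonomic criterion omitted.
    set.seed(1)
    M <- matrix(ifelse(runif(50 * 40) < 0.15, 1, NA), nrow = 50, ncol = 40)
    coded <- !is.na(M)

    var_density  <- colMeans(coded)   # raw density of each variable
    lang_density <- rowMeans(coded)   # raw density of each language

    # weighted densities: a language scores high if it is coded for high-density variables,
    # and a variable scores high if it is coded for high-density languages
    weighted_lang <- drop(coded %*% var_density)     / sum(var_density)
    weighted_var  <- drop(t(coded) %*% lang_density) / sum(lang_density)

    # prune the single worst language or variable, then recompute and iterate
    if (min(weighted_lang) < min(weighted_var)) {
      M <- M[-which.min(weighted_lang), , drop = FALSE]
    } else {
      M <- M[, -which.min(weighted_var), drop = FALSE]
    }

In the full procedure, the taxonomic criterion a) would enter this step as an additional weight, and the step is repeated until the desired trade-off between coding density and taxonomic diversity is reached.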

Identification of Dominant Features in Spatial Data

Roman Flury

Dominant features of spatial data are connected structures or patterns that emerge from location-based variation and manifest at specific scales or resolutions. To identify dominant features, we propose a sequential application of multiresolution decomposition and variogram function estimation. Multiresolution decomposition separates the data into additive components and thereby enables the recognition of their dominant features. The data are separated into their components by smoothing on different scales, such that larger scales have longer spatial correlation ranges. Variogram functions are estimated for each component to determine its effective range, assessing the width-extent of the dominant feature. Finally, Bayesian analysis enables inference on the identified dominant features and a judgement of whether they are credibly different. The efficient implementation of the method relies mainly on a sparse-matrix data structure and algorithms. In disciplines that use spatial data, this method can lead to new insights, as we exemplify by identifying the dominant features in a forest dataset. In that application, the width-extents of the dominant features have an ecological interpretation, namely the species interaction range, and their estimates support the derivation of ecosystem properties such as biodiversity indices.

Publication: https://doi.org/10.1016/j.spasta.2020.100483
Supplementary material: https://git.math.uzh.ch/roflur/spatialfeatureidentification
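For the variogram step of the method, a minimal sketch with the gstat package, fitting an exponential model to one (here simulated, placeholder) component and reading off its effective range:

    library(sp)
    library(gstat)

    # placeholder surface standing in for one component of the multiresolution decomposition
    set.seed(1)
    grid <- expand.grid(x = 1:50, y = 1:50)
    grid$comp <- sin(grid$x / 8) * cos(grid$y / 10) + rnorm(nrow(grid), sd = 0.1)
    coordinates(grid) <- ~ x + y

    # empirical variogram of the component and fitted exponential model
    vg  <- variogram(comp ~ 1, grid)
    fit <- fit.variogram(vg, model = vgm(psill = 1, model = "Exp", range = 10, nugget = 0.01))

    # for an exponential model, the effective range (~95% of the sill) is 3 times the range
    effective_range <- 3 * fit$range[fit$model == "Exp"]
    effective_range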

Spatio-temporally consistent postprocessing of precipitation over complex topography

Stephan Hemri, in collaboration with MeteoSwiss

Over the last decade, statistical postprocessing has become a standard tool to reduce biases and dispersion errors of probabilistic numerical weather prediction (NWP) ensemble forecasts. Most established postprocessing approaches train a regression-type model using raw ensemble statistics as predictors on a typically small set of stations. With high-resolution fields of observed weather data becoming increasingly available, our goal is to assess the potential of spatio-temporally multivariate postprocessing approaches that can incorporate the spatio-temporal information from these fields. While we mostly worked with quantile regression based approaches in the beginning, we have moved towards conditional generative adversarial networks (cGANs) in order to generate postprocessed and yet realistic scenarios of forecast precipitation. The figure below shows example forecast scenarios for daily precipitation with a forecast horizon of up to five days, issued on 1 June 2019 00 UTC. We show the control run of the COSMO-E NWP ensemble along with the corresponding ensemble mean, followed by two forecast scenarios sampled from the cGAN distribution and the pixel-wise mean of 21 cGAN samples. Overall, the two cGAN scenarios and the cGAN mean forecast look quite realistic. However, a closer look at the cGAN forecasts still reveals some visual artifacts as well as deficiencies in the pixel-wise univariate forecast skill. Currently, we are working on further improving these types of cGAN-based postprocessing methods.
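As an aside on how the pixel-wise univariate forecast skill mentioned above can be quantified, a small sketch using the continuous ranked probability score from the scoringRules package; the observations and the 21 scenarios below are random placeholders, not COSMO-E or cGAN output:

    library(scoringRules)

    # placeholder data: 21 postprocessed scenarios for a small grid of pixels
    set.seed(1)
    n_pix  <- 100 * 100
    n_scen <- 21
    obs    <- rgamma(n_pix, shape = 0.8, scale = 3)                   # "observed" precipitation
    scen   <- matrix(rgamma(n_pix * n_scen, shape = 0.8, scale = 3),  # cGAN-like scenarios
                     nrow = n_pix, ncol = n_scen)

    # pixel-wise CRPS of the scenario ensemble, averaged over the domain
    crps_pix <- crps_sample(y = obs, dat = scen)
    mean(crps_pix)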

Gaussian Process-based Spatially Varying Coefficient Models

Jakob Dambon, in collaboration with Fabio Sigrist (HSLU)

Spatial modelling usually assumes the marginal effects of covariates to be constant and only incorporates a spatial structure on the residuals. However, there are cases where this assumption is too strong and we assume non-stationarity of the effects, i.e., the coefficients. Spatially varying coefficient (SVC) models account for non-stationarity of the coefficients, where the effect sizes depend on the observation location. We present a new methodology where the coefficients are defined by Gaussian processes (GPs), so-called GP-based SVC models. These models are highly flexible yet simple to interpret. We use maximum likelihood estimation (MLE) that has been optimized for large datasets where the number of observations exceeds 10^5 and the number of SVCs is moderate. Further, a variable selection methodology for GP-based SVC models is presented. A software implementation of the described methods is provided in the R package varycoef.
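A minimal sketch of fitting such a model with varycoef on simulated toy data; the data, the two spatially varying effects, and the use of SVC_mle()/predict() shown here are illustrative only:

    library(varycoef)

    # simulated toy data: two covariates whose effects vary smoothly over space
    set.seed(1)
    n     <- 500
    locs  <- cbind(runif(n), runif(n))
    X     <- cbind(1, rnorm(n), rnorm(n))            # intercept plus two covariates
    beta1 <- 2 + sin(2 * pi * locs[, 1])             # spatially varying coefficients
    beta2 <- 1 + locs[, 2]
    y     <- X[, 1] * 1 + X[, 2] * beta1 + X[, 3] * beta2 + rnorm(n, sd = 0.2)

    # maximum likelihood estimation of the GP-based SVC model
    fit <- SVC_mle(y = y, X = X, locs = locs)
    summary(fit)

    # predicted coefficient surfaces at the observation locations
    pred <- predict(fit, newlocs = locs)
    head(pred)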

Publications:

Unveiling the tapering approach

Federico Blasi, as well as Roman Flury and Michael Hediger

Over the last twenty years, whopping increases in data sizes for multivariate spatial and spatio-temporal data have been recorded and simulated (e.g. remotely sensed data, bathymetric data, weather and climate simulations, inter alia), introducing an exciting new era of data analysis, yet unveiling computational limitations of classical statistical procedures. As an example, maximum likelihood estimation involves solving linear systems based on the covariance matrix, requiring O(n^3) operations, which can be prohibitive when dealing with very large datasets.

A vast variety of strategies has been introduced to overcome this, either (i) through "simpler" models (e.g., low-rank models, composite likelihood methods, predictive process models) or (ii) through model approximations (e.g., with Gaussian Markov random fields or compactly supported covariance functions). In many cases, however, the literature on practical recommendations is sparse.

In this project, we fill that gap for the tapering approach. Along with a concise review of the subject, we provide an extensive simulation study that introduces and contrasts available implementations of the tapering approach with classical implementations, which, together with good statistical practices, yields a well-covered summary of the approach.
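A brief sketch of the core of a tapering implementation with the spam package; the exponential covariance, the Wendland-1 taper, and all parameter values are illustrative choices, not a recommendation from the study:

    library(spam)

    # illustrative data and parameter values
    set.seed(1)
    n    <- 2000
    locs <- cbind(runif(n), runif(n))
    range <- 0.2; sill <- 1; nugget <- 0.05
    taper_range <- 0.15

    # sparse matrix holding only the pairwise distances below the taper range
    dist_sp <- nearest.dist(locs, delta = taper_range, upper = NULL)

    # tapered covariance: exponential covariance times a Wendland-1 taper, entry-wise,
    # so that the sparsity structure induced by the taper range is preserved
    h <- dist_sp@entries
    Sigma_tap <- dist_sp
    Sigma_tap@entries <- sill * exp(-h / range) *
      (pmax(1 - h / taper_range, 0)^4 * (1 + 4 * h / taper_range))
    diag(Sigma_tap) <- sill + nugget

    # sparse Cholesky factor, the basis of cheaper likelihood evaluations and kriging
    R <- chol(Sigma_tap)

The sparsity structure and the fill-in-reducing ordering of the sparse Cholesky factor can be reused across likelihood evaluations, which is where the computational gains over the dense O(n^3) factorization come from.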

 

Software:

Detection and Space-Time Modeling of Biome Transition Zones

Leila Schuh, in collaboration with Maria Santos and others, URPP Global Change and Biodiversity

Shifting climatic zones result in a spatial reconfiguration of potential biome ranges. Ecotones in particular are expected to respond to novel conditions. While temporal trends of individual pixel values have been studied extensively, we analyze dynamics in the spatial configuration over time. Landscape heterogeneity is an important variable in biodiversity research and can refer to between- and within-habitat characteristics. Tuanmu and Jetz (2015) distinguish between topography-based, land-cover-based, 1st-order, and 2nd-order heterogeneity measures. 1st-order metrics characterize single pixel values, while 2nd-order texture metrics provide information about the spatial relationship between pixels in an area. We utilize and further develop such texture metrics to advance our understanding of landscape heterogeneity as an indicator of large-scale ecosystem transformations.
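As an illustration of such 2nd-order texture metrics, a short sketch using the raster and glcm packages on a simulated placeholder layer (the layer, grey-level count, and window size are not those of the project):

    library(raster)
    library(glcm)

    # placeholder raster standing in for a remotely sensed reflectance or land-cover layer
    set.seed(1)
    r <- raster(nrows = 100, ncols = 100, xmn = 0, xmx = 1, ymn = 0, ymx = 1)
    values(r) <- as.vector(outer(1:100, 1:100, function(i, j) sin(i / 10) + cos(j / 15))) +
      rnorm(ncell(r), sd = 0.2)

    # grey-level co-occurrence (2nd-order) texture metrics in a moving 5x5 window
    tex <- glcm(r, n_grey = 16, window = c(5, 5), shift = c(1, 1),
                statistics = c("homogeneity", "contrast", "entropy"))
    plot(tex)   # one layer per texture metric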