Validity, Reliability, and Significance: Empirical Methods for NLP and Data Science

Abstract

Empirical methods are means of answering methodological questions of the empirical sciences by statistical techniques. The methodological questions addressed in this book include the problems of validity, reliability, and significance. In the case of machine learning, these correspond to the questions of whether a model predicts what it purports to predict, whether a model's performance is consistent across replications, and whether a performance difference between two models is due to chance, respectively. The goal of this book is to answer these questions by concrete statistical tests that can be applied to assess the validity, reliability, and significance of data annotation and machine learning prediction in the fields of NLP and data science.

Our focus is on model-based empirical methods where data annotations and model predictions are treated as training data for interpretable probabilistic models from the well-understood families of generalized additive models (GAMs) and linear mixed effects models (LMEMs). Based on the interpretable parameters of the trained GAMs or LMEMs, the book presents model-based statistical tests such as a validity test that allows detecting circular features that circumvent learning. Furthermore, the book discusses a reliability coefficient based on a variance decomposition of the random-effect parameters of LMEMs. Finally, a significance test based on the likelihood ratios of nested LMEMs trained on the performance scores of two machine learning models is shown to naturally allow the inclusion of variations in meta-parameter settings into hypothesis testing, and to further facilitate a refined system comparison conditional on properties of input data.
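
To make the flavor of these model-based tests concrete, here is a minimal R sketch using the mgcv and lme4 packages. All data frames and column names (annot, label, suspect_feature; scores, score, system, run, sentence) are illustrative assumptions rather than the book's actual data layout; the accompanying code linked below implements the tests in full.

    ## Hypothetical sketch of the three model-based tests (illustrative names).
    library(mgcv)   # generalized additive models (GAMs)
    library(lme4)   # linear mixed effects models (LMEMs)

    ## 1) Validity: a feature that explains the gold label (nearly) perfectly
    ##    on its own is a candidate circular feature. Assumes a 0/1 label.
    gam_fit <- gam(label ~ s(suspect_feature), data = annot, family = binomial)
    summary(gam_fit)  # inspect deviance explained and the smooth term

    ## 2) Reliability: decompose the variance of performance scores (e.g., of
    ##    one system) into random-effect components: runs, test sentences,
    ##    residual noise.
    lmem <- lmer(score ~ 1 + (1 | run) + (1 | sentence), data = scores)
    vc <- as.data.frame(VarCorr(lmem))
    vc$vcov / sum(vc$vcov)  # share of total variance per component

    ## 3) Significance: likelihood ratio test of nested LMEMs; variation in
    ##    meta-parameter settings enters via the random effect for `run`.
    m0 <- lmer(score ~ 1      + (1 | run) + (1 | sentence), data = scores, REML = FALSE)
    m1 <- lmer(score ~ system + (1 | run) + (1 | sentence), data = scores, REML = FALSE)
    anova(m0, m1)  # chi-squared test on the likelihood ratio

Note that fitting with REML = FALSE (maximum likelihood) is what licenses the likelihood ratio comparison of the nested models.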

This book can be used as an introduction to empirical methods for machine learning in general, with a special focus on applications in NLP and data science. The book is self-contained, with an appendix on the mathematical background of GAMs and LMEMs and an accompanying webpage including R code to replicate the experiments presented in the book.

Access

Access the published version (paper or ebook) from SpringerLink.

Download a preprint.

Data & Code

The data and code provided here allow recreating the experiments presented in the book.

Description

  • Information Retrieval Example:
    • We used the test data set published by Kuwa et al. for the cross-lingual information retrieval (CLIR) example. This data set encompasses 2,000 queries and roughly 100,000 search corpus documents, and contains the similarity scores for all query/document pairs used by Kuwa et al.
  • Machine Translation Example:
    • Kreutzer et al. provided three data sets. The first was obtained during a user study on human error markings and corrections. The second contains the test set evaluations of the baseline system and of the final systems obtained after annotation- and marking-based fine-tuning of neural machine translation. The third encompasses the test set evaluations of all models that were visited during hyperparameter optimization.
  • SOFA Examples:
    • We used the data set published by Schamoni et al. for the medical data examples. The data were collected at the 25-bed ICU of the Department of Anesthesiology and Surgical Intensive Care Medicine at University Medical Center Mannheim. All encounters (age ≥18 years) with a complete ICU stay between June 1st, 2016 and July 9th, 2017 were included. In total, physiological time series of 45 measurements from 620 patients were obtained. The Ethics Commission II of the Medical Faculty Mannheim approved the study (2016-800R-MA) and waived the need for informed consent, and the data protection authority of the hospital allowed publication of the data in fully de-identified form.
  • [!!new!!] Large Language Model Example [!!new!!]:
    • This example consists of the BART+R3F fine-tuning algorithm presented by Aghajanyan et al. for the task of text summarization, evaluated on the CNN/DailyMail and RedditTIFU datasets. BART+R3F was listed as SOTA for text summarization on these datasets at the time of analysis. It uses an approximate trust region method that constrains updates to the embeddings and the classifier during fine-tuning in order to mitigate catastrophic forgetting (see the sketch below).
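
For intuition, the noise-based regularization behind R3F can be sketched as the following objective (our paraphrase in our own notation, not a formula taken from Aghajanyan et al. or from the book): the task loss is augmented with a symmetric KL term that keeps the model's predictions stable under small random perturbations z of the input embeddings,

    \mathcal{L}(\theta) = \mathcal{L}_{\mathrm{task}}\big(f_\theta(x),\, y\big)
      + \lambda\, \mathrm{KL}_S\big(f_\theta(x) \,\|\, f_\theta(x + z)\big),
      \qquad z \sim \mathcal{N}(0, \sigma^2 I),

so that fine-tuning stays within an approximate trust region around the pre-trained representations.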

Code

  • Information Retrieval Example
    • Chapter 2: Circularity in Data Annotation Prediction (R)
    • Chapter 2: Circularity in Machine Learning Prediction (R)
  • Machine Translation Example
    • Chapter 3: Reliability of Data Annotation Performance (R)
    • Chapter 3: Reliability of Model Prediction Performance (R)
    • Chapter 4: Model-based Significance Testing (R, Python)
  • Kidney SOFA Example
    • Chapter 2: Circularity in Data Annotation Prediction (R)
    • Chapter 2: Circularity in Machine Learning Prediction (R)
  • Liver SOFA Example
    • Chapter 2: Circularity in Data Annotation Prediction (R)
    • Chapter 2: Circularity in Machine Learning Prediction (R)
    • Chapter 3: Reliability of Model Prediction Performance (R)
  • [!!new!!] Large Language Model Example [!!new!!]
    • Chapter 5: Inferential Reproducibility (Python)

Download

Code & Data

Acknowledgments

This research has been conducted in project SCIDATOS (Scientific Computing for Improved Detection and Therapy of Sepsis), funded by the Klaus Tschira Foundation, Germany (Grant number 00.0277.2015).

Publication

  1. Stefan Riezler and Michael Hagmann
    Validity, Reliability, and Significance: Empirical Methods for NLP and Data Science
    Synthesis Lectures on Human Language Technologies, Springer Cham, 2022
    @book{riezler2022,
      author = {Riezler, Stefan and Hagmann, Michael},
      title = {Validity, Reliability, and Significance: Empirical Methods for NLP and Data Science},
      publisher = {Springer Cham},
      series = {Synthesis Lectures on Human Language Technologies},
      editor = {Hirst, Graeme},
      year = {2022},
      isbn = {9783031010552},
      doi = {10.1007/978-3-031-02183-1},
      url = {https://doi.org/10.1007/978-3-031-02183-1}
    }