Presentations

2023

  • Analysis of fairness metrics for anonymization in Electronic Health Records

    Authors: Mariela Rajngewerc, Laura Acion and Laura Alonso Alemany

    Presented at KHIPU 2023 - Montevideo, Uruguay. March 2023.

    Abstract: Classical metrics to evaluate machine learning models are usually aggregates that provide no insight into the differential behavior of the model with respect to certain subgroups, which is usually known as bias. When working with models that will affect human beings, the impact of bias must be assessed to detect, mitigate, or even prevent possible harm. Several fairness metrics have been defined in the literature. In some cases, if a metric adequately represents a relevant aspect of the behavior of the model, some other metrics may be irrelevant. Different problems may require different perspectives and different bias definitions. In this work, we show the strengths and limitations of different metrics, illustrating them as applied to the bias analysis of anonymization algorithms for Electronic Health Records (EHR). These algorithms take a set of sentences and eliminate any sensitive data they may contain (names, surnames, identification numbers, etc.). If these algorithms make systematic errors for a specific group of society, that group may be exposed and their privacy violated. We show how different fairness metrics highlight certain aspects of the behavior of these algorithms while obscuring others.
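
    The contrast between aggregate and per-group evaluation described in the abstract can be sketched numerically. The following is a minimal illustration with invented counts (the group names and numbers are hypothetical, not from the presentation): an anonymizer's overall recall over sensitive tokens looks acceptable, while a per-group recall gap, in the spirit of equal-opportunity-style fairness metrics, reveals that missed tokens, and hence privacy risk, concentrate in one subgroup.

    ```python
    # Hypothetical counts for a toy anonymizer: (correctly removed, missed)
    # sensitive tokens per subgroup. Numbers are invented for illustration.
    groups = {
        "group_a": (950, 50),   # recall 0.95
        "group_b": (80, 20),    # recall 0.80 -> privacy risk concentrates here
    }

    def recall(tp, fn):
        """Fraction of sensitive tokens correctly removed."""
        return tp / (tp + fn)

    # Aggregate recall pools all tokens and masks the subgroup gap.
    tp_total = sum(tp for tp, _ in groups.values())
    fn_total = sum(fn for _, fn in groups.values())
    overall = recall(tp_total, fn_total)

    # Equal-opportunity-style view: compare recall across subgroups.
    per_group = {g: recall(tp, fn) for g, (tp, fn) in groups.items()}
    gap = max(per_group.values()) - min(per_group.values())

    print(f"overall recall:   {overall:.3f}")   # looks acceptable in aggregate
    print(f"per-group recall: {per_group}")
    print(f"recall gap:       {gap:.3f}")       # exposes the disparity
    ```

    The aggregate figure is dominated by the larger subgroup, which is exactly why a single pooled metric can obscure systematic errors against a smaller group.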