"Does k-Anonymous Microaggregation Affect Machine-Learned Macrotrends?", IEEE Access (accepted for publication)

Abstract

In the era of big data, the availability of massive amounts of information makes privacy protection more necessary than ever. Among a variety of anonymization mechanisms, microaggregation is a common approach to satisfying the popular requirement of k-anonymity in statistical databases. In essence, k-anonymous microaggregation aggregates quasi-identifiers to hide the identity of each data subject within a group of k − 1 other subjects. As with any perturbative mechanism, however, anonymization comes at the cost of some information loss that may hinder the intended purpose of the released data, which very often is building machine-learning models for macrotrend analysis. To assess the impact of microaggregation on the utility of the anonymized data, it is necessary to evaluate the resulting accuracy of said models. In this work, we address the problem of measuring the effect of k-anonymous microaggregation on the empirical utility of microdata. We accordingly quantify utility as the accuracy of classification models learned from microaggregated data, evaluated over original test data. Our experiments indicate, with some consistency, that the impact of the de facto microaggregation standard (MDAV) on the performance of machine-learning algorithms is often minor to negligible for a wide range of k, across a variety of classification algorithms and data sets. Furthermore, the experimental evidence suggests that the distortion measure traditionally used in the microdata-anonymization community may be inappropriate for evaluating the utility of microaggregated data.
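The evaluation protocol described in the abstract (training a classifier on microaggregated data and testing it on the original, unperturbed data) can be pictured with the minimal sketch below. This is not the authors' code: the microaggregate function is a hypothetical, naive placeholder rather than an implementation of MDAV, and the data set and classifier are illustrative assumptions chosen only to make the example self-contained and runnable.

```python
# Sketch: measure how training on k-anonymously microaggregated data
# affects test accuracy on the *original* (unperturbed) test set.
# NOTE: `microaggregate` below is a hypothetical placeholder, not MDAV.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split


def microaggregate(X, k):
    """Naive placeholder for k-anonymous microaggregation: sort records
    along their first principal direction, form consecutive groups of k,
    and replace each record with its group centroid. A real study would
    use MDAV or another proper microaggregation algorithm instead."""
    Xc = X - X.mean(axis=0)
    first_direction = np.linalg.svd(Xc, full_matrices=False)[2][0]
    order = np.argsort(Xc @ first_direction)
    X_anon = X.copy()
    for start in range(0, len(X), k):
        idx = order[start:start + k]          # last group may be undersized
        X_anon[idx] = X[idx].mean(axis=0)     # centroid replaces each member
    return X_anon


X, y = load_breast_cancer(return_X_y=True)    # illustrative data set
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for k in (1, 5, 25, 100):                     # k = 1 means no aggregation
    X_tr_anon = X_tr if k == 1 else microaggregate(X_tr, k)
    clf = RandomForestClassifier(random_state=0).fit(X_tr_anon, y_tr)
    acc = accuracy_score(y_te, clf.predict(X_te))  # tested on original data
    print(f"k = {k:3d}  accuracy = {acc:.3f}")
```

The key design point mirrored from the abstract is that only the training quasi-identifiers are perturbed, while accuracy is always measured on untouched test records, so any drop in accuracy can be attributed to the anonymization of the released training data.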

Authors: 
A. Rodríguez-Hoyos
J. Estrada-Jiménez
D. Rebollo-Monedero
J. Parra-Arnau
J. Forné