

In its wake, “differential privacy” was proposed in 2006 (Dwork, McSherry, et al. 2006), initially as a rigorous privacy (or risk) measure addressing the consequences of the database reconstruction theorem. Differentially private noise mechanisms were then picked up and developed further to test and improve their use for (official) statistics; see, for example, Machanavajjhala et al. (2008), Hardt and Talwar (2010), Ghosh, Roughgarden, and Sundararajan (2012), Dwork and Roth (2014), Dwork and Rothblum (2016), and Rinott, O’Keefe, Shlomo, and Skinner (2018). A first strict line must be drawn between differential privacy as a risk measure and differentially private (noisy) output mechanisms that are engineered to manifestly guarantee a given differential privacy level. However, many other noisy output mechanisms, using bounded or unbounded noise distributions, can be set up to give at least a relaxed differential privacy guarantee as well (Dwork, Kenthapadi, et al.). For instance, the cell key method, originally proposed by Fraser and Wooton (2005), Marley and Leaver (2011), and Thompson, Broadfoot, and Elazar (2013), can be turned into a (relaxed) differentially private mechanism (Bailie and Chien 2019). Strictly differentially private output mechanisms, on the other hand, require unbounded noise distributions with infinite tails, which may have particularly negative effects on utility.

This article aims first to address each of these notions separately and then to present a consolidated discussion from both the risk and the utility perspectives. We further focus on population and census-like statistics, with typical outputs being unweighted person counts, possibly arranged in contingency tables. This serves two distinct motivations: on the one hand, treating only unweighted counts simplifies many technicalities without affecting the key issues of the noise discussion.
On the other hand, global efforts on the 2020/2021 census round are peaking right now, with many important (and urgent) points of contact with this article.
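A minimal sketch may help contrast the two families of mechanisms discussed above: the standard Laplace mechanism for counts, which is strictly ε-differentially private precisely because its noise distribution has unbounded tails, and a deliberately simplified, hypothetical cell-key-style perturbation that draws bounded noise deterministically from a fixed lookup table (the table size, noise values, and key derivation below are illustrative assumptions, not the specifications of Fraser and Wooton or of this article):

```python
import random

def laplace_count(count: int, epsilon: float) -> float:
    """Strictly epsilon-DP noisy release of an unweighted count.

    The sensitivity of a person count is 1 (adding or removing one
    person changes it by at most 1), so the noise scale is 1/epsilon.
    The Laplace distribution has unbounded tails, which is what makes
    the strict guarantee possible, at a potential cost to utility.
    """
    scale = 1.0 / epsilon
    # Laplace(0, scale) as the difference of two exponential draws.
    return count + random.expovariate(1 / scale) - random.expovariate(1 / scale)


# --- Hypothetical, simplified cell-key-style perturbation ---------------
PTABLE_ROWS = 256
NOISE_VALUES = (-2, -1, 0, 1, 2)          # bounded noise: at most +/-2
_table_rng = random.Random(12345)         # the lookup table is fixed once
PTABLE = [_table_rng.choice(NOISE_VALUES) for _ in range(PTABLE_ROWS)]

def cell_key_count(record_keys: list) -> int:
    """Perturb a count with bounded, repeatable table-lookup noise.

    Each record carries a fixed random key; the cell key is derived
    from the keys of the records falling in the cell, so the same cell
    always receives the same noise (consistency across tabulations).
    """
    cell_key = sum(record_keys) % PTABLE_ROWS
    return len(record_keys) + PTABLE[cell_key]
```

Because the table-lookup noise is bounded, a mechanism of this kind cannot satisfy the strict guarantee and must settle for a relaxed one; conversely, the unbounded Laplace tails mean the released value can stray arbitrarily far from the true count, including into negative values.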
