Reconstruction attack


A reconstruction attack is any method for partially reconstructing a private dataset from public aggregate information. Typically, the dataset contains sensitive information about individuals, whose privacy needs to be protected. The attacker has no access, or only partial access, to the dataset, but has access to public aggregate statistics about the dataset, which could be exact or distorted, for example by adding noise. If the public statistics are not sufficiently distorted, the attacker is able to accurately reconstruct a large portion of the original private data. Reconstruction attacks are relevant to the analysis of private data, as they show that, in order to preserve even a very weak notion of individual privacy, any published statistics need to be sufficiently distorted. This phenomenon was called the Fundamental Law of Information Recovery by Dwork and Roth, and formulated as "overly accurate answers to too many questions will destroy privacy in a spectacular way."[1]

The Dinur-Nissim Attack

In 2003, Irit Dinur and Kobbi Nissim proposed a reconstruction attack based on noisy answers to multiple statistical queries.[2] Their work was recognized by the 2013 ACM PODS Alberto O. Mendelzon Test-of-Time Award in part for being the seed for the development of differential privacy.[3]

Dinur and Nissim model a private database as a sequence of bits [math]\displaystyle{ D = (d_1, \ldots, d_n) }[/math], where each bit is the private information of a single individual. A database query is specified by a subset [math]\displaystyle{ S\subseteq \{1, \ldots, n\} }[/math], and is defined to equal [math]\displaystyle{ q_S(D) = \sum_{i \in S}{d_i} }[/math]. They show that, given approximate answers [math]\displaystyle{ a_1, \ldots, a_m }[/math] to queries specified by sets [math]\displaystyle{ S_1, \ldots, S_m }[/math], such that [math]\displaystyle{ |a_i - q_{S_i}(D)| \le \mathcal{E} }[/math] for all [math]\displaystyle{ i \in \{1, \ldots, m\} }[/math], if [math]\displaystyle{ \mathcal{E} }[/math] is sufficiently small and [math]\displaystyle{ m }[/math] is sufficiently large, then an attacker can reconstruct most of the private bits in [math]\displaystyle{ D }[/math]. Here the error bound [math]\displaystyle{ \mathcal{E} }[/math] can be a function of [math]\displaystyle{ m }[/math] and [math]\displaystyle{ n }[/math]. Dinur and Nissim's attack works in two regimes: in one regime, [math]\displaystyle{ m }[/math] is exponential in [math]\displaystyle{ n }[/math], and the error [math]\displaystyle{ \mathcal{E} }[/math] can be linear in [math]\displaystyle{ n }[/math]; in the other regime, [math]\displaystyle{ m }[/math] is polynomial in [math]\displaystyle{ n }[/math], and the error [math]\displaystyle{ \mathcal{E} }[/math] is on the order of [math]\displaystyle{ \sqrt{n} }[/math].
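The polynomial-query regime can be illustrated with a simple linear-programming decoder: the attacker searches for any fractional database whose answers lie within the error bound of all published noisy answers, and then rounds it to bits. The following sketch is an illustrative Python example using NumPy and SciPy, not the authors' code; the parameter choices (n, m, E) and the use of random query sets are assumptions made for the demonstration.

    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(0)

    n = 64    # number of individuals / private bits
    m = 800   # number of noisy subset-sum queries published
    E = 2     # bound on the noise added to each answer (assumed)

    # Private database: one secret bit per individual.
    d = rng.integers(0, 2, size=n)

    # Random query sets S_1, ..., S_m, encoded as a 0/1 matrix A
    # whose row i is the indicator vector of S_i.
    A = rng.integers(0, 2, size=(m, n))

    # Published answers: a_i = q_{S_i}(D) + noise, with |noise| <= E.
    answers = A @ d + rng.integers(-E, E + 1, size=m)

    # LP decoding: find c in [0,1]^n with |(A c)_i - a_i| <= E for every
    # query, i.e. A c <= a + E and -A c <= -(a - E), then round to bits.
    A_ub = np.vstack([A, -A])
    b_ub = np.concatenate([answers + E, -(answers - E)])
    res = linprog(c=np.zeros(n), A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, 1)] * n, method="highs")

    reconstruction = np.round(res.x).astype(int)
    print("fraction of bits recovered:", np.mean(reconstruction == d))

With enough queries and small enough noise, the rounded LP solution agrees with the private database on most coordinates, which is exactly the failure of privacy the attack demonstrates; increasing the noise bound E toward the order of the square root of n makes the recovered fraction degrade.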

References

  1. Cynthia Dwork and Aaron Roth. The Algorithmic Foundations of Differential Privacy. Foundations and Trends in Theoretical Computer Science, vol. 9, no. 3–4, pp. 211–407, Aug. 2014. DOI:10.1561/0400000042
  2. Irit Dinur and Kobbi Nissim. 2003. Revealing information while preserving privacy. In Proceedings of the twenty-second ACM SIGMOD-SIGACT-SIGART symposium on Principles of database systems (PODS '03). ACM, New York, NY, USA, 202–210. DOI:10.1145/773153.773173
  3. "ACM PODS Alberto O. Mendelzon Test-of-Time Award". https://sigmod.org/pods/acm-pods-alberto-o-mendelzon-test-of-time-award/.