Privacy preservation without compromising data integrity
Thesis, posted on 23.02.2017, 00:45 by Sabrina, Tishna
In people-centric applications, participants voluntarily report data to service providers for community benefit. As most of these applications demand high-quality data, straightforward reporting of even seemingly benign data may pose significant privacy risks through inference. Retaining high data quality without compromising participants’ privacy is a challenging research problem, since these goals are inherently in conflict. Existing techniques attempt to protect user privacy by reducing data precision or injecting obfuscation, which ultimately degrades data quality.

This thesis introduces a novel plaintext data sharing framework that aims to deliver high-quality data at the receiving end, protect privacy at points vulnerable to adversaries, and safeguard against untrustworthy data manipulation. A novel subset-coding technique is developed to anonymize user reports, from which the original data can be retrieved through joint-decoding only if a sufficient number of reports is received. The proposed framework is applicable wherever many people observe, or express opinions about, individual instances.

Two widely known people-centric application scenarios are considered: participatory sensing and electronic voting. In participatory sensing, participants use data-capturing devices such as smartphones that often profile their whereabouts, interests, activities, and relationships, and hence intensify inferable privacy risks. To mitigate such risks, a number of anonymization and joint-decoding algorithms are proposed, considering both probabilistic and deterministic decision mechanisms to cater for different participation rates, e.g., commonly visited points of interest versus rarely visited ones. Comprehensive adversary models are investigated, and analytical privacy-risk models are presented along with risk-mitigation strategies. Verifiability of the received data is of little significance in participatory sensing.
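The subset-coding idea can be illustrated with a minimal sketch. Assume a small discrete value domain in which each participant hides the true observation inside a random subset of candidates, so any single report is ambiguous while the intersection of enough reports about the same instance recovers the value. The domain, subset size, and function names below are illustrative assumptions, not the thesis's actual algorithms.

```python
import random

DOMAIN = set(range(10))  # hypothetical value domain, e.g. noise levels 0-9
SUBSET_SIZE = 4          # each report hides the true value among 4 candidates

def subset_encode(true_value, rng):
    """Anonymize one observation: report a random subset of the domain
    that contains the true value, so any single report is ambiguous."""
    decoys = rng.sample(sorted(DOMAIN - {true_value}), SUBSET_SIZE - 1)
    return set(decoys) | {true_value}

def joint_decode(reports):
    """Intersect reports about the same instance; the true value is the
    only candidate surviving once sufficiently many reports arrive."""
    candidates = set(DOMAIN)
    for report in reports:
        candidates &= report
    return candidates

rng = random.Random(1)
reports = [subset_encode(7, rng) for _ in range(30)]
print(joint_decode(reports[:1]))  # still 4 candidates: privacy preserved
print(joint_decode(reports))      # decoys eliminated by joint decoding
```

With one report the decoder faces SUBSET_SIZE equally plausible values; each additional report eliminates decoys at random, so the true value emerges only at the receiving end once participation is sufficient.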
However, wide acceptance of electronic voting systems largely depends on guaranteeing vote-verifiability (the vote is cast as intended) and tally-verifiability (the vote is counted as cast) while thwarting any attempt to reveal the voter-vote association, so as to mitigate privacy, coercion, and vote-trading risks. The proposed subset-coding technique is successfully applied in this context to design an end-to-end verifiable electronic voting framework. Joint-decoding is shown to be robust enough not only to detect any vote-manipulation attempt by the voting machines but also to provide individual verifiability indirectly. Different possible threats are analysed and solutions are designed accordingly. Extensive performance analysis, including the computational complexity of key algorithms, is carried out with analytical models, wherever deemed possible, and with rigorous simulation experiments, to establish the applicability and efficacy of the proposed techniques in various realistic scenarios.
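The two verifiability notions can be made concrete with a toy sketch. This sketch uses a plain hash-commitment receipt, a generic stand-in rather than the thesis's subset-coding construction; the bulletin-board list and function names are assumptions for illustration only.

```python
import hashlib
import secrets

def commit(vote, nonce):
    """Binding commitment to a vote, publishable on a public bulletin board."""
    return hashlib.sha256(f"{vote}:{nonce}".encode()).hexdigest()

# The voting machine publishes a commitment and hands the voter a receipt.
vote, nonce = "candidate_A", secrets.token_hex(16)
receipt = commit(vote, nonce)
bulletin_board = [receipt]

# Cast-as-intended: the voter recomputes the commitment from vote and nonce;
# a machine that altered the vote would produce a mismatching receipt.
assert commit(vote, nonce) == receipt

# Counted-as-cast: anyone can verify the receipt appears on the public board
# feeding the tally, without the board revealing the voter-vote association.
assert receipt in bulletin_board
```

The design choice to publish only commitments lets any observer audit inclusion in the tally while the voter-vote link stays hidden, which is the property the thesis achieves instead via joint-decoding of subset-coded ballots.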