Keynote 1

Learning from the experience of publishing useful data

Nobuaki Hoshino
Faculty of Economics and Management, Institute of Human and Social Sciences, Kanazawa University

Abstract.

Official statistical agencies began to publish microdata of individuals in the 1970s in the hope that no sensitive information would be disclosed. This hope was soon understood to be virtually impossible to fulfill, partly because experience showed that strongly anonymized microdata could not satisfy scientific needs. Nowadays, anonymized microdata are mainly provided under a ban on de-anonymization.

Given this historical development of current practice, it seems hopeless to create useful microdata under the assumption of an adversary with almost perfect information. A contractual ban on de-anonymization tacitly specifies a model of an adversary with a limited ability to de-anonymize.

Major laws regard de-anonymization as re-identification. Hence we can restrict the goal of anonymization to the prevention of re-identification. However, most research is rather subjective in deciding the actual ability of an adversary to re-identify.

The present talk proposes to estimate the actual ability of an adversary from the fact that re-identification has gone unobserved except for a few incidents. This view shares the philosophy of data-driven knowledge, such as machine learning.

Keynote 2

Leakage Resilient Cryptography

Sebastian Faust
École Polytechnique Fédérale de Lausanne (EPFL)

Abstract.

Traditionally, the security analysis of cryptographic schemes assumes that computation is carried out on fully-trusted machines. This assumption is formalized by the so-called black-box model, which largely contributed to the success of cryptographic research over the last decades. In the black-box model, an adversary has access to the inputs and outputs of the algorithm, but the internal data that is computed by the implementation stays completely hidden. While most of today's cryptographic schemes offer security in the black-box model that is far beyond anything that can realistically be broken, real-world security breaches of cryptographic implementations are continuously reported. Typically, these attacks do not attempt to break the mathematical properties of the cryptographic algorithm, but instead move outside of the traditional black-box model and target weaknesses of the implementation. Important examples of such so-called side-channel attacks include adversaries that break implementations by, e.g., exploiting the device's power consumption, its running time or by inducing faults into the computation.

A large body of recent work on so-called "leakage resilient cryptography" attempts to close this fundamental gap between black-box security analysis and physical reality. To this end, leakage resilient cryptography weakens the traditional black-box assumption and considers adversaries that obtain a partial view of the internal computation carried out on cryptographic implementations. The final goal is to develop models that incorporate large classes of relevant side-channel attacks and to show that cryptographic implementations are provably secure within these models. While leakage resilient cryptography has produced fascinating feasibility results for cryptography when secrets are (partially) revealed to an attacker, experts have criticized security proofs in these models for saying little about the security of the actual implementation. In this talk, we will survey some of the important results in leakage resilient cryptography, discuss the main shortcomings of the current state of the art, and outline some of the major challenges in the area.