The 2nd PWS Paper Reading Session
What's new
The 3rd session has been scheduled
As a follow-up, we have planned a 3rd session. We hope you will join us.
Overview
Papers on privacy appear at international conferences across many fields, and the related technologies are numerous, so keeping up with privacy technology trends is far from easy. To help promote privacy research, we organized the "PWS Paper Reading Session," where participants briefly introduce privacy-related papers presented at top conferences. Following the 1st session, we will now hold the 2nd.
We are recruiting presenters and attendees (presenters especially); please reach out to the contact addresses below. Presenter applications close once capacity is reached.
Please feel free to choose any paper on privacy (secure computation, differential privacy, anonymization, federated learning, etc.) published between August 2020 and January 2021. Papers on trustworthiness in data utilization and AI (Safety, Fairness, Accountability, etc.), topics attracting as much attention as privacy, are also welcome.
For reference, we provide a list of papers compiled from NeurIPS 2020, CCS 2020, VLDB 2020, and KDD 2020. (Papers not on the list are also very welcome.)
Date and Venue
- March 24 (Wed), 2021, 15:00-18:00 (tentative)
  (An informal online social gathering will follow for those interested.)
- Online via Zoom (the URL will be sent separately to presenters and attendees, so please register)
- Free of charge
Presentation Application
If you would like to present, please send an email to the following address:
tsubasa.takahashi [at] linecorp.com
Please include:
- Name
- Affiliation (company, school, etc.)
- Email address
- Paper you wish to present
Since the goal is to follow the latest trends, we expect participants to read papers relatively broadly rather than deeply and introduce them to one another. You do not need to read a paper in depth or prepare polished slides; we want to keep the presenters' workload as light as possible.
Each paper is allotted roughly 10-15 minutes, including Q&A.
Attendance Registration
If you would like to attend, please enter your email address and other details in the registration form. We will send you the participation details (Zoom URL, etc.) afterward.
The information you provide will be used only to run this event. For the privacy policy, please see here.
Program
- 15:00-15:05 Opening
- 15:05-15:20 高橋翼, "Federated Evaluation and Tuning for On-device Personalization: System Design & Applications", https://arxiv.org/abs/2102.08503
- 15:20-15:35 リュウセンペイ, "Inverting Gradients - How easy is it to break privacy in federated learning?", NeurIPS20, https://arxiv.org/abs/2003.14053
- 15:35-15:50 菊池浩明, "R^2DP: A Universal and Automated Approach to Optimizing the Randomization Mechanisms of Differential Privacy for Utility Metrics with No Known Optimal Distributions", CCS20, https://arxiv.org/abs/2009.09451
- 15:50-16:00 Break
- 16:00-16:15 荒井ひろみ, "Privacy Norms for Smart Home Personal Assistants", CHI21
- 16:15-16:30 上野道彦, "Exploring Design and Governance Challenges in the Development of Privacy-Preserving Computation", CHI21, https://arxiv.org/abs/2101.08048
- 16:30-16:45 廣江彩乃, "Usage Patterns of Privacy-Enhancing Technologies", CCS20, https://arxiv.org/abs/2009.10278
- 16:45-17:00 Break
- 17:00-17:15 松本茉倫, "Glyph: Fast and Accurately Training Deep Neural Networks on Encrypted Data", NeurIPS20, https://arxiv.org/abs/1911.07101
- 17:15-17:30 渡辺知恵美, "ObliDB: Oblivious Query Processing for Secure Databases", VLDB20, https://arxiv.org/abs/1710.00458
- 17:30-17:45 竹之内隆夫, "Lessons and Challenges in Deploying (Heavy) MPC in Different Environments", RWC21
- Closing
Paper lists from representative international conferences
KDD2020
- Estimating Properties of Social Networks via Random Walk considering Private Nodes
- Re-identification Attack to Privacy-Preserving Data Analysis with Noisy Sample-Mean
- TIPRDC: Task-Independent Privacy-Respecting Data Crowdsourcing Framework for Deep Learning with Anonymized Intermediate Representations
- Privileged Features Distillation at Taobao Recommendations
- Faster Secure Data Mining via Distributed Homomorphic Encryption
- Algorithmic Decision Making with Conditional Fairness
- Evaluating Fairness using Permutation Tests
- InFoRM: Individual Fairness on Graph Mining
- List-wise Fairness Criterion for Point Processes
- Towards Fair Truth Discovery from Biased Crowdsourced Answers
- Attackability Characterization of Adversarial Evasion Attack on Discrete Data
- Interpretability is a Kind of Safety: An Interpreter-based Ensemble for Adversary Defense
- RayS: A Ray Searching Method for Hard-label Adversarial Attack
- Vulnerability vs. Reliability: Disentangled Adversarial Examples for Cross-Modal Learning
- INPREM: An Interpretable and Trustworthy Predictive Model for Healthcare
VLDB2020
- Collecting and Analyzing Data Jointly from Multiple Services under Local Differential Privacy
- Free Gap Information from the Differentially Private Sparse Vector and Noisy Max Mechanisms
- A workload-adaptive mechanism for linear queries under local differential privacy
- SAQE: Practical Privacy-Preserving Approximate Query Processing for Data Federations
- ObliDB: Oblivious Query Processing for Secure Databases
- Efficient Oblivious Database Joins
- Set-valued Data Publication with Local Privacy: Tight Error Bounds and Efficient Mechanisms
- Relational Data Synthesis using Generative Adversarial Networks: A Design Space Exploration
- Secure Multi-Party Functional Dependency Discovery
- TransNet: Training Privacy-Preserving Neural Network over Transformed Layer
- Privacy Preserving Vertical Federated Learning for Tree-based Models
- Efficient Confidentiality-Preserving Data Analytics over Symmetrically Encrypted Datasets
- Rank Aggregation Algorithms for Fair Consensus
- Fair Task Assignment in Spatial Crowdsourcing
- Sieve: A Middleware Approach to Scalable Access Control for Database Management Systems
- Understanding and Benchmarking the Impact of GDPR on Database Systems
- Operationalizing Individual Fairness with Pairwise Fair Representations
CCS2020
- Private Summation in the Multi-Message Shuffle Model
- R^2DP: A Universal and Automated Approach to Optimizing the Randomization Mechanisms of Differential Privacy for Utility Metrics with No Known Optimal Distributions
- Privaros: A Framework for Privacy-Compliant Delivery Drones
- Implementing the Exponential Mechanism with Base-2 Differential Privacy
- Usage Patterns of Privacy-Enhancing Technologies
- CheckDP: An Automated and Integrated Approach for Proving Differential Privacy or Finding Precise Counterexamples
- The Signal Private Group System and Anonymous Credentials Supporting Efficient Verifiable Encryption
- Dangerous Skills Got Certified: Measuring the Trustworthiness of Skill Certification in Voice Personal Assistant Platforms
- Threshold Password-Hardened Encryption Services
- CrypTFlow2: Practical 2-Party Secure Inference
NeurIPS2020
- Adversarially Robust Streaming Algorithms via Differential Privacy
- Permute-and-Flip: A new mechanism for differentially private selection
- Learning from Mixtures of Private and Public Populations
- Locally private non-asymptotic testing of discrete distributions is faster using interactive mechanisms
- Optimal Private Median Estimation under Minimal Distributional Assumptions
- Breaking the Communication-Privacy-Accuracy Trilemma
- Differentially Private Clustering: Tight Approximation Ratios
- Privacy Amplification via Random Check-Ins
- Towards practical differentially private causal graph discovery
- Differentially-Private Federated Linear Bandits
- Synthetic Data Generators -- Sequential and Private
- Faster Differentially Private Samplers via Rényi Divergence Analysis of Discretized Langevin MCMC
- A Scalable Approach for Privacy-Preserving Collaborative Machine Learning
- AutoPrivacy: Automated Layer-wise Parameter Selection for Secure Neural Network Inference
- Smoothed Analysis of Online and Differentially Private Learning
- Private Identity Testing for High-Dimensional Distributions
- Locally Differentially Private (Contextual) Bandits Learning
- GS-WGAN: A Gradient-Sanitized Approach for Learning Differentially Private Generators
- Understanding Gradient Clipping in Private SGD: A Geometric Perspective
- Private Learning of Halfspaces: Simplifying the Construction and Reducing the Sample Complexity
- Smoothly Bounding User Contributions in Differential Privacy
- Instance-optimality in differential privacy via approximate inverse sensitivity mechanisms
- CoinPress: Practical Private Mean and Covariance Estimation
- The Discrete Gaussian for Differential Privacy
- On the Equivalence between Online and Private Learnability beyond Binary Classification
- Inverting Gradients - How easy is it to break privacy in federated learning?
- CryptoNAS: Private Inference on a ReLU Budget
- The Flajolet-Martin Sketch Itself Preserves Differential Privacy: Private Counting with Minimal Space
- Improving Sparse Vector Technique with Renyi Differential Privacy
- A Computational Separation between Private Learning and Online Learning
- Learning discrete distributions: user vs item-level privacy
- Auditing Differentially Private Machine Learning: How Private is Private SGD?
- Fairness without Demographics through Adversarially Reweighted Learning
- Fairness with Overlapping Groups; a Probabilistic Perspective
- Robust Optimization for Fairness with Noisy Protected Groups
- Fair regression with Wasserstein barycenters
- Learning Certified Individually Fair Representations
- Fair Performance Metric Elicitation
- Metric-Free Individual Fairness in Online Learning
- Fairness constraints can help exact inference in structured prediction
- Probabilistic Fair Clustering
- Fairness in Streaming Submodular Maximization: Algorithms and Hardness
- Group-Fair Online Allocation in Continuous Time
- KFC: A Scalable Approximation Algorithm for k−center Fair Clustering
- A Fair Classifier Using Kernel Density Estimation
- Exploiting MMD and Sinkhorn Divergences for Fair and Transferable Representation Learning
- Fair Multiple Decision Making Through Soft Interventions
- Ensuring Fairness Beyond the Training Data
- How do fair decisions fare in long-term qualification?
- Can I Trust My Fairness Metric? Assessing Fairness with Unlabeled Data and Bayesian Inference
- Fair regression via plug-in estimator and recalibration with statistical guarantees
- Fair Hierarchical Clustering
- Falcon: Fast Spectral Inference on Encrypted Data
- Glyph: Fast and Accurately Training Deep Neural Networks on Encrypted Data
- Optimal Query Complexity of Secure Stochastic Convex Optimization
Contact
- 竹之内 (Takenouchi): takao-takenouchi [at] garage.co.jp
- 高橋 (Takahashi): tsubasa.takahashi [at] linecorp.com