The 3rd PWS Paper Reading Session
What's new
- 2021/04/27 (Tue): Created this page
- 2021/05/11 (Tue): Posted the reference paper list
- 2021/06/09 (Wed): Published the preliminary program
- 2021/06/15 (Tue): Published the final program
Overview
Papers on privacy are presented at international conferences across many different fields, and the related technologies are numerous, so keeping up with privacy research trends is not easy. To promote privacy research, we organized the "PWS Paper Reading Session," where participants briefly introduce privacy papers presented at top conferences. Following the first and second sessions, we would like to hold a third.
We are recruiting speakers and attendees (especially speakers); please contact us at the address listed below. Speaker registration closes once capacity is reached.
Please feel free to choose any paper published between January and May 2021, primarily on privacy topics (secure computation, differential privacy, anonymization, federated learning, etc.). Papers on data utilization and on trustworthiness in AI (Safety, Fairness, Accountability, etc.), which attract as much attention as privacy, are also welcome.
For reference, we have posted a list of papers selected from S&P, NDSS, ICLR, AIStats, and ICDE. (Papers not on the list are also very welcome.)
Date and venue
- Wednesday, June 16, 2021, 15:00-18:00 (tentative)
  * An optional online social gathering will follow for those interested
- Online via Zoom (the URL and other details will be sent separately to speakers and attendees, so please register)
- Free of charge
Speaker registration
If you would like to present, please send an email to the following address:
tsubasa.takahashi [at] linecorp.com
Please include:
- Name
- Affiliation (company, school, etc.)
- Email address
- The paper you would like to present
Since the goal is to follow the latest trends, we expect participants to read papers relatively broadly and shallowly and introduce them to one another. You therefore do not need to read a paper in depth or prepare polished slides; we want to keep the burden on speakers as small as possible.
Each paper is allotted about 10-15 minutes, including Q&A.
Attendee registration
If you would like to attend, please enter your email address and other details in the registration form. We will send you the access details (Zoom URL, etc.) later.
The information you provide will be used only to run this event. Please see our privacy policy here.
Program
- 15:00-15:05 Opening
- 15:05-15:25 Shun Takagi (Kyoto University) "P3GM: Private High-Dimensional
Data Release via Privacy Preserving Phased Generative Model"
(ICDE2021) https://arxiv.org/abs/2006.12101
- 15:25-15:45 Marin Matsumoto (Ochanomizu University) "Practical and Private
(Deep) Learning without Sampling or Shuffling" (ICML 2021)
https://arxiv.org/abs/2103.00039
- 15:45-16:05 Masahiro Fujita (Mitsubishi Electric) "Visual Interactive Privacy
Policy: The Better Choice?" (CHI2021)
https://dl.acm.org/doi/10.1145/3411764.3445465
- 16:05-16:30 Break
- 16:30-16:50 Michihiko Ueno (LINE) "Robustness Gym: Unifying the
NLP Evaluation Landscape" (NAACL2021)
https://arxiv.org/abs/2101.04840
- 16:50-17:10 Koki Wataoka (LINE) "Nice Try, Kiddo: Investigating
Ad Hominems in Dialogue Responses" (NAACL2021)
https://www.aclweb.org/anthology/2021.naacl-main.60.pdf
- Closing
Paper lists from representative international conferences
IEEE S&P 2021
- Detecting AI Trojans Using Meta Neural Analysis
- Adversarial Watermarking Transformer: Towards Tracing
Text Provenance with Data Hiding
- Machine Unlearning
- Defensive Technology Use by Political Activists During
the Sudanese Revolution
- DP-Sniper: Black-Box Discovery of Differential Privacy
Violations using Classifiers
- Is Private Learning Possible with Instance Encoding?
- DIANE: Identifying Fuzzing Triggers in Apps to Generate
Under-constrained Inputs for IoT Devices
- Data Privacy in Trigger-Action Systems
- Which Privacy and Security Attributes Most Impact
Consumers' Risk Perception and Willingness to Purchase IoT
Devices?
- Learning Differentially Private Mechanisms
- Adversary Instantiation: Lower bounds for differentially
private machine learning
- Manipulation Attacks in Local Differential Privacy
- SIRNN: A Math Library for Secure RNN Inference
- CryptGPU: Fast Privacy-Preserving Machine Learning on the
GPU
- Proof-of-Learning: Definitions and Practice
- Pegasus: Bridging Polynomial and Non-polynomial
Evaluations in Homomorphic Encryption
- Wolverine: Fast, Scalable, and Communication-Efficient
Zero-Knowledge Proofs for Boolean and Arithmetic
Circuits
- SoK: Fully Homomorphic Encryption Compilers
NDSS 2021
- GALA: Greedy ComputAtion for Linear Algebra in
Privacy-Preserved Neural Networks
- POSEIDON: Privacy-Preserving Federated Neural Network
Learning
- PrivacyFlash Pro: Automating Privacy Policy Generation
for Mobile Apps
- Understanding Worldwide Private Information Collection on
Android
- FLTrust: Byzantine-robust Federated Learning via Trust
Bootstrapping
- Manipulating the Byzantine: Optimizing Model Poisoning
Attacks and Defenses for Federated Learning
ICDE 2021
- Differentially Private Publication of Multi-Party
Sequential Data
- Secure Dynamic Skyline Queries Using Result
Materialization
- P3GM: Private High-Dimensional Data Release via Privacy
Preserving Phased Generative Model
- Feature Inference Attack on Model Predictions in Vertical
Federated Learning
- Privacy Preserving Strong Simulation Queries for Large
Graphs
- Aria: Tolerating Skewed Workloads in Secure In-memory
Key-Value Store
- Efficient Federated-Learning Model Debugging
- An Efficient Approach for Cross-Silo Federated Learning
to Rank
AIStats 2021
- Revisiting Model-Agnostic Private Learning: Faster Rates
and Active Learning
- Differentially Private Analysis on Graph Streams
- Differentially Private Online Submodular
Maximization
- On the Privacy Properties of GAN-generated Samples
- Robust and Private Learning of Halfspaces
- DP-MERF: Differentially Private Mean Embeddings with
Random Features for Practical Privacy-preserving Data
Generation
- Stability and Differential Privacy of Stochastic Gradient
Descent for Pairwise Learning with Non-Smooth Loss
- No-Regret Algorithms for Private Gaussian Process Bandit
Optimization
- Federated f-Differential Privacy
- Quantifying the Privacy Risks of Learning
High-Dimensional Graphical Models
- Optimal query complexity for private sequential learning
against eavesdropping
- Differentially Private Weighted Sampling
- Shuffled Model of Differential Privacy in Federated
Learning
- Private optimization without constraint violations
- Evading the Curse of Dimensionality in Unconstrained
Private GLMs
- Location Trace Privacy Under Conditional Priors
- Differentially Private Monotone Submodular Maximization
Under Matroid and Knapsack Constraints
- Tight Differential Privacy for Discrete-Valued Mechanisms
and for the Subsampled Gaussian Mechanism Using FFT
- On Data Efficiency of Meta-learning for Personalized
Federated Learning
- Free-rider Attacks on Model Aggregation in Federated
Learning
- Federated Learning with Compression: Unified Analysis and
Sharp Guarantees
- Convergence and Accuracy Trade-Offs in Federated Learning
and Meta-Learning
- Federated Multi-armed Bandits with Personalization
- Towards Flexible Device Participation in Federated
Learning
- Learning Individually Fair Classifier with Path-Specific
Causal-Effect Constraint
- Learning Smooth and Fair Representations
- Learning Fair Scoring Functions: Bipartite Ranking under
ROC-based Fairness Constraints
- Algorithms for Fairness in Sequential Decision
Making
- All of the Fairness for Edge Prediction with Optimal
Transport
- Fair for All: Best-effort Fairness Guarantees for
Classification
ICLR 2021
- Bypassing the Ambient Dimension: Private SGD with
Gradient Subspace Identification
- Information Laundering for Model Privacy
- Differentially Private Learning Needs Better Features (or
Much More Data)
- Do not Let Privacy Overbill Utility: Gradient Embedding
Perturbation for Private Learning
- Private Image Reconstruction from System Side Channels
Using Generative Models
- R-GAP: Recursive Gradient Attack on Privacy
- CaPC Learning: Confidential and Private Collaborative
Learning
- Private Post-GAN Boosting
- SenSeI: Sensitive Set Invariance for Enforcing Individual
Fairness
- Fair Mixup: Fairness via Interpolation
- FairBatch: Batch Selection for Model Fairness
- Statistical inference for individual fairness
- Individually Fair Rankings
- Individually Fair Gradient Boosting
- FairFil: Contrastive Neural Debiasing Method for
Pretrained Text Encoders
- On Dyadic Fairness: Exploring and Mitigating Bias in
Graph Connections
- Personalized Federated Learning with First Order Model
Optimization
- FedMix: Approximation of Mixup under Mean Augmented
Federated Learning
- Federated Semi-Supervised Learning with Inter-Client
Consistency & Disjoint Learning
- Achieving Linear Speedup with Partial Worker
Participation in Non-IID Federated Learning
- FedBE: Making Bayesian Model Ensemble Applicable to
Federated Learning
- Federated Learning via Posterior Averaging: A New
Perspective and Practical Algorithms
- HeteroFL: Computation and Communication Efficient
Federated Learning for Heterogeneous Clients
- FedBN: Federated Learning on Non-IID Features via Local
Batch Normalization
- SAFENet: A Secure, Accurate and Fast Neural Network
Inference
Contact
- Takenouchi: takao-takenouchi [at] garage.co.jp
- Takahashi: tsubasa.takahashi [at] linecorp.com