PrivateNLP 2022: Fourth Workshop on Privacy in Natural Language Processing
Seattle, WA, United States, May 6, 2022

Conference website: https://sites.google.com/view/privatenlp/
Submission deadline: April 15, 2022
Privacy-preserving data analysis has become essential in the age of Machine Learning (ML), where access to vast amounts of data can provide greater gains than finely tuned algorithms. A large proportion of user-contributed data comes from natural language, e.g., text transcriptions from voice assistants.
It is therefore important to curate NLP datasets while preserving the privacy of the users whose data is collected, and train ML models that only retain non-identifying user data.
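One widely used tool for releasing statistics about user data without identifying individuals is differential privacy. As a minimal illustrative sketch (not part of this call, and not a specific method endorsed by the workshop), the Laplace mechanism adds calibrated noise to a counting query, such as how many users uttered a given phrase:

```python
import math
import random

def laplace_noise(scale: float) -> float:
    # Sample from Laplace(0, scale) via the inverse-CDF method.
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(true_count: int, epsilon: float) -> float:
    # A counting query has sensitivity 1, so noise with scale
    # 1/epsilon yields epsilon-differential privacy.
    return true_count + laplace_noise(1.0 / epsilon)

# Hypothetical release: number of users who said a phrase, privatized.
noisy_count = dp_count(true_count=1342, epsilon=0.5)
```

Smaller values of `epsilon` give stronger privacy at the cost of noisier answers; the workshop topics below cover this and many other approaches.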
The workshop aims to bring together practitioners and researchers from academia and industry to discuss the challenges and approaches to designing, building, verifying, and testing privacy preserving systems in the context of Natural Language Processing.
Submission Guidelines
Two types of submissions are invited: full papers and short papers.
- Full papers should not exceed eight (8) pages of text, plus unlimited references. Final versions of full papers will be given one additional page of content (up to nine (9) pages) so that reviewers' comments can be taken into account.
- Short papers may consist of up to four (4) pages of content, plus unlimited references. Upon acceptance, short papers will be given up to five (5) content pages in the proceedings.
See the guidelines here:
- Long papers: https://aclrollingreview.org/cfp#long-papers
- Short papers: https://aclrollingreview.org/cfp#short-papers

Submissions should be made as a PDF file to: https://www.softconf.com/naacl2022/PrivateNLP2022/
List of Topics
- Privacy and Federated Learning in NLP
- Homomorphic encryption for language models
- Privacy preserving machine learning for language models
- Generating privacy preserving test sets
- Inference and identification attacks
- Generating differentially private derived data
- NLP, privacy and regulatory compliance
- Private Generative Adversarial Networks
- Privacy in Active Learning and Crowdsourcing
- User perceptions on privatized personal data
- Auditing provenance in language models
- Continual learning under privacy constraints
- NLP and summarization of privacy policies
- Ethical ramifications of AI/NLP in support of usable privacy
Committees
Organizing committee
- Oluwaseyi Feyisetan (Meta, USA)
- Sepideh Ghanavati (University of Maine, USA)
- Patricia Thaine (University of Toronto, Canada)
- Ivan Habernal (Technische Universität Darmstadt, Germany)
- Fatemehsadat Mireshghallah (University of California, San Diego, USA)
Invited Speakers
- Ilya Mironov (Meta)
- Franziska Boenisch (Fraunhofer AISEC)
- Esha Ghosh (Microsoft)
Venue
The workshop will be held in Seattle, WA, USA.
Contact
All questions about submissions should be emailed to privatenlp-naacl@googlegroups.com.