OHARS-2021: Second Workshop on Online Misinformation- and Harm-Aware Recommender Systems
Virtual, Amsterdam, Netherlands, September 25-October 2, 2021
Conference website: https://sites.google.com/isistan.unicen.edu.ar/ohars-recsys
Submission link: https://easychair.org/conferences/?conf=ohars2021
Second Workshop on Online Misinformation- and Harm-Aware Recommender Systems
In conjunction with ACM RecSys 2021
September 25, 2021
Amsterdam, Netherlands (Virtual)
Website: https://ohars-recsys2021.isistan.unicen.edu.ar
Aim and Scope
Social media platforms have become an integral part of most people's everyday life and activities, providing new forms of communication and interaction. These sites allow users to share information and opinions, and foster the formation of links and social relationships. One of the most valuable features of social platforms is their potential for disseminating information on a large scale. Recommender systems play an important role in this process as they leverage massive user-generated content to assist users in finding relevant information as well as establishing new social relationships.
As mediators of online information consumption, recommender systems are both affected by the proliferation of low-quality content in social media, which hinders their capacity to make accurate predictions, and, at the same time, become unintended means for the amplification and massive distribution of online harm. Some of these issues stem from the core concepts and assumptions recommender systems are based on. In their attempt to deliver relevant and engaging suggestions about content or users, recommendation algorithms are also prone to introducing biases.
Equipping recommender systems with misinformation- and harm-awareness mechanisms becomes essential not only to mitigate the negative effects of the diffusion of unwanted content, but also to increase the user-perceived quality of recommender systems. Novel strategies such as diversification of recommendations, bias mitigation, model-level disruption, and explainability and interpretation, among others, can help users make informed decisions in the context of online misinformation, hate speech, and other forms of online harm.
Submission Guidelines
We will consider five different submission types, all following the new single-column ACM proceedings format: regular papers (max 14 pages), short papers (max 8 pages), and extended abstracts (max 2 pages), excluding references. Authors of regular and short papers will also be asked to present a poster.
- Research papers (regular or short) should be clearly positioned with respect to the state of the art and state the contribution of the proposal in the domain of application, even if presenting preliminary results. Papers should describe the methodology in detail, experiments should be repeatable, and a comparison with existing approaches in the literature should be made where possible.
- Position papers (regular or short) should introduce novel points of view on the workshop topics or summarize the experience of a researcher or a group in the field.
- Practice and experience reports (short) should describe in detail real-world scenarios in which harm-aware recommender systems are applied. Novel and significant proposals will be considered for acceptance in this category even if they lack extensive experimental validation or a strong theoretical foundation.
- Dataset descriptions (short) should introduce new public data collections that could be used to explore or develop harm-aware recommender systems.
- Demo proposals (extended abstract or poster) should present the details of a prototype recommender system to be demonstrated to the workshop attendees.
Submissions will be accepted through EasyChair: https://easychair.org/conferences/?conf=ohars2021
Each submitted paper will be refereed by three members of the Program Committee, based on its novelty, technical quality, potential impact, insightfulness, depth, clarity, and reproducibility.
To ensure a strong workshop outcome, all accepted regular and short papers will be included in the workshop proceedings, provided that at least one of the authors attends the workshop to present the work. Proceedings will be published in a volume indexed in Scopus and DBLP.
List of Topics
The aim of this workshop is to bring together a community of researchers interested in tackling online harms and, at the same time, mitigating their impact on recommender systems. We will seek novel research contributions on misinformation- and harm-aware recommender systems. The main objective of the workshop is to further research in recommender systems that can circumvent the negative effects of online harms by promoting recommendation of safe content and users.
We solicit contributions in all topics related to misinformation- and harm-aware recommender systems, focusing on (but not limited to) the following list:
- Reducing misinformation effects (e.g., echo chambers, filter bubbles).
- Online harm dynamics and prevalence.
- Computational models for multi-modal and multi-lingual harm detection and countermeasures.
- User/content trustworthiness.
- Bias detection and mitigation in data/algorithms.
- Fairness, interpretability, and transparency in recommendations.
- Explainable models of recommendations.
- Data collection and processing.
- Design of specific evaluation metrics.
- The appropriateness of countermeasures for tackling online harms in recommender systems.
- Applications and case studies of misinformation- and harm-aware recommender systems.
- Mitigation strategies against coronavirus-fueled hate speech and COVID-related misinformation propagation.
- Ethical and social implications of monitoring, tackling, and moderating online harms.
- Online harm engagement, propagation, and attacks in recommender systems.
- Privacy-preserving recommender systems.
- Attack prevention in collaborative filtering recommender systems.
- Quantitative user studies exploring the effects of harmful recommendations.
We encourage works focused on mitigating online harms in domains beyond social media, such as collaborative filtering settings, e-commerce platforms, news media, video platforms (e.g., YouTube or Vimeo), or opinion-mining applications, among other possibilities. Works specifically analyzing any of the previous topics in the context of the COVID-19 crisis are also welcome, as are works based on social networks other than Twitter and Facebook, such as TikTok, Reddit, Snapchat, and Instagram.
Important Dates
- Abstract submission deadline: July 24th, 2021
- Paper submission deadline: July 29th, 2021
- Author notification: August 21st, 2021
- Camera-ready version deadline: September 4th, 2021
Committees
Program Committee (to be confirmed)
- Ludovico Boratto, Eurecat
- Ivan Cantador, Universidad Autónoma de Madrid
- Giovanni Luca Ciampaglia, University of South Florida
- Dagmar Gromann, University of Vienna
- Elena Kochkina, University of Warwick
- Ana Maguitman, Universidad Nacional del Sur, Argentina
- Lara Quijano Sanchez, Universidad Autónoma de Madrid
- Christoph Trattner, University of Bergen, Norway
- ...
Organizing Committee
- Daniela Godoy, ISISTAN Research Institute (CONICET/UNCPBA), Argentina
- Antonela Tommasel, ISISTAN Research Institute (CONICET/UNCPBA), Argentina
- Arkaitz Zubiaga, Queen Mary University of London, UK
Contact
All questions about submissions should be emailed to ohars2021 [AT] easychair [DOT] org