AI4EQ: Advancing Towards the SDGs: Artificial Intelligence for a Fair, Just and Equitable World |
Website | http://blogs.uned.es/workshopadvancingtowards/ |
Submission link | https://easychair.org/conferences/?conf=ai4eq |
Abstract registration deadline | May 4, 2020 |
Submission deadline | May 17, 2020 |
The United Nations (UN) Agenda for Sustainable Development, adopted by the UN General Assembly in 2015, commits all member states to make concerted efforts towards building an inclusive, sustainable, prosperous and resilient future for people and planet, and to reach its universally applicable goals by 2030. Artificial Intelligence (AI) has the potential to contribute to solving some of the world's most pressing problems - such as climate change, lack of basic services, poverty, exploitation and violations of human rights - and thereby to the achievement of the UN Sustainable Development Goals (SDGs), bringing positive socio-economic outcomes in both High Income Countries (HIC) and Low and Middle Income Countries (LMIC).
Since 2017, the UN has held the annual AI for Good Global Summit, aimed at accelerating progress towards the UN SDGs by connecting AI innovators with "problem owners", and at ensuring the trusted, safe and inclusive development of AI technologies and equitable access to their benefits.
Also noteworthy is the UN Global Pulse initiative, intended to foster the discovery, development and scaled adoption of big data innovation for sustainable development and humanitarian action. Notably, experience with projects such as those of Global Pulse has highlighted important ethical concerns, for example with both the collection and the use of data during humanitarian emergencies.
In the same vein, the recently released UN report of the Special Rapporteur on extreme poverty and human rights warns of the "risk of stumbling zombie-like into a digital welfare dystopia" in which "Big Tech has been a driver of growing inequality and has facilitated the creation of a vast digital underclass". The report provides many well-documented examples from different countries of how dehumanized intelligent technologies are creating barriers to a range of social rights for those lacking internet access and digital skills.
Initiatives such as the IEEE Global Initiative on Ethics of Autonomous/Intelligent Systems, the European Commission's High-Level Expert Group on AI with its "Ethics Guidelines for Trustworthy AI", and the Montréal Declaration for a Responsible Development of AI (2018) also highlight the increasing challenges posed by AI in the ethical, moral, legal, humanitarian and sociopolitical domains.
A wide view of ethics focuses on potentialities, not only on risk mitigation, and from such a view arises the ethical imperative to harness AI technologies for the benefit of humanity, improving quality of life for all rather than perpetuating systemic injustices. To this end, more R&D on the potential of AI to contribute to the SDGs is urgently needed: practical research that goes beyond cataloging risks and potentialities.
With this workshop we seek to encourage the academic research community to engage in inter- and multidisciplinary research, bringing together application-driven AI researchers with applied ethicists and experts in technology legislation and in technological innovation for development.
Firstly, there is a need to study the current panorama of AI applications in sectors crucial to the UN SDGs, to share the lessons learned in applying them, to identify strengths and weaknesses, and to document and disseminate the development and deployment of the most significant innovative applications. Attention should be drawn to the particularities of each LMIC (cultural, climatic, environmental, organizational, infrastructural, socio-economic, etc.) and to the particular impact AI-based technological innovation can have in each context. Secondly, progress in standards, research methodologies, and development methodologies and tools that guide the development of ethical AI is also essential. Ethical AI should respect fundamental human rights (dignity, freedoms, equality, solidarity, justice) and the particular values of the culture where it is implemented, taking into account the idiosyncrasies of each context. There is a need to bring order to the current overabundance of ethical codes, guidelines and frameworks, which often suffer from deficiencies such as lack of scientific rigor, subjectiveness, incoherence, superficiality and redundancy, and may do little more than generate confusion.
Although manuals of good business practices are necessary, in the academic field there is a need for independent and scientifically rigorous research with an empirical dimension, which is so far mostly lacking. Of course, methodologies and tools cannot replace legislation or manuals of ethics and good practices, but they should support their implementation. Academic research, private-sector self-regulation and legislation are necessary and complementary actions. It is important that research and development methodologies and tools comply with universal ethical principles so that they are applied equally, since legislation may be lax or unclear in certain contexts, a situation more likely to occur in LMICs and where the most vulnerable are concerned.
Regarding the risks, perhaps the most feared is job loss. Base erosion and profit shifting and the concentration of industries threaten to undermine countries' tax bases. Other threats from the digital revolution of AI include the theft of digital identities, invasion of privacy by governments or businesses, discrimination based on personal data, monopoly positions due to control of big data, challenges to deliberative decision-making processes, cyber warfare, hacking of election data and the manipulation of social media.
Submission Guidelines
All papers must be original and not simultaneously submitted to another journal or conference. We will invite contributions of different lengths, including:
- Full papers (less than 12 pages)
- Short papers (less than 4 pages)
- Position papers (less than 2 pages)
in all cases including references. All kinds of papers should be formatted according to the ECAI 2020 formatting style; details are available at the ECAI 2020 website. All submissions will be subject to peer review.
The submission Web page for AI4EQ is https://easychair.org/conferences/?conf=ai4eq
List of Topics
Submissions are solicited for the ECAI 2020 Workshop on ADVANCING TOWARDS THE SDGs: ARTIFICIAL INTELLIGENCE FOR A FAIR, JUST AND EQUITABLE WORLD (AI4EQ), for unpublished completed or ongoing work that focuses on either of the following topic areas or a combination of both:
SDG-oriented AI applications for a fair, just and equitable world
Real experience with AI applications (whether already deployed or not) that can make a significant contribution to achieving the UN SDGs, with an emphasis on reducing inequalities. This covers, for example, fields such as:
- Big data for development (with applications in agriculture, medical tele-diagnosis, etc.)
- Geographic information systems (with applications in public service planning, disaster prevention, emergency planning, disease monitoring, etc.)
- Control systems (with applications in naturalizing intelligent cities through energy and traffic control, management of urban agriculture, etc.)
- E-democracy and participatory democracy systems
- Welfare-oriented service systems
Proposals that include a reflection on strengths and weaknesses (ethical problems arising from use of the technology, possible acceptance problems in a specific context or culture, sustainability, the gender digital divide, etc.) are particularly welcome, especially if the argumentation is based on impact measurement using quantifiable metrics associated with compliance with the SDGs.
Reviews and analysis of the state of the art in relevant application areas are also welcome.
Methods and tools for SDG-oriented AI
Proposals of methodological and technical tools at all levels of the AI research and development processes (analysis, design, implementation, validation, deployment and evaluation), focused on guaranteeing the properties of ethical AI and ensuring compliance with regulations, laws and policies, particularly those focusing on protecting the most vulnerable and marginalized, and those specific to the LMIC context.
Examples of these properties are: explicability, accountability, data governance, design for all, non-discrimination, respect for human autonomy, respect for privacy, robustness, safety, transparency and traceability, broad-spectrum impact forecast/monitoring/measurement, etc. Some properties that are particularly relevant in the case of LMICs are: adaptation to the available resources (hardware, software, connectivity, etc.), impact on the receiving communities, suitability and sustainability, etc.
Some examples of research areas arising in the study of the aforementioned tools are the following:
- Impact measurement by design
- Equity-by-design
- Ethics & rule-of-law by design
- Privacy-by-design
- Safe, Trustworthy and Explainable AI (XAI)
- Standardization/harmonization
- Low-cost AI (e.g. mobile lightweight applications, FOSS solutions)
- The "Open AI" paradigm, where this refers not only to FOSS (Free / Open-Source Software), but also to applying FOSS principles to algorithms, scientific insights or other AI artifacts.
- Culture-aware techniques
- Community-centered technology development approaches
Committees
Program Committee
- Celia Fernández-Aller (Universidad Politécnica de Madrid, Spain)
- Maite Lopez-Sanchez (Universidad de Barcelona, Spain)
- Miguel Ángel Luengo (UN Global Pulse, USA)
- Ángeles Manjarrés (Universidad Nacional de Educación a Distancia, Spain)
- Rafael Miñano (Universidad Politécnica de Madrid, Spain)
- Jeremy Pitt (Imperial College London, UK)
- Cristina Puente-Águeda (Universidad Pontificia de Comillas de Madrid, Spain)
- Juan Antonio Rodríguez (Consejo Superior de Investigaciones Científicas, Spain)
- Manuel Sierra-Castañer (Universidad Politécnica de Madrid, Spain)
Organizing Committee
- Celia Fernández-Aller (Universidad Politécnica de Madrid, Spain)
- Ángeles Manjarrés Riesco (Universidad Nacional de Educación a Distancia, Spain)
- Cristina Puente-Águeda (Universidad Pontificia de Comillas de Madrid, Spain)
Publication Manager
- Jeremy Pitt, Editor-in-Chief of the IEEE Technology and Society Magazine
Invited Speakers
- Elizabeth Gibbons is currently a Senior Fellow at the FXB Center for Health and Human Rights in the Harvard T.H. Chan School of Public Health, and chair of the Sustainable Development Committee of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems.
Publication
Selected AI4EQ papers will be published in a special issue of the IEEE Technology and Society Magazine.
Venue
The workshop, associated with the ECAI 2020 international conference, will be held in Santiago de Compostela, Spain.
Contact
All questions about submissions should be emailed to:
- Ángeles Manjarrés Riesco: amanja@dia.uned.es
- Celia Fernández-Aller: mariacelia.fernandez@upm.es
- Cristina Puente-Águeda: cristina.puente@icai.comillas.edu