Artificially intelligent systems and robots are becoming ubiquitous in our lives, including automated vehicles, delivery robots, and AI assistants. However, as research and development in this field increasingly turns towards imbuing such systems with artificial awareness, we must ask ourselves: what are the ethical implications of doing so? How can challenges, benefits, and responsibilities be defined across stakeholders, from developers and policymakers to end-users? And how do we ensure that these technologies align with societal values while mitigating risks?
According to Ana Tanevska, Postdoctoral Researcher at Uppsala University in Sweden, “the integration of increasingly sophisticated AI systems into society presents novel ethical challenges, in terms of their interactions with the wider public but also policy making and monitoring.”
“Introducing terms such as awareness into this field could provide potential benefits such as enhanced opportunities for value alignment, self-monitoring capabilities, and more explainable systems for end-users. Yet, they also pose risks, including difficulties in evaluation, and the potential for miscommunication about the capacities and intelligence of such systems to the wider public,” highlights Mathijs Smakman, Professor at the University of Applied Sciences Utrecht, in the Netherlands.
These pressing topics formed the focus of the 2nd Workshop “Inside the Ethics of AI Awareness,” held at Uppsala University on November 11, 2024. Organised by the SymAware consortium alongside other EIC-funded projects in the “Awareness Inside” portfolio, the workshop addressed topics such as the governance of AI ethics, ethical dimensions of multi-agent systems, and the implications of designing value-aware and metacognitive AI systems.
The recently introduced EU AI Act underscores the importance of addressing these questions, placing ethical considerations at the centre of the design and governance of AI systems. Accordingly, the workshop featured keynote talks by Mihalis Kritikos (European Commission) and Andrea Galassi (University of Bologna). Kritikos discussed the gradual development of an EU ethics governance framework for AI and the EU's Trustworthy AI approach to the use and deployment of AI systems, including the AI Act. Galassi presented two case studies on the development of ethical and human-centric artificial intelligence: a prototype dialogue system designed to support immigrants in their asylum applications, and an analysis of fairness and diversity in speech datasets intended for mental health research.
Participants engaged in an interactive workshop on ethical considerations, exploring how these technologies can be responsibly integrated into society. Discussions focused on key stakeholders, such as policymakers, developers, and the public, as well as the potential risks and benefits associated with aware AI, including transparency, accountability, and human-AI collaboration.
“Workshops like this are where meaningful ethical frameworks take shape. By uniting experts across disciplines, they foster actionable dialogue that goes beyond theoretical debates. The emphasis on cross-fertilizing ethical discussions across European projects is especially crucial, as Europe emerges as a leader in addressing the alignment between AI and human values. This commitment not only sets a global standard but also tackles pressing real-world challenges with clarity and responsibility,” concludes Ophelia Deroy, Professor of Philosophy and Neuroscience at the Ludwig Maximilian University in Munich.
About the EIC Pathfinder Challenge “Awareness Inside”
Eight projects have been funded by the European Innovation Council (EIC) to develop technologies based on awareness principles, feeding into novel engineered complex systems that are more resilient, self-developing, and human-centric. The challenge treats awareness as a prerequisite for genuine, contextualised problem-solving and for adapting actions (and their consequences) to specific circumstances.
The projects in the “Awareness Inside” portfolio are funded by the European Union under Grant Agreements 101071191 (ASTOUND), 101071178 (CAVAA), 101070918 (EMERGE), 101070940 (MetaTool), 101071179 (SUSTAIN), 101070802 (SymAware), 101071147 (SYMBIOTIK), and 101070930 (VALAWAI). Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or European Innovation Council and SMEs Executive Agency (EISMEA). Neither the European Union nor the granting authority can be held responsible for them.
Learn more: https://awarenessinside.eu/