A consortium of four leading European universities, led by Tallinn University of Technology, aims to establish the Estonian Center for Safe and Trustworthy AI (ECSTAI) to lead the European discourse on AI safety. Amid the rapid changes in the AI landscape over the past year, including the recent adoption of the European Union's Artificial Intelligence Act, Estonia is taking a bold step towards leading the conversation on “what next?”.
Why Estonia?
Estonia's former Minister of Foreign Trade and IT and current MP, Andres Sutt, has emphasized Estonia's advantages in hosting an AI Safety Engineering Center with worldwide impact. “For more than a generation, we have been building a digital society. We have always been at the forefront and open to the new adventures that technology brings us. That has enabled us to build an ecosystem that is also open to the implementation of AI.” He pointed out that as society grows more dependent on ever more powerful technologies, increased investment in capacity and resilience against potential threats is needed, and that this can only be achieved by investing in research in the field. “AI safety engineering is one of the courses that science should pursue.”
Estonia, renowned for adopting digital government services at a national level, is ready for the next challenge: AI. Sutt, along with other government officials, is also an active member of the Advisory Board of AI and Robotics Estonia (AIRE), a competence center that supports companies, and society more broadly, in implementing artificial intelligence systems. His involvement exemplifies the government's commitment to cooperating in ensuring human-centric AI.
Pawel Sobocinski, Professor at Tallinn University of Technology, sees Estonia as uniquely positioned to drive the European conversation on AI safety and innovation. Echoing MP Andres Sutt's sentiment, Sobocinski points to Estonia's potential to be a hub for sandboxing initiatives and for nurturing entrepreneurial spirit in the AI sector. His emphasis on leveraging Europe's research strengths, particularly in areas such as verification and the foundations of probabilistic reasoning, underscores the strategic importance of focusing on domains where Europe already excels. “We need to combine European research strengths with leveraging frontier AI. For instance, AI is all about probability. This is where Europe leads and it is one area where research in ECSTAI can make a significant contribution.”
Sobocinski's call to action extends beyond research to the critical issue of talent retention and capacity building. Brain drain, in which talent leaves Europe for opportunities elsewhere, is a significant challenge. By focusing on education and training, Sobocinski envisions cultivating a new generation of experts in trustworthy AI, an approach that aims both to keep Europe at the forefront of AI innovation and to ensure that its advances are rooted in ethical and reliable practices. He sees knowledge and expertise as Europe's competitive advantage in the global AI landscape, and an interdisciplinary PhD school as the way to build them. By increasing its knowledge capacity in areas of strength, Europe can assert its position as a leader in the development and application of AI technologies that are not only advanced but also trustworthy and aligned with European values and standards.
Why correctness, security and ethical deployment?
The ECSTAI consortium's vision for tackling emerging problems in AI safety rests on three pillars: the correctness of artificial intelligence systems, their security, and the ethical considerations in their deployment. Through a unified, interdisciplinary approach to the questions falling under each pillar, the consortium believes it can establish a new research discipline: AI safety engineering.
“To me, trustworthiness in AI is an intersection of correctness, which involves measuring accuracy, risk and performance; security, which is all about the new challenges AI poses to cybersecurity; and ethics, which is all about new challenges to our notions of fairness, etc. Those three things together are crucial for this concept of trustworthiness,” explains Pawel Sobocinski.
Engagement in Brussels
The consortium recently organized a seminar in Brussels to discuss the courses Europe should take to establish leadership in artificial intelligence. The panel discussion took the conversation on AI safety to a more philosophical and existential level. Moderated by industry expert and former Estonian Minister of Foreign Trade and IT Kaimar Karu, the discussion was guided by questions such as: Who benefits from AI? How much government intervention do we actually want to see? How personalized a service do we actually need? What regulatory and risk-mitigation measures should the AI Act encompass?
Christoph Lütge, Director of the world-class Institute for Ethics in Artificial Intelligence at the Technical University of Munich, offered a counterpoint to these worrisome opening questions. Asked about the most interesting initiatives in the correctness, security and responsible deployment of AI, he argued that amid the many initiatives countering the negative effects of AI, we should not overlook those that focus on the opportunities AI can bring in these three strands of research. “What I believe should be mentioned are initiatives that look at the ethical opportunities of AI. For example, autonomous driving is a good thing to have if it reduces accidents,” said Lütge, highlighting the other side of the coin.
The panel's perspectives suggest a balanced approach for Europe's tech ecosystem: one that equally values innovation, regulation, and ethics. Investing in technology development, fostering a startup-friendly environment, and ensuring close collaboration between technologists and ethicists could help Europe not only embed its values into technology but also become a global leader in ethical tech innovation. “Europe is good in developing regulations, but less in developing technologies. We don't develop technologies ourselves in which our values can be embedded. We need to invest in developing the tech ourselves,” commented Bart Jacobs of Radboud University. Lütge encouraged supporting startups and fostering a culture of innovation that does not get bogged down by overly burdensome regulation. With around 70 technology-oriented companies founded each year and 11 unicorns established by its alumni and researchers, TUM already has a proven track record of nurturing a thriving technological ecosystem.
A point of agreement among the panelists was the importance of establishing societal trust in order to ensure that AI is truly human-centric and benefits society as a whole rather than a select elite. Referring to the successful uptake of digital governance by the Estonian population, Meelis Kull, Associate Professor in machine learning at the University of Tartu, argued that building a societal understanding of the impacts, dangers and benefits of AI is the basis for ensuring its effects benefit society. “Estonia has particularly been able to get people to trust the digitalization of the state. This is something that can also spread more widely in Europe.” Professor Kull is also Head of the Estonian Center of Excellence in AI (EXAI), which recently secured national government funding.
When asked why academia needs to get involved in this emerging market rather than leaving it to market forces, the panelists agreed that AI safety hubs like ECSTAI have a crucial role to play in developing best practices in a field as unpredictable as artificial intelligence. “What ECSTAI can do is show how to do things as well as possible.”