Working Groups (WGs) are semi-organised groups gathering people both within and outside the AI4EU project. They aim to create a common space for reflection on the Ethical, Legal, Socio-Economic and Cultural issues of AI (ELSEC AI), leveraging the contributions of experts from different fields and sectors. Participation is free and voluntary. For further details about the activities and the evolution of the WGs, see the Observatory article on the ethical and legal AI WG and the announcement of the AI & Culture WG. See also the AI4EU webpage on WGs.
Launched officially in November 2020 in conjunction with the AI4EU workshop "Trustworthy AI made in Europe", WGs address the following thematic areas:
- Law & AI: considerations about existing laws and regulations, how these apply to AI systems and the identification of possible gaps.
- Ethics & AI: how to design and develop AI systems in a way that respects fundamental rights and human values, and how to assess Trustworthy AI in practice.
- Society & AI: how European citizens view AI, how aware they are of its possible consequences, and their opinions about Trustworthy AI.
- Education & AI: what European institutions, in particular universities, are doing to train future engineers and computer scientists to address Trustworthy AI requirements, considerations of ELSEC issues in STEM disciplines.
- Culture & AI: the ethical and legal issues with respect to the use of socio-cultural data, how meaning can be obtained from these data.
WGs pursue an open and collaborative approach and pay particular attention to the following aspects:
- Multidisciplinary perspective. Participants from different backgrounds (e.g. fields of study and sectors) and European countries combine different viewpoints, methods of study and languages.
- Bottom-up approach. Proactive participation from the very beginning of WG activities, common objectives and flexible organization.
- Experiential learning. Knowledge and skills acquisition from concrete case studies, mutual exchange of ideas and conversations with stakeholders.
WGs are expected to produce a tangible outcome (report) and to present the results of their activities in a final workshop that will be held in September 2021.
Alessandro Fabris is a PhD student at the University of Padua, where he works to make algorithms more accountable and fair. His work focuses on understanding and mathematically formalizing fairness criteria relevant to specific contexts, including search engines and car insurance premiums.
Dr Atia Cortés (she/her) is a computer science engineer with an MSc and a PhD in Artificial Intelligence from the Universitat Politècnica de Catalunya. She is currently a post-doctoral researcher at the Social Link Analytics unit of the Barcelona Supercomputing Center, where she is also part of the Bioinfo4Women programme. For over a decade, she has participated in several European and nationally funded projects related to the design and deployment of AI solutions applied to healthcare. Her main research interests are the ethical and social impact of AI, the assessment of AI systems (in particular the identification of sex and gender biases in AI), and the promotion of social awareness and responsible AI practices.
Angeliki Dedopoulou is Public Policy Manager for AI & Fintech at Meta (formerly Facebook). Before joining Meta's EU Public Affairs team, she was a Senior Manager of EU Public Affairs at Huawei, responsible for the policy areas of Artificial Intelligence, Blockchain, Digital Skills and Green-related policy topics. She was also an adviser to the European Commission for over 5 years (through everis, an NTT Data company) at DG Employment, Social Affairs and Inclusion. Her focus during this period was the European Classification of Skills, Competences, Qualifications and Occupations (ESCO) and the Europass Digital Credential project. Ms Dedopoulou is a Member of the Board of the Hellenic Blockchain Hub. She studied Political Science and History in Greece, Sociology in France and European Governance in Luxembourg. She also regularly writes articles and has travelled across Europe delivering speeches to policymakers, governments and industry summits, on topics ranging from the digital labour market to Blockchain in education and employment.
Ana Chubinidze is the founder and CEO of Adalan AI, a consulting firm on AI governance and policy, and the founder and director of the non-profit organization AI Governance International. She is an invited founding editorial board member of Springer Nature’s AI and Ethics journal and a member of the European AI Alliance. She often speaks at AI forums and conferences internationally and contributes to the work of several AI-related associations.
Andrea Aler Tubella
Dr. Andrea Aler Tubella (PhD, Computer Science, female) is a Senior Research Engineer at Umeå University, focusing on the design of formalisms and systems and their application as tools for the responsible design and monitoring of intelligent systems. Her research expertise includes formal logic and proof theory, as well as the use of logical modelling to describe reasoning and behaviour and its applications in AI.
Bárbara Urban Gonzalez
Bárbara Urban Gonzalez (Castelló de la Plana, 1981) is a Spanish researcher whose work is oriented towards the relationship between robotics and human beings. She graduated in Social and Cultural Anthropology (UNED) and holds a Master's in Ethics and Democracy (UJI). She is currently finishing her doctoral thesis on Roboethics. She is a lecturer at the National Distance Education University and collaborates with the Jaume I University. Her publications and participation in congresses have sought to highlight the need to investigate the coexistence between humans and robots, especially in relation to transhumanism and the cyborg phenomenon.
Christoph Heitz is a professor at the School of Engineering, Zurich University of Applied Sciences, Switzerland. He has been working in the field of data-based decision making, developing approaches and algorithms that harvest data for improving business processes, customer interaction, and service co-creation. In recent years, he has been heavily engaged in developing new approaches for addressing the ethical challenges of commercial data-based value creation. He is one of the authors of the “Code of Ethics for Data-Based Value Creation”, developed in a joint effort of Swiss companies and universities to support companies in creating ethical data-based business. He also leads several research projects on algorithmic fairness (e.g. https://fair-ai.ch/).
Dario Garcia-Gasulla is a senior researcher at the Barcelona Supercomputing Center, where he leads research in the High Performance Artificial Intelligence group on topics such as deep neural representations and AI for medical imaging. He coordinates and teaches the Deep Learning course in the Master's in AI offered jointly by the UPC, UB and URV universities. Occasionally he contributes to fields such as the characterization of misinformation and transparent and accessible AI.
Evert F. Stamhuis
Evert F. Stamhuis (LLM, PhD) holds a chair for Law and Innovation at Erasmus School of Law since 2017 and is Senior Fellow of the Jean Monnet Centre of Excellence on Digital Governance. Previously he held a chair in criminal law and procedure at the Open University (NL). His research is on the interaction between law, governance and new technologies, with a special focus on the public domain, health care and regulated markets. As a researcher Stamhuis is affiliated to the International Centre for Financial Law & Governance, the Centre for Law and Economics of Cybersecurity and the Erasmus Initiative Dynamics of Inclusive Prosperity. Other current affiliations are the University of Aruba and the Court of Appeal of ‘s Hertogenbosch (NL).
Fabio Fossa (PhD, University of Pisa) is a researcher at the Department of Mechanical Engineering of the Politecnico di Milano. His main research areas are applied ethics, philosophy of technology, robot and AI ethics, and the philosophy of Hans Jonas. His current research deals with the philosophy of artificial agency and the ethics of autonomous driving. He is Editor-In-Chief of InCircolo – Rivista di filosofia e culture, a steering committee member of the META Research Group, and a founding member of the Zetesis Research Group.
Francesca Foffano is a researcher at the European Centre for Living Technology, Ca’ Foscari University of Venice, working on the AI4EU project. She holds a Master's in Human-Computer Interaction from the University of Trento and previously obtained her Bachelor's in Psychology from the University of Padua. During her studies, she collaborated with the CADIA research centre at Reykjavik University and with industry. Her research interests focus on users' understanding and perception of AI, its social and ethical influences, and the definition of more human-centric design approaches.
Joris Krijger works as an Ethics & AI specialist at the Dutch bank de Volksbank while also holding a PhD position on Ethics & AI at the Erasmus University Rotterdam. He has a background in Philosophy, Economic Psychology and Media Studies. During his studies Joris was awarded a Dutch national prize both for his high-tech startup Condi Food (Rabobank Wijffels Innovation Award 2014) and for his Philosophy thesis on technology, ethics, and the financial crisis of 2008 (Royal Holland Society of Sciences and Humanities, 2017). He presently works on bridging the gap between principles and practice in AI Ethics by studying the operationalization of ethical principles from an academic and practical perspective. Additionally, Joris holds positions including Advisory Board Member at the Frankfurt Big Data Lab, Subject Matter Expert for CertNexus’ ‘Certified Ethical Emerging Technologist’ and Founding Editorial Board Member of Springer Nature’s AI and Ethics journal.
Long Pham is the Community Manager of AI4EU, a €20M project funded by the European Union’s Horizon 2020 research and innovation programme. She manages regular communications with a community of 400+ members from the 80 project partners, 5000+ users on the AI4EU Platform, and nearly 10K followers on AI4EU social media channels. She supports dissemination activities and ecosystem development of European AI via collaborations with a series of European AI initiatives and winning projects. In her research, Long focuses on citizen engagement aspects of smart city programs, local policy development, and policy and regulation for technology adoption in the development of smart and sustainable cities.
Manuela Battaglini is a specialist in strategic digital marketing, a law graduate and an independent researcher studying the social impact of automated decision-making processes and personal profiling. She works on Digital Ethics (data ethics, security ethics, algorithm ethics and ethics in practice). She is also CEO of Transparent Internet, a consulting firm that helps organizations make their AI systems ethical, transparent and trustworthy. On account of her research activity, Manuela Battaglini was called upon by the Spanish Government, together with a group of governmentally appointed experts, to help define the Spanish Charter of Digital Rights, where she leads the ‘Ethical Considerations’ working group.
Dr. Ricardo Vinuesa is an Associate Professor at the Department of Engineering Mechanics, at KTH Royal Institute of Technology in Stockholm. He is also a Researcher at the AI Sustainability Center in Stockholm and he is Vice Director of the KTH Digitalization Platform. He received his PhD in Mechanical and Aerospace Engineering from the Illinois Institute of Technology in Chicago. His research combines numerical simulations and data-driven methods to understand and model complex wall-bounded turbulent flows, such as the boundary layers developing around wings, obstacles, or the flow through ducted geometries. Dr. Vinuesa's research is funded by the Swedish Research Council (VR) and the Swedish e-Science Research Centre (SeRC). He has also received the Göran Gustafsson Award for Young Researchers. Research Group Web: www.vinuesalab.com
PhD Researcher in Economics at Tallinn University of Technology, focusing on the impact of AI on the labor market. Also a Project Manager at the World Economic Forum's Global AI Council, working on a white paper putting forward positive visions for a future economy driven by AI. Previously conducted research on trustworthy AI for the European Commission.
Teresa Scantamburlo is a post-doc researcher at the European Centre for Living Technology, Ca’ Foscari University of Venice (Italy), and previously worked at the University of Bristol (UK). Her main research interests lie at the intersection of Computer Science and Philosophy and include the impact of Artificial Intelligence (AI) on human decision-making, the role of data and algorithms in social regulation, and the ethical assessment of AI systems. She is also interested in studying AI from the point of view of epistemology and the philosophy of science (topics of interest include the problem of induction, the problem-solving approach and the notion of progress).
Steven Umbrello currently serves as the Managing Director at the Institute for Ethics and Emerging Technologies. His primary research interests are value sensitive design (VSD) and its application to transformative technologies such as AI, nanotechnology, and Industry 4.0 technologies.
Xin Chen is Executive Director, European Lead on AI & Data Governance Policy & Standards, in the Industry Digitization and Corporate Strategy Department at Huawei Technologies. He joined Huawei in 2005 in the UK and has since held various leadership roles within Huawei’s Carrier Business Group and Enterprise Business Group. At Enterprise BG, he played a key role in building a significant Enterprise CPE business in the convergent communication sector with carrier partners and helped grow the strategic partnership and business with verticals in Europe. He recently joined Huawei’s Corporate Strategy Department, leading the European standards and policy related activities, including industry enablement on AI & Data and Health Care.
He has a number of industry engagements, including membership of the TechUK AI & Big Data Leadership Committee, the AI4EU Trustworthiness & Legal AI WG, and the Digital Europe AI & Data and eHealth WGs. Prior to joining Huawei, he worked at Lucent Bell Labs in the UK (2000) and Fujitsu Laboratories of Europe (2003). He holds a BSc in Communication Engineering from Beijing Jiaotong University and an MSc in Data Communications from The University of Sheffield in the UK.
Zahoor ul Islam
Currently a PhD student in the Responsible Artificial Intelligence group at Umeå University, Sweden. Zahoor received his MSc degree in Software Engineering and Management from the University of Gothenburg, Sweden, and has worked as a Software Engineer in multiple organizations. His research focuses on addressing and integrating ethical, legal and social values in the design and development life-cycle of AI systems, and on ensuring that the engineering of AI systems is carried out responsibly while complying with established Software Engineering practices, methodologies, and standards. To know more, visit https://www.umu.se/en/staff/zahoor-ul-islam/.