Reports

General reports

Country | Organisation | Title | Year | Brief description
UK | House of Commons | Algorithms in decision-making | 2018 | The report identifies the themes and challenges for the ‘Centre for Data Ethics & Innovation’, an advisory body launched in 2018 by the Government. Key issues include data sharing, bias, discrimination, transparency and accountability.
UK | Royal Society | Machine learning: the power and promise of computers that learn by example | 2017 | The report outlines the significant opportunities and challenges introduced by modern machine learning techniques. It makes a number of recommendations for the Government, such as: to promote open data standards; to improve education and training in machine learning methods at all education levels; to ensure that immigration and industrial strategy policies align with the needs of the UK AI development sector; and to facilitate public dialogues on the opportunities and challenges of machine learning.
UK | Ipsos MORI & Royal Society | Public Views of Machine Learning | 2017 | The report provides evidence about public perceptions of the potential benefits and risks of machine learning, based on 978 face-to-face interviews conducted in 2016 and on public dialogues. Perceived risks include human replacement, depersonalisation, restriction and harm; perceived benefits include time savings and better choices.
UK | Royal Society | Machine learning: the power and promise of computers that learn by example | 2017 | The report documents the Royal Society’s machine learning project, which aims to increase awareness of this technology and to outline its potential benefits and challenges. It also identifies areas of public concern that would need further investigation: interpretability, robustness, privacy, fairness, inference of causality, human-machine interaction, and security.
EU | Informatics Europe & ACM Europe Policy Committee | When Computers Decide: European Recommendations on Machine-Learned Automated Decision Making | 2018 | The report reviews the major implications of Automated Decision Making (ADM), with particular emphasis on technical, ethical, legal, economic, societal and educational aspects. Some of the recommendations put forward are: providing standards to ensure that ADM systems are fair; ensuring that ethics remain at the forefront of ADM development and deployment (e.g. a European agency for oversight); promoting value-sensitive design; clarifying legal responsibilities for ADM’s use and impacts; considering in depth the economic consequences of ADM; increasing public funding for ADM-related research (prioritising research in explainable ADM); and expanding public awareness of ADM systems.
EU | European Commission & EurAI | The European Artificial Intelligence Landscape | 2018 | The report collects the results of a workshop held in Brussels that reviewed the current state of AI in Europe. As well as considering some bottlenecks (e.g. bureaucracy and fragmentation), it makes several proposals, such as: establishing a European research centre for AI modelled on institutions such as CERN; investing in and creating a pan-European data infrastructure; and designing mechanisms to re-skill and up-skill the broader population in the use of AI tools.
World | The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems | Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems, First Edition | 2019 | The report is the result of a vast collaborative effort and guides stakeholders involved in the design and development of autonomous intelligent systems. Based on eight general ethical principles (human rights, well-being, data agency, effectiveness, transparency, accountability, awareness of misuse, competence), it provides concrete recommendations to address and mitigate ethical issues (well-being metrics, embedding norms into autonomous intelligent systems, value-based design methods).
World | Access Now | Mapping Regulatory Proposals for AI in Europe | 2018 | This report surveys the major regulatory initiatives in AI in the EU and among member states. The analysis is based on published strategy papers and states’ consultations with experts. A key contribution of the document is the comparison among regulatory strategies with respect to ten relevant principles: transparency, accountability, the right to privacy, freedom of conscience and expression, the right to equality and non-discrimination, due process, the right to data protection and user control, collective rights, economic rights and the future of work, and the laws of war.
EU | High Level Expert Group on AI (HLEG-AI) | Ethics Guidelines for Trustworthy AI | 2019 | The report introduces a framework for trustworthy AI based on fundamental rights (respect for human dignity; freedom of the individual; respect for democracy, justice and the rule of law; equality, non-discrimination and solidarity; citizens’ rights) and four ethical principles (respect for human autonomy, prevention of harm, fairness and explicability). These principles are then translated into seven key requirements for AI systems: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental well-being; and accountability. The guidelines are complemented by an assessment list that offers guidance for their practical implementation.
France | Commission for Information Technology and Liberties (CNIL) | How can humans keep the upper hand? The ethical matters raised by algorithms and artificial intelligence | 2017 | The document presents the results of a public debate organised by the French Data Protection Authority, which involved 60 partners. It identifies six ethical issues: threats to human autonomy and free will; discrimination and exclusion; algorithmic profiling; retention of personal data; the quality, quantity and relevance of training data; and hybridisation between humans and machines. The report also makes six practical policy recommendations, such as fostering the education of all players involved in the algorithmic chain, making algorithmic systems understandable, and setting up a national platform for auditing algorithms.
Belgium | Flemish Academy of Science | Artificiële intelligentie: Naar een vierde industriële revolutie? (Artificial intelligence: Towards a fourth industrial revolution?) | 2017 | This document results from the activity of a working group set up by the Class of Natural Sciences of the Royal Flemish Academy of Belgium for Science and the Arts to study the impact of AI. Its main purpose is to inform the public as objectively as possible and to propose conclusions and recommendations that help the concerned parties deal with AI, benefit adequately from its vast opportunities, and gain insight into the risks and what to do about them.
The Netherlands | Nederland ICT | Ethische Code Artificial Intelligence | 2019 | This document is a code of ethics developed by the Ethics Think Tank within Nederland ICT, a group of Dutch companies operating in the ICT sector. The code of conduct is in line with the EU ethics guidelines and will be reviewed annually. Each member company of Nederland ICT commits to eight guidelines, such as: being aware of the technical possibilities and limitations of AI; providing insight into the data used by an AI application; clarifying when a user is dealing with an AI system and the responsibility of each party; and ensuring that the behaviour of the application is actively monitored.
Sweden | Vinnova | Artificial intelligence in Swedish business and society: Analysis of development and potential | 2018 | This report was produced by Sweden’s Innovation Agency (Vinnova) on commission from the Swedish government. It maps the opportunities connected to the use of AI in Swedish industry, business and the public sector, and analyses the development of AI in Sweden with the aim of highlighting strengths and weaknesses.
World | Future of Humanity Institute (University of Oxford), Centre for the Study of Existential Risk (University of Cambridge), Center for a New American Security, Electronic Frontier Foundation, OpenAI | The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation | 2018 | The report was written by 26 experts from different institutions and builds upon a workshop held in Oxford in 2017. It surveys the threats of AI in three security domains (digital security, physical security and political security). It also makes high-level recommendations and sets priorities, such as learning from and with the cybersecurity community and promoting a culture of responsibility.
Germany | AlgorithmWatch & Bertelsmann Stiftung | Automating Society: Taking Stock of Automated Decision-Making in the EU | 2019 | This is an explorative study of the use of automated decision-making in Europe. For example, it considers applications for job profiling (Finland), allocating treatment for patients (Italy) and identifying vulnerable children (Denmark). It builds upon a network of experts (academics, journalists, lawyers…) who contribute reports on the national situations. This network is expected to grow in the coming years so as to include countries not yet covered.

Industry corner

Last update: 23/12/2022