Introduction

The AI industry is booming. Despite the deep economic crisis caused by the COVID-19 pandemic, digital technologies have seen remarkable success. When lockdowns began, more of us had to become digitally ‘enabled’ citizens, especially employees of businesses forced to move to working from home. Even people historically less comfortable with sharing data and engaging with digital content were pushed into the world of new technologies: sharing their personal information, registering on all types of social media and websites, and using video conferencing tools and other connected platforms.

More importantly, society has a growing interest in the many opportunities that AI and online data can offer, which we have only begun to tap into – for example: improving content moderation on social media; supporting clinical diagnosis in healthcare; and detecting fraud in financial services. Even those sectors that are relatively mature in their adoption of digital technology, such as financial services or retail, have yet to maximise the benefits of AI and data analytics. Also particularly relevant is the role played by the EU recovery fund, which requires that 20 per cent of its funds be invested in digitalisation, alongside other instruments.1

This accelerated digital adoption is not without its problems. Three barriers to progress require particular attention: the poor quality and quantity of data; a lack of coordinated policy and practice across public and private sectors; and transparency around AI and digital data use (CDEI, 2020: 4). The AI and data industry has developed so quickly, and so globally, that there is no accepted regulatory environment for good practice beyond data protection laws such as the EU’s General Data Protection Regulation (GDPR), which are only a partial solution. The big data giants – Facebook, Google, Amazon and so on – have been left to set the tone for the whole industry and strongly advocate for self-regulation. Given the pressure and the capital mobilised by these giants, it is no surprise that so far there has been little progress towards national guidelines and regulations for the industry.

The growth of AI and data is characterised by an interesting dilemma. Governments want better self-regulation from industry, but industry says it needs governments to specify what to regulate. In many cases, governments are constrained by two sometimes opposing ideological positions. The first is to promote a free market in which the giants of the data industry promise self-regulation, on the grounds that they are best placed to recommend fair regulatory frameworks that respect privacy and human rights. The second is that governments should work together to develop the expertise to design, and encourage implementation of, national or supranational rules. Despite the rhetoric of political leaders, no government wants to forge its own path, knowing that the big tech companies can simply shift their business elsewhere – as demonstrated by Facebook’s and Google’s threats to withdraw services from Australia in response to attempts to regulate the social media space there (Scroxton, 2021).

Indeed, lawmakers and regulators have still not arrived at even a broad consensus on what ‘AI’ itself is, a clear prerequisite for developing a common standard to enable its governance. Some definitions, for example, are tailored so narrowly that they apply only to sophisticated uses of machine learning, which are relatively new to the commercial world; other definitions (such as the one in the recent EU proposal) appear to cover nearly all software systems involved in decision-making, including systems that have been in place for decades. Diverging definitions of AI are simply one among many signs that we are still in the early stages of global efforts to regulate it.

From an ethical perspective, regulation is desperately needed to protect individuals, groups and communities. Smaller companies are slowly starting to change their ways when it comes to developing algorithms with social impact, with many beginning to view consumer trust as a competitive advantage. Meanwhile, the biggest players see their algorithms as the essence of their competitive edge and aggressively protect their intellectual property – a stance guaranteed to reduce transparency.

Recent research suggests the public is open to greater use of data: 72 per cent of survey respondents supported the use of data-driven technology during the pandemic, but they expressed ongoing concerns over its governance (CDEI, 2021).

This chapter considers the debate around regulation and governance of the AI and data industry. The aim is to offer a snapshot of the current situation, presenting the different positions that dominate the discussion at national and international levels. Possible pathways for future developments will be suggested.

The chapter arises out of the work of a Coordination and Support Action funded by the European Commission (Grant No 788352), the PRO-RES project,2 which aims to promote ethics and integrity in non-medical research. PRO-RES has designed a guidance framework for the delivery of responsible research and innovation. The partners of the project have strongly advised that promoting ethics in research does not mean producing rigid, prescriptive sets of rules; rather, the project aims to provide a clear backdrop of values and principles, as well as a toolbox and a reference library, to inform anyone engaging with scientific literature and evidence-based studies. The outcome of the project is thus a framework that adapts to changes over time, similar to what the European Commission (EC) proposal is attempting for the regulation of AI.

The case of the EU proposal: transparency, ethics and responsibility

On 21 April 2021, the European Commission introduced a proposal for legislation to govern the use of AI, acting on its aim to draw up rules for the technology sector over the next five years and on its legacy as the world’s leading regulator of digital privacy. At the heart of the issue is the will to balance the need for rules with the desire to boost innovation, allowing the old continent to assert its digital sovereignty. Opinions are divided on where the needle should rest, and the publication of the Commission’s draft proposal will not be the end of the discussion. But how will such rules fit with broader plans to build European tech platforms that can compete globally with other regions? How will new requirements on algorithmic transparency come across to the general public? And what kind of implementation effort will this require from start-ups, mid-size companies and big tech? Another set of questions concerns the role of a single European platform for Member States’ national data. From health registries to education data, the COVID-19 crisis has accelerated the demand for storing large amounts of data for national epidemiological purposes. Countries in Europe, even the most advanced, are ill-prepared to respond to such challenges and distrust the ability of European institutions to host and accommodate their needs, particularly because a key European disadvantage lies in the lack of significant European digital corporations with global influence (Shapiro, 2020).

Prior to the proposal’s release, on 19 February 2020, the EC published a White Paper on AI – ‘A European Approach to Excellence and Trust’ (European Commission, 2020). The White Paper sets out policy options on how to achieve the twin objectives of promoting the uptake of AI and of addressing the risks associated with certain uses of such technology. The proposal published in April 2021 aims to implement the second objective, the development of an ecosystem of trust, by proposing a legal framework that helps to ensure AI will be trustworthy. Furthermore, the EC proposal delivers on the political commitment of President von der Leyen, who announced in her political guidelines for the 2019–24 Commission, ‘A Union that Strives for More’ (von der Leyen, 2019), that the EC would put forward legislation for a coordinated European approach to the human and ethical implications of AI. The core of the EU AI recommendations can be split into three parts: AI systems should be lawful, ethical and robust. Lawful AI applications are those that respect common standards. Ethical AI applications should respect agreed rules based on guiding principles including: a human-centric and human-made AI; safety, transparency and accountability; safeguards against bias and discrimination; the right to redress; social and environmental responsibility; and respect for privacy and data protection.

High-risk AI technologies, such as those with self-learning capacities, should be designed to allow for human oversight at any time. If a functionality would result in a serious breach of ethical principles or could be dangerous, the self-learning capacities should be disabled and full human control should be restored.3
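By way of illustration only, the following minimal sketch (in Python, with entirely hypothetical class and function names) shows one way a self-learning component could be gated behind a human-controlled override of this kind; it is a simplified pattern, not a prescribed implementation:

```python
class OversightSwitch:
    """Human-controlled override for a self-learning component (illustrative)."""

    def __init__(self):
        self.learning_enabled = True

    def trip(self, reason):
        """Disable self-learning and hand control back to a human operator."""
        self.learning_enabled = False
        print(f"Self-learning disabled ({reason}); awaiting human review.")


class SelfLearningModel:
    """Toy model whose online updates are gated by the oversight switch."""

    def __init__(self, switch):
        self.switch = switch
        self.training_data = []

    def update(self, observation):
        # Updates apply only while the human-controlled switch permits
        # learning; once tripped, the model is frozen.
        if self.switch.learning_enabled:
            self.training_data.append(observation)


switch = OversightSwitch()
model = SelfLearningModel(switch)
model.update(0.7)                                    # learning proceeds
switch.trip("potential breach of ethical principles")
model.update(0.9)                                    # ignored: model frozen
```

The design point is simply that the capacity to learn is separated from the model itself and placed under human control, so that restoring ‘full human control’ is a single, auditable action.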

Robust AI applications take both a technical and a social-environment perspective into consideration with regard to system behaviour. More importantly, a trustworthy environment for European companies means a stronger position for the EU market, one in which EU institutions can overcome Member State scepticism and Europe can become a beacon for trusted technology.

To implement these three core parts, the EU Trustworthy AI recommendations list seven requirements for an AI system. In other words, the proposal calls for an AI industry based on an ‘ethical by design’ approach.4 The proposal’s requirements apply to all those involved in planning, developing and managing AI systems – a long list that includes developers, data scientists, project managers, line-of-business owners and even the users of the applications. The core requirements are:

  • Focus on human agency and oversight: AI systems need to support human objectives, enable humans to flourish, uphold human agency and fundamental rights, and further the overall goals of a healthy human society.

  • Technical robustness and safety: AI systems should ‘do no harm’ and even predict and prevent harm from occurring. They must be developed to perform reliably and to have safe failover mechanisms – in other words, a backup operational mode that automatically switches to a standby database, server or network if the primary system fails (see the sketch after this list) – that minimise intentional as well as unintentional harm and prevent damage to people or systems.

  • Privacy and data governance: AI systems should maintain people’s data privacy as well as the privacy of the models and supporting systems.

  • Transparency: AI systems’ developers and owners should be able to explain their decision-making as well as provide visibility into all elements of the system.

  • Diversity, non-discrimination and fairness: as part of the focus on human agency and rights, AI systems must support society’s goals of inclusion and diversity, minimise aspects of bias and treat humans with equity.

  • Societal and environmental well-being: in general, AI applications should not cause societal or environmental unrest, make people feel like they are losing control of their lives or jobs, or work to destabilise the world in one manner or another.

  • Accountability: ultimately, a human needs to be in charge. The systems might work in an autonomous fashion, but humans should be the supervisors of the machine. There needs to be an established path of responsibility and accountability for the behaviour and operation of the AI system throughout its lifecycle.
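To make the failover requirement under ‘technical robustness and safety’ concrete, here is a minimal sketch, in Python and with hypothetical handler names, of a backup operational mode that takes over automatically when the primary system fails; it is illustrative only:

```python
from typing import Callable

def with_failover(primary: Callable[[str], str],
                  standby: Callable[[str], str]) -> Callable[[str], str]:
    """Wrap a primary handler so that a standby takes over on failure."""
    def handler(request: str) -> str:
        try:
            return primary(request)
        except Exception as err:
            # Automatic switch to the backup operational mode; the
            # failure is surfaced for human review rather than hidden.
            print(f"Primary failed ({err!r}); switching to standby.")
            return standby(request)
    return handler

def primary_scorer(request: str) -> str:
    # Stands in for the main AI service; here it always fails.
    raise RuntimeError("model service unavailable")

def standby_scorer(request: str) -> str:
    # A conservative fallback that cannot cause the same harm.
    return f"safe default decision for {request!r}"

score = with_failover(primary_scorer, standby_scorer)
print(score("loan-application-42"))
```

The switch to standby is automatic, but the failure itself is logged so that a human supervisor can investigate, in line with the accountability requirement above.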

Therefore, the April 2021 proposal sets harmonised principles for the development, placement on the market and use of AI systems in the EU following a proportionate risk-based approach. Such an approach proposes a common definition of AI that is intended to endure into the future. Certain particularly harmful AI practices are prohibited as contravening EU values, while specific restrictions and safeguards are proposed in relation to certain uses of remote biometric identification systems for the purpose of law enforcement. The proposal lays down a solid risk methodology to define ‘high-risk’ AI systems that pose significant risks to the health and safety or fundamental rights of persons.

A closer analysis of the EC proposal

The EU has identified two types of system that require regulation: those deemed to pose an unacceptable risk and those it believes present a high risk. AI systems considered a clear threat to the safety, livelihoods and rights of people will be banned. These include AI systems or applications that manipulate human behaviour to circumvent users’ free will and systems that enable ‘social scoring’ by governments.

The EU proposal lists eight applications of AI deemed to be high risk. Broadly speaking, these cover critical infrastructure, systems for managing crime and the judicial process, and any system whose decision-making may have a negative impact on an EU citizen’s life, health or livelihood. They include AI used to determine access to education or training, worker management, credit scoring, the prioritisation of access to private and public services, and border control.

More importantly, at the core of the proposal remain the key principles of transparency, ethics and responsible AI. Starting with transparency, the proposal states that humans need to have visibility into how the AI comes to its decisions as well as what data it uses. Without visibility, it is impossible to understand and dissect the reasons behind AI decisions if something goes wrong. Transparency gives organisations the opportunity to improve their systems by giving visibility into how they fail or where they make mistakes. Transparency is more than just an additional feature; it is necessary for overall system accountability.

Transparency is not sufficient to address the issue of ethics. Even if we know how the system works, it is important to know whether the actions that follow from the application of AI are ethical. Companies are making use of algorithmic decision-making that has been shown to be prone to bias. For example, bias has been identified in the use of AI for recruitment purposes (Köchling and Wehner, 2020) and in the justice system (Noriega, 2020; Zajko, 2021). If oversight is insufficient, these biases can become entrenched and magnified in the systems, just as they often are in humans. Applications such as facial recognition have run into challenges with accuracy and with the tendency of organisations to put too much weight on what is a probabilistic match. The question here is not only about the system’s functionality or transparency, but about the context in which the AI is being used.
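As a simple illustration of what auditing such a system for bias can involve, the following sketch, in Python and using made-up records and group labels, compares selection rates across two groups (a basic demographic-parity check); real audits would use richer data and several fairness metrics:

```python
from collections import defaultdict

# Hypothetical audit records: (group, automated decision), where 1 means
# the system selected the person (for example, shortlisted a CV).
decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
]

totals = defaultdict(int)
positives = defaultdict(int)
for group, decision in decisions:
    totals[group] += 1
    positives[group] += decision

# Selection rate per group: the share of positive automated decisions.
rates = {group: positives[group] / totals[group] for group in totals}
print("Selection rates:", rates)

# A large gap between groups is a red flag that the system may be
# entrenching bias and warrants human investigation.
gap = max(rates.values()) - min(rates.values())
print(f"Demographic-parity gap: {gap:.2f}")
```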

Related to the issue of ethics is the concept of accountability in AI. Even if the systems are transparent and they are operated ethically, it is important for organisations to ensure that any outcomes are responsibly handled. If these systems hold important decisions in the balance, then monitoring by employees is key. While these systems might appear ethical at face value, they need aspects of responsibility to make them trustworthy.

This latest proposal complements existing European Union law on non-discrimination with specific requirements that aim to minimise the risk of algorithmic discrimination, in particular in relation to the design and the quality of data sets used for the development of AI systems, combined with obligations for testing, risk management, documentation and human oversight throughout the AI systems’ lifecycle.

From the AI industry perspective, the recommendations translate into some key features:

  • Maintain data privacy and security. Look across the AI system lifecycle and make sure that elements that interact with data, metadata and models are secured and maintain data privacy as required.

  • Reduce the bias of data sets to train AI models. Examine training data sets for sources of potential bias and make sure that communities are represented in a fair and equitable way.

  • Provide transparency into AI and data usage. Organisations should let AI system users know how their data is being used to train or power AI systems and provide visibility into aspects of data selection, usage and even the business model that the AI system supports. To the extent that the AI system might be invisible to the user, responsible AI usage suggests you should let your users know they are interacting with an AI-based system.

  • Keep the human in the loop. Even when AI systems are operating in an autonomous fashion, there should always be a human monitoring the system performance. There should be an appointed human system owner or group of humans who are responsible. Users should also know who to reach out to when the AI systems are exhibiting problematic behaviours.

  • Limit the impact of AI systems on critical decision-making. If the AI system is being used for critical life-or-death or other high-impact decisions, there should always be an identified failover process or human oversight to make sure that no harm is done (a simple escalation rule of this kind is sketched below).
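A minimal sketch of such an escalation rule, in Python and with a purely illustrative confidence threshold, might look like this:

```python
CONFIDENCE_THRESHOLD = 0.90  # illustrative cut-off, set by policy

def decide(case_id: str, model_score: float, high_impact: bool) -> str:
    """Route a case: automate only low-impact, high-confidence decisions;
    escalate everything else to a human reviewer."""
    if high_impact or model_score < CONFIDENCE_THRESHOLD:
        # High-impact or uncertain cases always receive human oversight.
        return f"case {case_id}: escalated to human reviewer"
    return f"case {case_id}: automated decision applied"

print(decide("A-1", model_score=0.97, high_impact=False))  # automated
print(decide("A-2", model_score=0.97, high_impact=True))   # escalated
print(decide("A-3", model_score=0.60, high_impact=False))  # escalated
```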

Applying these key points gives users more confidence in the AI system and allows the AI to deliver the expected value without any fear of irresponsible behaviour or outcomes.

“Companies should remember that regulation also helps them because it creates a level playing field, where you know that your competitor is bound by the same rules as you”, comments Catelijne Muller, a Dutch lawyer and a member of the EU High-Level Expert Group on AI (Sapra, 2021).5

The counterfactual argument

The approach in the UK since its post-Brexit deal is still a little unclear, but the country seems to be moving fast towards a very different position from its neighbours in the EU. The main aim of the current UK government is to demonstrate its ability to set a regulatory environment free from the influence of the EU in many economic areas. As a result, its position has begun to diverge clearly from the EU proposal and from much existing AI regulation.

The Taskforce on Innovation, Growth and Regulatory Reform (TIGRR), set up by Prime Minister Boris Johnson and consisting of three pro-Brexit Conservative MPs and former ministers, has called for key protections to be cut from the UK’s implementation of GDPR as it relates to automated decision-making (Duncan Smith et al, 2021). TIGRR recommends scrapping Article 22 of GDPR, which concerns ‘the right not to be subject to a decision based solely on automated processing, including profiling’. Article 22 has been seen as establishing a ‘right to explanation’ for data subjects who have had decisions made about them in an automated fashion (Iphofen and Kritikos, 2021). Acknowledging the potential for controversy, the taskforce report says: ‘If removing Article 22 altogether is deemed too radical, GDPR should at a minimum be reformed to permit automated decision-making and remove human review of algorithmic decisions’ (Duncan Smith et al, 2021: 53). Clearly this brings the UK into opposition to the EU proposal which, as mentioned before, seeks to ensure human supervision of critical life-or-death or high-impact automated decisions. The taskforce makes some clear claims: its authors believe that loosening the burden of regulation is necessary to promote innovation to the benefit of the UK AI sector.

Welcoming the recommendations of the taskforce, Johnson wrote that it is ‘obvious that the UK’s innovators and entrepreneurs can lead the world in the economy of the future … this can only happen if we clear a path through the thicket of burdensome and restrictive regulation’ (Skelton, 2021). Such a move would be controversial within the UK too. Some MPs have already raised concerns that moving away from AI regulations like those in the EU proposal could in fact damage the acceptance of UK AI products in the European market, and with it the sector’s competitive edge across the continent. Furthermore, as Computer Weekly reported, trade unions have already objected to the proposal to ditch Article 22. “Scrapping Article 22 could be the green light to the expansion of automated processing, profiling and transfer of personal data into private hands. We need data laws fit for the challenges of the digital economy, not a race to the bottom on standards”, said Andrew Pakes, director of communications and research at Prospect Union (Skelton, 2021).

Changing UK AI regulations could also bring into question the EU’s recent agreement to offer data adequacy to the UK, especially considering the EU’s inclusion of a review process over wider fears that the UK may dilute the protections inherent in GDPR. The question is still open, as are many other aspects of this difficult post-Brexit deal.

Some experts have examined the position of the US in this area. In a report published in January 2021, US expert Alex Engler claimed that: ‘This year is poised to be a highly impactful period for the governance of artificial intelligence (AI)’ (Engler, 2021). In his analysis, President Joe Biden is already capitalising on the hundreds of millions of dollars in AI research funding approved by the Trump administration, and federal agencies are already working to comply with executive guidance on how to use and regulate AI. A new National AI Initiative Office has been set up and will coordinate all AI initiatives in synergy with Congress.

Two recent publications from the US-based Center for Strategic and International Studies, one on the EU Digital Services Act and Digital Markets Act (Broadbent, 2020) and another on artificial intelligence (Broadbent, 2021), suggest that the administration centre transatlantic discussions on the high-tech regulatory matters crucial to the competitiveness of US tech companies in Europe, matters also relevant to strategic competition with China. It is in fact in the interest of both the US and Europe to hold the line against China, as was often repeated at the 2021 AI Summit organised by Politico. China seeks to export its intrusive model of data governance and AI regulation, a model anchored in state control of all information and communication, draconian surveillance, data localisation, and other protectionist and autocratic practices. To succeed, Europe and the US should agree on a basic framework of top-line, democratic regulatory principles for AI that can be promoted with trading partners in Asia-Pacific, where China is proselytising its model as an element of the Belt and Road Initiative.6

In truth, the US is balancing multiple priorities. It is committed to ensuring the technology can be built on democratic ideals. Congress has already proposed legislation, such as the Algorithmic Accountability Act, that is close to the EU regulations. There are also initiatives like the Joint Artificial Intelligence Center and the National Security Commission on Artificial Intelligence that have stated that AI is needed mainly in the interest of security – an arguable view from the perspective of European human rights advocates and lawyers. At the same time, US institutions see AI as an economic catalyst and, together with other governments, believe that the pursuit of ethical, fair and unbiased AI must not stifle innovation. For some companies in the US, the European path suggests a framework that is still too prescriptive, and some of its requirements may hinder small businesses and start-ups. In fact, while Europe is moving quickly to craft concrete proposals for EU-wide regulation of data, digital services and AI, the US has followed a slower and more fragmented approach in which the only laws putting guardrails on AI are at the state level.7

At the federal level, the Federal Trade Commission (FTC) issued guidance in 2020 emphasising the transparent, explainable and fair use of AI tools (Smith, 2020). The FTC issued further guidance in April 2021 warning companies against biased, discriminatory, deceptive or unfair practices in AI algorithms. The National Security Commission on Artificial Intelligence’s March 2021 Final Report urged the adoption of a cohesive and comprehensive federal AI strategy (NSCAI, 2021).

The US position remains quite distant from the EU proposal, even where it touches directly on some of the key concerns that the EU proposal is meant to resolve. The US is not the only country where the debate about regulating AI is framed as competing with business development, and where ethical concerns seem to come second to empowering technology companies.

Citizens and AI

The concept of trustworthiness in AI is all about humans putting their confidence in machine-based systems. Trust is hard won and it is vitally important for those looking to put AI into real-world use that they pay close attention to these issues of trustworthiness and responsibility. As AI becomes an ever-increasing part of our daily lives, trustworthiness will make the difference between AI systems that are relied upon and those that are avoided due to legitimate concerns or individual fears.

When the political debate touches upon ‘ethical barriers’, it is widely acknowledged that practical guidelines are needed, because AI ethics presents major issues for society. More importantly, people need to understand where they must take control of their data, and where their data is needed. At the same time, regulators are developing simple frameworks and audit arrangements that can be easily applied and explained to the public. New careers are likely to emerge, analogous to actuaries, accountants and lawyers, helping companies audit algorithms for bias, fairness, accountability and ethics.

The bigger risk perceived today is that systems now use deep learning, meaning the capacity of a machine to extract progressively higher-level features from raw data, whereas in the past, AI systems tended to be modelled on human decision-making. Because of deep learning, today’s systems are far more opaque and less controlled.

The availability of data to improve AI algorithms also affects the use of data for the public good. In fact, data about the public is often held by a few very large private US companies, not the public sector. Apple, Amazon, Facebook and Google are in a much better position than public sector organisations to advance because they enjoy monopolistic access to such data. To create a more even playing field, that monopolistic control of platforms would have to be dismantled.

At the same time, concerns emerge about the centralised control of data by governments, as discussed earlier with the Chinese model, and the debate focuses on who should be entrusted to hold, manage and profit from the data. Societies face major challenges in protecting privacy and individuals’ identities, profiles, interests and independence. The European proposal on regulation aims mainly to create a responsible industry that will act ethically and safeguard customers’ interests.

AI in regulated industries: do more rules mean more costs for the industry?

Regulations will foster some major changes in the high-tech industry. Some would say that such a regulatory framework could entail a breakup of America’s largest tech firms by prohibiting them from operating and competing on digital platforms at the same time, and that this shift could impose tremendous costs on consumers and companies alike.

It is fair to say that for several years now, there has been a growing pushback against the perceived ‘unfairness’ of the tech industry. The main arguments concern large tech platforms favouring their own products at the expense of entrepreneurs who use their platforms; incumbents acquiring start-ups to squash competition; and tech companies that spy on their users and use their data to sell them things they don’t need (House Committee on the Judiciary, 2019). On the other hand, critics say there is a chance that the reforms proposed by the House Judiciary Committee Antitrust Report, for example, would merely exacerbate the status quo (Miller and Mitchell, 2021). More importantly, it must be recognised that over the last decade the tech sector has been the crown jewel of the US economy and a factor pushing technology development across the world. While firms like Amazon, Google, Facebook and Apple have grown at an amazing pace, countless other companies have flourished in their wake.

Google’s and Apple’s app stores have given rise to a booming mobile software industry. Platforms like YouTube and Instagram have created new venues for advertisers and ushered in a new generation of entrepreneurs including influencers, podcasters and marketing experts. Social media platforms like Facebook and Twitter have disintermediated the production of news media, allowing ever more people to share their ideas with the rest of the world (mostly for better, and sometimes for worse). Amazon has opened up new markets for thousands of retailers, some of which are now going public.

The recurrent question is whether it is possible to regulate this thriving industry without stifling its unparalleled dynamism. Acquisition by a ‘big tech’ firm is one way for start-ups to rapidly scale and reach a wider audience, while allowing early investors to make a quick exit. Self-preferencing can enable platforms to tailor-make their services to the needs and desires of users. In the online retail space, copying rival products via house brands provides consumers with competitively priced goods and helps new distributors enter the market.

Sceptics may think that all these practices would be heavily scrutinised or banned outright by new regulations. Beyond its direct impact on the quality of online goods and services, such a shift would threaten the climate of permissionless innovation that has arguably been key to Silicon Valley’s success. Nothing in the EU proposal, however, really addresses the quality of online goods and services; it focuses instead on the risks of AI systems. The distinction between high- and low-risk AI suggests that a tight market framework is not in prospect. Some critics say: ‘It leaves Big Tech virtually unscathed. It lacks a focus on those affected by AI systems, apparently missing any general requirement to inform people who are subjected to algorithmic assessments. Little attention is paid to algorithmic fairness in the text of the regulation as opposed to its accompanying recitals’ (MacCarthy and Propp, 2021).

AI has applications in many products, some of which already fall under existing regulations (for example, antitrust or transport regulations). Products like cars and aircraft are already subject to regulation designed to protect the public from harm and ensure fairness in economic competition. In general, the approach to regulating AI-enabled products to protect public safety will be informed by an assessment of the aspects of risk that the addition of AI may increase or reduce. The EC proposal goes in this direction: it does not propose new sectoral regulations but rather points to existing ones as models of efficiency and best practice.

At the same time, as companies begin refining their practices to abide by new regulations, some EC panel experts8 have suggested that they must also consider tools beyond technology itself to eliminate bias. ‘Diverse teams help to represent a wider variation of experiences to minimize bias. Embrace team members of different ages, ethnicities, genders, educational disciplines, and cultural perspectives’, says Francesca Rossi, IBM’s AI ethics global leader (IBM, 2019). Indeed, such training and technology that is ‘ethical by design’, including all the checks and additional requirements for implementing trustworthy technology, may carry heavy costs. Experts claim that larger companies will of course be advantaged in adopting the new regulatory system, whereas smaller businesses may lack the infrastructure, tools and resources to comply. According to Coadec (the UK-based Coalition for a Digital Economy), which campaigns on behalf of the start-up sector: ‘Regulation and taxes designed to target the biggest companies will have unintended consequences, damage start-ups and lead to more unequal outcomes’ (Allied for Startups, 2018: 2).

The ethical perspective: the EU proposals as a beacon to set standards for other countries

Strengthened by the GDPR experience, the EC and the European Parliament are encouraging Member States to support their proposal. It would be the first step towards a new way of engaging with AI companies and products, one in line with European values and principles. As has been mentioned, the regulations here are not designed to limit high-tech companies’ capacities and potential, but rather to ensure that, especially in deep-learning AI, an ethical and transparent approach can be maintained. At its core, the EC proposal remains faithful to protecting human rights and European citizens. As such, the policy strives to maintain ethical values. It has also set the tone for an ecosystem of innovation that is meant to be sustainable and trustworthy.

Economically and politically, though, it raises the issue of how it fits into the global landscape and what impact it may have on other markets and non-European companies. More importantly, with AI (and digital technology in general) being the battleground of the future economy, the question is how policies and regulations may impact on European progress in this area. Too often China is mentioned as the competitive model which will not enforce ethical standards or allow them to determine policy directions. With a poor record in human rights, China’s policymakers will be less concerned about citizens’ protection.

The US and Europe are forging closer alliances and together represent the most substantial share of the global AI market and of technology development. Political and economic tensions will affect this sector as they do any other, but regulation could create a homogeneous playing field that would also support technology adoption and make trustworthy technology more attractive for citizens to adopt. At the same time, risk assessments and cost analyses have been undertaken to gauge the impact of such a proposal on EU companies and to ensure that what some entrepreneurs see as a risky strategy could eventually yield a comparative advantage in the global market. Too many questions are still open and it is hard to make a real assessment as yet.

Conclusion

The proposal is published and will go through a long process of scrutiny by policymakers in the Member States and in the European Parliament, which will need to adopt it as the European approach to AI under the ordinary legislative procedure. Only once it is adopted will the regulations be directly applicable across the EU. At the same time, Member States are also working on a coordination plan for the implementation of the actions stated in the proposal. No doubt the road to implementation will be long and will require further clarification of the details as the industry continues to grow and prosper.

Notes

1

In particular, they include:

  • The Recovery and Resilience Facility: 20 per cent of its funds must be spent on the digital transition of Member States, including on digital skills.

  • The Digital Europe Programme: promoting digital skills is a core element of this new funding, which has a budget of around €200 million for 2021 and 2022.

  • The European Social Fund Plus: a fund to support Member States in reforming national education and training systems to support key skills.

  • The European Global Adjustment Fund: supports training in digital skills to help laid-off workers find another job or set up their own business.

  • Horizon Europe: finances grants for master’s, PhD and postgraduate research activities in all fields, including digital, through the Marie Skłodowska-Curie Actions as well as the European Institute of Innovation & Technology.

2

https://prores-project.eu, a European Commission–funded project aiming to PROmote ethics and integrity in non-medical RESearch.

5

Catelijne Muller is a member of the EU High-Level Expert Group on AI. See: https://digital-strategy.ec.europa.eu/en/policies/expert-group-ai

6

The Belt and Road Initiative is a global infrastructure development strategy adopted by the Chinese government in 2013 to invest in nearly 70 countries and international organisations. See Belt and Road Initiative research reports from the World Bank, https://www.worldbank.org/en/topic/regional-integration/brief/belt-and-road-initiative

7

Legislation related to AI, 16 April 2021.

References