
Europe's (and the world's) battle for the soul of AI

The title of the Wired article is The Fight to Define When AI Is 'High Risk', written by Khari Johnson and published on 1 September. It is certainly worth reading, if you can access it. It concerns the European Commission's Artificial Intelligence Act, which was presented in April this year. The AI Act, like the General Data Protection Regulation, will be a landmark: it is the first proposed regulation of AI of its kind. It is comprehensive, represents an approach to the regulation of AI very different from any seen elsewhere, and would, in effect, apply not just in the 27 member states of the EU but also outside the territory of the EU. So, like the GDPR, the AI Act would have far-reaching consequences in every sense of that adjective.

The AI Act classifies and would regulate AI systems by their risk to humankind. The corollary is, of course, that the higher the risk an AI system poses, the more strictly it would be regulated. That would include an outright prohibition on AI deemed highest-risk, for example certain types of real-time facial recognition.

The Commission consulted on the proposed AI Act. The consultation period ended on 8 August. To be enacted as EU law, the AI Act will have to be adopted by the European Parliament and the European Council. There will be scope for amendment.

And this is why Johnson's Wired article is both timely and important: lobbying to dilute or strengthen the AI Act is under way. The different interests and positions in that lobbying remind us just what is at stake in the enablement and regulation of AI. Those with interests and positions will come as no surprise: they include a range of corporate, civil society, religious, professional and other interest groups.

As Johnson says, at the heart of the positions taken in the lobbying is the debate about which kinds of AI should properly be treated as high-risk. Currently, the AI Act defines high-risk AI systems as those that could harm human health or safety, or infringe fundamental human rights - meaning the rights afforded to citizens of the EU, such as the rights to life, to live free from discrimination, and to a fair trial.

Johnson's article describes in some detail the differing lobbying positions, and it is worth reading for that alone. But, more importantly, it reminds us what may be at stake in regulating AI as the Commission has proposed.

Some of the fundamental questions are these. Will the AI Act as is, or as it will be amended:

  • Genuinely build user trust in AI and the AI ecosystem, including public trust in how AI is regulated in practice?
  • Encourage the growth of "good" AI and the markets for it?
  • Lead to the responsible deployment and reliance on "good" AI, and safe AI and data science practices?
  • Facilitate the EU as a globally competitive market for AI?
  • Actually stifle the development of "good" or potentially "good" AI applications through overly complex or restrictive regulation, or high compliance costs? (Johnson's article outlines well some of the officially and unofficially forecast costs of compliance.)
  • Result in unintended, but predictable, consequences, for example increased costs of services like insurance or financial credit?
  • Go far enough in classifying high-risk AI?
  • Provide adequately for general-purpose AI systems, that is, those able to perform a range of tasks (noting that some general-purpose AI systems are open source, while others are proprietary)?
  • Address or redress the balance between, on the one hand, powerful deployers of AI, such as governments, law enforcement agencies, big tech and big business, and, on the other, EU citizens, especially those less powerful and vocal in society?
  • Properly address and allocate responsibility for AI systems among "users", "deployers", "providers", "distributors" and "importers" of those systems (a request attributed by Johnson to Google)?
  • Provide for appropriate and dynamic long-term regulation?
  • Enable adequate supervision and audit of AI systems by state authorities and other, responsible and disinterested, parties?

We await the answers and, of course, the outcome, though many will be some way off. But what is being played out in the EU right now is nothing short of a battle for the soul of AI.

"These systems are a threat to our individual freedoms, including the right to education, the right to a fair trial, the right to privacy, and the right to freedom of speech. They often present a situation of severe power imbalance and have huge implications on people's fundamental rights. It is unacceptable to delegate their risk assessment to profit-oriented businesses who focus on obeying the rules when they have to and not on protecting fundamental rights."

The Civil Liberties Union for Europe


commercial, ai, artificial intelligence, technology, eu