The EU’s AI Regulation Proposal – How to Fix It?
Now that quite a few people in tech policy have worked their way through more than a hundred pages of regulatory prose, perhaps aided by this excellent design exercise, it is time to start the discussion on how to make the proposal feasible for European industries. Here is my attempt, based on various exchanges of ideas with colleagues and companies.
Definition of AI – through EDM classics
It was Haddaway who asked what is love in the '90s. When reading article 3 and Annex I of the proposed AI regulation, one might ask instead: what is not AI? Here another EDM classic fits: there's no limit (2 Unlimited). The proposed definition seems to have no floor as to how simple a system can be and still fall within the scope of the regulation. Read together, the two provisions cover basically all IT systems.
Fix the Structure!
The proposed legislation has two distinct pillars. The first is an extension of the EU's new legislative framework for products, ensuring their safety (machines, toys, lifts, various kinds of equipment and so on); most means of transportation are already excluded in article 2. The second pillar is an ethical one, consisting of the ban on certain AI uses in article 5 and the high-risk use cases listed in Annex III – use cases touching on society, privacy, work and the legal system, all likely to be relevant for human rights.
From an industry standpoint, there is a clear problem: unclear and unpredictable legislation. Requirements for product manufacturers – a large share of European industrial employers – would come from sector-based regulation AND the new AI framework. On top of that, the proposal vests the Commission with wide powers to tweak the requirements further. This will lead to a maze of regulatory requirements from different domains, likely pushing the cost of compliance to a level that proves too high, especially for SMEs.
“Existing product legislation should be excluded from this proposal.”
The fix is found in article 2. Existing product legislation should be excluded from this proposal, and any need to regulate the use of AI in those sectors could be addressed in the sector-specific legislation, creating a clearer legal environment for industry. Such an approach already exists for most modes of transport, which article 2 excludes from the scope.
The proposed legislation would then focus solely on the as-yet-unregulated uses of AI in article 5 and the high-risk, societally relevant use cases listed in Annex III. These use cases would form a more homogeneous domain to regulate. The new horizontal regulation would ban certain harmful uses not regarded as fit for democratic societies and set requirements for the Annex III use cases that closely touch human rights and the functioning of society.
Focus on the Process and Predictability
The proposal has matured quite a bit from earlier versions, but some requirements remain overstretched, such as the demand in article 10 that training data be "free of errors and complete". Anyone who works with data knows that incomplete datasets are a continuous struggle.
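To make that concrete, below is a minimal sketch of the kind of data-quality audit practitioners run every day (assuming Python with pandas; the file name and field choices are illustrative, not drawn from the proposal). On real datasets, such a report almost never comes back clean.

```python
import pandas as pd

def audit_training_data(df: pd.DataFrame) -> dict:
    """Summarise common quality problems in a training dataset."""
    return {
        "rows": len(df),
        "missing_cells": int(df.isna().sum().sum()),
        "missing_share": round(float(df.isna().mean().mean()), 4),
        "duplicate_rows": int(df.duplicated().sum()),
        "constant_columns": [c for c in df.columns if df[c].nunique(dropna=True) <= 1],
    }

# Hypothetical example: in practice, missing_share is almost never 0.0
df = pd.read_csv("training_data.csv")
print(audit_training_data(df))
```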
“Instead of idealistic requirements, the regulation should focus on due process.”
Instead of idealistic requirements, the regulation should focus on due process: having relevant evaluation processes and controls in place to identify and mitigate the risks of a specific use case. Documentation and logs form the basis of accountability for the proper development and operation of AI systems; on this requirement, the Commission is on the right track.
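What documentation and logging could mean in practice is well-understood engineering. Here is a minimal sketch of a structured audit log for AI decisions, using only the Python standard library; all field names and values are illustrative assumptions, not requirements from the proposal.

```python
import json
import logging
import time
import uuid

# One append-only JSON record per decision, for later review and audit.
logging.basicConfig(filename="ai_audit.log", level=logging.INFO, format="%(message)s")

def log_decision(model_version: str, inputs: dict, output, operator: str) -> None:
    """Record a single AI decision with enough context to reconstruct it."""
    record = {
        "id": str(uuid.uuid4()),         # unique reference for later review
        "timestamp": time.time(),
        "model_version": model_version,  # which model made the call
        "inputs": inputs,                # what the system saw
        "output": output,                # what it decided
        "operator": operator,            # who ran the system
    }
    logging.info(json.dumps(record))

# Illustrative call with made-up values
log_decision("credit-scoring-v1.3", {"income": 42000, "tenure": 5}, "approve", "example-bank")
```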
In many instances the proposal gives the Commission quite wide powers to issue delegated acts. The Commission could adjust the scope of the regulation, change the list that defines high-risk use cases and alter the standards or specifications that systems must adhere to. This introduces a great deal of unpredictability into the legislation. When such fundamental elements are flexible, it is fair to ask whether low-risk AI will remain a relevant category at all.
Sandboxes – a testing ground for new common specifications?
The Commission's intention is to create common specifications for high-risk use cases; at the moment, there are none. This certainly needs to be done if conformity assessments are to be carried out. However, industry's experiences with the Commission's involvement in standardisation have not been entirely positive. These new specifications would be solely for the Commission to define, subject only to consulting expert groups under relevant EU law. The development of specifications should be strongly industry-driven, and the proposal contains a mechanism for fixing this.
“Common specifications should be developed in regulatory sandboxes.”
Common specifications should be developed in regulatory sandboxes. As drafted, the sandboxes look like a compulsory add-on, with no tasks that would make them truly relevant. Making them a development laboratory for new high-risk AI specifications would bring a completely new approach to technological regulation. Regulatory sandboxes could bring together developers, users, academia and regulators to share information and experiences – a foundation for European excellence.