AI at work: Why a new EU Directive would do more harm than good 

There is no regulatory gap for workplace AI. According to the Technology Industries of Finland, additional EU legislation would only add complexity and uncertainty.

Is the use of AI in workplaces currently unregulated? 

– No. There is no “AI Wild West” in European workplace contexts. To begin with, the AI Act prohibits employers from using abusive and intrusive AI-based technologies. It also obliges both providers and deployers to oversee high-risk AI systems – a category that covers any HR-related AI systems used or developed by employers that could potentially harm employees.  

– The General Data Protection Regulation (GDPR) restricts the processing of personal data. Under the GDPR, data may only be collected for specified, explicit and legitimate purposes, and it must be adequate, relevant and limited to what is necessary in relation to those purposes. Case law has confirmed that collecting excessively detailed data – for example to build AI technologies that meticulously monitor and evaluate workers – is not allowed.  

– There is also a broad range of EU labour laws on health and safety, equality, and information and consultation that likewise cover AI technologies, because the principles enshrined in them are technology-neutral.  

Won’t a Directive ensure a level playing field on the regulation of AI in workplaces across the EU? 

– Since the current restrictions on the use of AI in workplaces are rooted in EU law, introducing new rules – especially by means of a Directive, which Member States can expand upon and implement differently – will at best merely duplicate existing safeguards. At worst, it will create conflicting legal requirements that make it harder for businesses to apply and respect the law because of the ensuing uncertainty and complexity. In other words, a dedicated Directive on AI in the Workplace would create an uneven playing field within the EU when it comes to AI-based technologies in the world of work.  

– This would hit SMEs especially hard, as they are already struggling to comply with existing EU law on data and artificial intelligence. 

Won’t a Directive only affect a narrow range of workplace technologies and not industrial AI? 

– The productivity gains that artificial intelligence is projected to deliver are not limited to industrial applications – they crucially also arise in the workplace. By overregulating AI in workplace contexts, the EU would therefore risk undermining a key pillar of its industrial recovery.  

– There is also a risk that “industrial AI” would be affected by a Directive on AI in the Workplace, not least because of the very broad definition of “personal data” under the GDPR on which such a Directive would build. This is not a hypothetical risk: the very same situation arose during the preparation of the Data Act, which was intended to cover industrial data only. Furthermore, introducing extensive consultation requirements for companies could lead to the use of certain AI technologies being vetoed on the pretext of their impact on workers’ rights.     

Doesn’t it make sense to extend the protections offered by the Platform Work Directive to all workers? 

– The Platform Work Directive’s (PWD) main purpose was to clarify when someone is self-employed and when they are an employee. The Directive’s stipulations on algorithmic management merely duplicate what is already enshrined in the GDPR – while also making consistent enforcement across the EU more difficult, because as a Directive it gives Member States many different options for implementation.  

– There are also major problems with the Directive’s approach to “algorithmic management”: first, the term is never properly defined in the PWD. Second, it appears to refer to practices involving “automated decision-making” and “automated monitoring” – neither of which necessarily involves artificial intelligence. 

Aren’t employers exempt from the requirements of the AI Act? 

– No. As users (“deployers”) of AI technologies, employers must comply with the AI Act’s rules for high-risk and low-risk AI systems. In any event, employers may neither use nor develop technologies classified as posing an “unacceptable risk” to persons. Specifically, the AI Act categorically forbids the development and use of, among others, the following AI systems in workplace contexts: 

– AI systems that infer the emotions of persons in the workplace (except for medical or safety reasons); 

– AI systems deploying subliminal or manipulative techniques that distort behaviour and cause significant harm; 

– AI systems used for social scoring that leads to detrimental or unfavourable treatment; 

– biometric categorisation systems that infer sensitive attributes such as trade union membership, political opinions or sexual orientation. 

Shouldn’t employees have a right to know what data is collected about them and why?  

– The GDPR already obliges companies to disclose what personal data about employees and solo self-employed persons is collected, how it is processed, and for what purposes it is used – including when it is used for automated decision-making and monitoring. The GDPR also requires employers to explain to workers, in an easily understandable manner, what kind of data they are collecting about them.  

Shouldn’t employees have a right to know when an employer is using AI technologies in workplaces? 

– The AI Act already requires companies to inform employees and solo self-employed persons when they use high-risk AI systems in workplaces. In addition, since the GDPR requires employers to inform employees about any use of automated decision-making systems, including profiling, employees will also receive information about other AI systems deployed by the employer. Beyond that, the GDPR requires employers to consult employees or their representatives about intended processing of personal data – and it makes no exception for AI systems that process personal data. 

Shouldn’t employers be prohibited from collecting and using certain personal data? 

– The GDPR already states that employers may collect only “adequate”, “relevant” and “necessary” data. With very few and specific exceptions, the processing of sensitive personal data (e.g., racial or ethnic origin, political opinions, religious or philosophical beliefs, sexual orientation, or trade union membership) is forbidden under the GDPR. Likewise, the collection of extremely detailed (“granular”) data is prohibited as a result, and there are multiple precedents across the EU of companies receiving penalties for excessive automated monitoring: the French Data Protection Authority (CNIL), for instance, fined Amazon France Logistique €32 million in December 2023 for operating an employee monitoring system that infringed the GDPR.  

Isn’t there a risk that AI will take decisions on behalf of humans? 

– The AI Act obliges employers to oversee the high-risk AI applications they use. This includes systems intended to be used for:  

– recruitment and selection, such as placing targeted job advertisements, analysing and filtering applications, and evaluating candidates; 

– making decisions affecting work-related relationships, including promotion and termination, allocating tasks based on individual behaviour or personal traits, and monitoring and evaluating workers’ performance and behaviour. 

– Among other obligations, providers and deployers of such AI systems must arrange human oversight for them. Furthermore, the GDPR gives workers the right not to be subject to decisions based solely on automated processing, including profiling, in the workplace. In other words, existing EU law already prohibits making employment-related decisions without human control and against the will of employees. 

Don’t we need safeguards to ensure that AI technologies do not discriminate against workers or pose unacceptable risks to their health and safety? 

– The AI Act already requires developers of AI systems to install safeguards against discrimination. That obligation is continuous throughout the whole lifecycle of the technology, meaning compliance must continue to be verified once the system is on the market. Providers of high-risk AI systems must also ensure that the data used to train the AI systems does not lead to unlawful discrimination. Mandatory conformity assessments are also required, meaning that high-risk AI technologies are tested before deployment.  

– On a broader level, the EU’s comprehensive acquis on health and safety (the European Framework Directive on Health and Safety) and on anti-discrimination (incl. the Anti-Racism Directive, the Equality Directive and the Equal Treatment Directive) also applies to AI technologies, since the principles enshrined in them are technology-neutral. Furthermore, the Framework Directive on Health and Safety enables workers’ representatives to call on employers to take measures to mitigate health and safety risks they perceive to be inherent in the use of particular AI technologies at work.  
