Presence Detected Within System Cycle
In EU technology regulation, the role of humans has taken centre stage. This is evident across areas such as robotics, AI regulation, content moderation, automated decision-making, the governance of drones, and financial markets.
The European Artificial Intelligence Act (AIA), the General Data Protection Regulation (GDPR), and the Digital Services Act (DSA) are currently under scrutiny in the context of human-AI interaction. As Aline Blankertz proposes in her study, a shared vocabulary and conceptual basis for the human involvement provisions in these instruments still needs to be established.
Human involvement serves as a primary safeguard against automation risks in EU regulations such as the GDPR, the DSA, and the AIA. Humans are called upon to oversee, intervene in, and reassess automated processes so as to create trust in machines under the law. Research shows, however, that human oversight often falls short, with humans reduced to mere rubber-stampers.
The DSA encourages internet intermediaries to take voluntary measures against illegal content, which may include automation, whereas the GDPR and the AI Act explicitly exclude certain kinds of automation. On online platforms, human elements are still introduced into content moderation, but the effectiveness of post-hoc review remains unclear.
The assumption of human exceptionalism in EU tech regulation needs to be confronted critically, as it may create blind spots and entrench flawed practices. Consolidating human involvement provisions in the AI Act, DSA, and GDPR could provide clearer definitions of human oversight, intervention, and review.
Decision quality and error rates could serve as benchmarks for human involvement in automated decision-making processes. Basic research is needed to address the normative questions of whether, when, where, and how humans should remain involved in such processes.
The EU has signalled a shift in its approach to technology regulation, focusing on simplification through its various Omnibus packages. The role of humans in EU tech law has become that of a legal backstop when automated decision-making fails, yet it remains unclear what gives human intervention its legitimating effect.
The 'human in the loop' concept refers to a system or process in which human involvement is important but not exclusive. What still needs to be understood is why human involvement is necessary and where human agency is located within automated decision-making processes. The European Court of Justice has emphasised the need for safeguards in automated processes.
In conclusion, the role of humans in EU tech law is a complex and evolving issue. As technology advances, it is crucial that human oversight and intervention remain effective and meaningful in automated decision-making processes.