Data Protection Laws Must Be Enhanced for the Age of Artificial Intelligence
The UK's Data (Use and Access) Act 2025, which succeeded the lapsed Data Protection and Digital Information Bill, makes significant strides in updating data protection law to address automated decision-making (ADM) and the broader use of AI systems. The Act, now in force, aims to strike a balance between protecting personal data and enabling AI innovation.
Key Changes in the Act
The Act explicitly incorporates provisions dealing with ADM processes involving AI, reinforcing the accountability of organisations that use AI to make decisions about individuals. The UK's data regulator, the Information Commissioner's Office (ICO), is given enhanced powers and duties to oversee AI-related data use, ensuring compliance with data protection principles in AI contexts.
The Act also introduces measures to promote innovation while balancing data protection. For instance, it requires the Secretary of State to publish assessments of the economic impact of copyright and AI. However, it stops short of mandating that AI developers disclose all of their training datasets, a choice that favours innovation and commercial interests.
Balancing Protection and Innovation
Unlike the EU's AI Act, which prohibits AI systems that deploy manipulative techniques distorting human behaviour in ways that cause significant harm, the UK approach remains grounded primarily in existing data protection frameworks, adapting them for AI while delegating oversight to established bodies like the ICO.
The Act strengthens protections by maintaining strict rules under the UK GDPR for AI systems that make decisions with significant legal or similar effects on individuals, such as automated recruitment decisions. It ensures individuals retain a "right not to be subject to automated decisions" that are taken without appropriate safeguards.
Challenges Ahead
While the Act strengthens data protection in some respects, it also significantly expands the range of contexts in which automated decision-making can be used. Systematic bias, technical failures, or inadequate training can produce unfair outcomes when important decisions are delegated to automated systems.
For human review to be meaningful, it must be carried out by a person with the necessary competence, training, understanding of the data, and authority to alter the decision. In public attitudes surveys, more than half of respondents say they want clear procedures for appealing an AI-driven decision to a human.
The Post Office scandal, involving the flawed Horizon accounting software, is a prominent example of the dangers of embedding complex technological systems in consequential processes without adequate oversight. Against that backdrop, it is a serious concern that the Act does not give people the right to receive detailed contextual or personalised information about how a decision affecting them was reached.
Article 22 of the UK GDPR largely prohibited solely automated processing of personal data for decisions with "legal or similarly significant" effects unless there was a contract, consent, or authorisation by law. By relaxing that prohibition, the Act in effect shifts the onus of demonstrating the legality of automated decision-making from organisations onto the individuals affected by those decisions.
In conclusion, the Data (Use and Access) Act 2025 represents a significant step forward in updating legal frameworks to account for AI-driven automated decision-making and the potential harms it poses to individuals. However, it is crucial that the Government and Parliamentarians from all parties work with civil society organisations like the Ada Lovelace Institute to improve the Act and further strengthen data protection for the AI era.