

Washington's Proposed "Antidiscrimination by Artificial Intelligence Act" May Favor Human Decision-Makers Over AI

=====================================================================================================

At a hearing this week, the D.C. Council discussed the Stop Discrimination by Algorithms Act, a bill introduced last December by D.C. Attorney General Karl Racine that aims to prevent discrimination in algorithmic decision-making. If passed, the Act would impose restrictions on the use of algorithms by both for-profit and nonprofit organizations.

Key Issues

Wide Scope Affecting Many Entities

The legislation applies to a broad range of businesses and organizations in D.C., from large hospitals, universities, and employers to small retail shops and sole proprietors. Even small- and medium-sized businesses would therefore face major new compliance requirements and penalties.

Prohibition on Algorithmic Decisions Based on Protected Traits

The Act's first provision would forbid algorithmic decisions based on protected personal characteristics—such as race, color, religion, national origin, sex, gender identity, sexual orientation, familial status, source of income, or disability—when those decisions deny important life opportunities. This broad prohibition could limit legitimate algorithmic uses and create compliance complexity.

Likelihood of Significant Compliance Burden and Penalties

The new requirements could impose heavy operational and financial burdens on smaller businesses that currently use or rely on algorithmic decision-making, potentially threatening their viability or forcing discontinuation of useful AI tools.

Potential Challenges in Implementation and Enforcement

Because algorithms rely on complex data and often operate as black boxes, detecting and proving discriminatory impact can be difficult. Broad terms in the legislation, such as "important life opportunities," leave ambiguity about which decisions are covered and how companies must comply.

Lack of Balance Between Innovation and Protection

Critics worry that the Act could stifle AI innovation and the careful use of beneficial algorithmic tools, because it outright bans algorithmic decision-making linked to protected categories rather than focusing on transparency, testing, and mitigation of bias.

Definition of Important Life Opportunities

"Important life opportunities" are defined as access to credit, education, employment, housing, insurance, or a place of public accommodation.

Requirements for Documentation and Disclosure

The second provision requires organizations to provide detailed documentation on how they use personal information in AI-enabled algorithmic decision-making. The documentation must be provided in English, Spanish, Chinese, Vietnamese, Korean, and Amharic, and must explain the practices completely in no more than one printed page.

Annual Audits of Algorithmic Decision-Making

The third provision of the Stop Discrimination by Algorithms Act requires annual third-party audits of algorithmic decision-making. The audits are to look for disparate-impact risks and maintain an audit trail for at least five years.
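An audit of the kind described here typically compares outcome rates across demographic groups. The sketch below shows one common test, the four-fifths adverse-impact ratio drawn from U.S. employment guidance; the group labels, sample data, and 0.8 threshold are illustrative assumptions, not terms taken from the Act itself.

```python
# A minimal sketch of a disparate-impact check a third-party audit might
# run. The data and threshold below are hypothetical.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, approved) pairs -> approval rate per group."""
    totals = defaultdict(int)
    approved = defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        if ok:
            approved[group] += 1
    return {g: approved[g] / totals[g] for g in totals}

def adverse_impact_ratio(decisions):
    """Lowest group approval rate divided by the highest."""
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values())

# Hypothetical outcomes: group A approved 80 of 100, group B approved 50 of 100.
decisions = ([("A", True)] * 80 + [("A", False)] * 20
             + [("B", True)] * 50 + [("B", False)] * 50)
ratio = adverse_impact_ratio(decisions)
print(f"adverse impact ratio: {ratio:.2f}")
if ratio < 0.8:  # the four-fifths threshold
    print("flag for disparate-impact review")
```

In practice an audit would also test statistical significance and examine the model's inputs, but a selection-rate comparison like this is the usual starting point for the disparate-impact risks the provision targets.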

Civil Action and Enforcement Mechanisms

The fourth provision establishes enforcement mechanisms, including empowering the D.C. Attorney General to investigate potential violations and seek fines. It also creates a private right of action, allowing individuals to bring civil suits against organizations that violate the Act.

Concerns about the Act

While well-intentioned, the legislation is criticized as misguided and potentially harmful because it could discourage the use of AI and raise costs for consumers. The concerns mainly revolve around its broad, strict restrictions on the use of algorithms by both for-profit and nonprofit organizations.


  1. The Stop Discrimination by Algorithms Act, currently under discussion by the D.C. Council, aims to prevent discrimination through the use of algorithms, imposing restrictions on both for-profit and nonprofit organizations.
  2. The Act's wide scope includes a broad range of businesses and organizations, potentially imposing major new compliance requirements and penalties on small- and medium-sized businesses.
  3. The Act forbids decisions stemming from algorithms that use protected personal characteristics, which could limit legitimate algorithmic uses and create compliance complexity.
  4. The new requirements could impose heavy operational and financial burdens on smaller businesses, potentially threatening their viability or forcing discontinuation of useful AI tools.
  5. Because algorithms rely on complex data and often operate as black boxes, detecting and proving when discriminatory impact occurs can be difficult, making implementation and enforcement challenging.
  6. The Act defines "important life opportunities" as access to credit, education, employment, housing, insurance, or a place of public accommodation, and requires organizations to provide detailed documentation on how they use personal information in AI-enabled algorithmic decision-making.
