
AI accountability: Geneva outlines reliability guidelines in the digital age

International organizations establish AI credibility standards in Geneva, aiming to combat misinformation, ensure content accountability, and foster digital reliability as the world increasingly relies on artificial intelligence.

AI accountability defined in Geneva as trust-building measure

In a bid to combat the growing challenge of synthetic media and maintain trust in the digital ecosystem, the AI and Multimedia Authenticity Standards Collaboration (AMAS) has been launched. Led by the World Standards Cooperation, a triad consisting of the International Electrotechnical Commission (IEC), the International Organization for Standardization (ISO), and the International Telecommunication Union (ITU), AMAS aims to develop robust technical standards for digital media authenticity [1][2][3].

The initiative is a call for shared responsibility, recognizing that no single entity can address the challenge of AI-generated misinformation alone. A diverse coalition of stakeholders, including technology companies, research institutions, and civil society organizations, is involved in AMAS. Notable participants include C2PA, JPEG Group, EPFL, Shutterstock, Fraunhofer Heinrich Hertz Institute, CAICT, DataTrails, Deep Media, Witness, and others [2][3].

Silvio Dulinsky, Deputy Secretary-General of ISO, has emphasized the need for practical, scalable solutions for preventing and responding to the challenges posed by synthetic media. Gilles Thonet, Deputy Secretary-General of the IEC, stated that international standards provide guardrails for the responsible, safe, and trustworthy development of AI [4].

AMAS's mission is to develop interoperable, international standards that can be adopted universally, providing a shared language for digital authenticity. The key objectives include establishing a framework for digital media authenticity, combating misinformation, promoting transparency and accountability, and supporting regulatory frameworks [1][2][3].

The first paper from AMAS provides a comprehensive overview of the global landscape of standards and specifications related to digital media authenticity. It maps existing standards, identifies gaps, and proposes frameworks for future standardization [1][4]. The second paper from AMAS is a policy guidance document aimed at regulators and lawmakers, detailing how international standards can serve as the foundation for governance frameworks in the age of generative AI [2][3].

The Geneva launch of AMAS marks a turning point in how the international community is preparing to address the darker dimensions of artificial intelligence. The IEC, headquartered in Geneva, develops international standards for electrical and electronic technologies. Geneva, with its large United Nations presence, is a fitting location for such an initiative [5].

As AI-generated media floods our screens, the tools to detect and manage it must evolve just as quickly. AMAS hopes to prevent a future in which society can no longer trust what it sees, hears, or reads online. The International Electrotechnical Commission (IEC) stresses the need to develop tools that are technically robust yet user-friendly, so that authenticity signals can be detected by both machines and humans [6].

The International AI Standards Summit will take place in Seoul on 2-3 December 2025, aiming to accelerate progress on global AI standards. For those concerned with the future of digital integrity, the message is that the time to build that future is now [7]. AMAS positions international standards as a balancing mechanism, enabling both creative freedom and ethical oversight, without pushing for a one-size-fits-all global law [3].

The coalition of technology companies and research institutions within AMAS also sees artificial intelligence itself as part of the solution: the interoperable, international standards it develops are intended to harness AI to combat misinformation and promote transparency.
