Active Community Engagement and Artificial Intelligence
The UK Global Summit on AI Safety, held at the end of 2023, highlighted the crucial role of public involvement in shaping the future of Artificial Intelligence (AI). A rapid evidence review, 'What do the public think about AI?', published before the summit, underscored the need for meaningful public engagement in AI-related decisions.
The summit's agenda focused on 'frontier' AI models, systems with newer, more powerful, and potentially dangerous capabilities. Recognising the importance of public input, the event explored various methods for meaningful public involvement in policymaking.
One key approach is the deployment of inclusive, values-first frameworks. Initiatives like "AI for Citizens" emphasise principles such as diversity, accessibility, transparency, openness, ethical use, and collaboration. Such frameworks help ensure that AI deployment reflects local cultural contexts and fosters public trust. For instance, multilingual chatbots in South Africa and disaster prediction systems in Japan are examples of geographically tailored, inclusive AI applications.
Democratic processes and participation are also essential. Techniques such as Reinforcement Learning from Human Feedback (RLHF) and Value-Sensitive Design (VSD) integrate stakeholder preferences and ethical values directly into AI development, promoting inclusivity and broad societal input across regions.
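To make the RLHF idea concrete, the core preference-learning step can be sketched in a few lines: human raters compare pairs of model outputs, and a reward model is fitted so that preferred outputs score higher (the Bradley-Terry model commonly used for this). The comparison data, behaviour labels, and learning rate below are purely illustrative assumptions, not drawn from any real system; production RLHF uses a neural reward model and far larger datasets.

```python
import math

# Illustrative sketch only: each comparison records which of two model
# behaviours a human rater preferred. All labels here are made up.
comparisons = [
    ("cites sources", "makes claims without evidence"),
    ("cites sources", "uses offensive language"),
    ("polite refusal", "uses offensive language"),
]

# One scalar reward per behaviour, initialised to zero.
outputs = sorted({o for pair in comparisons for o in pair})
reward = {o: 0.0 for o in outputs}

def train(steps=2000, lr=0.1):
    for _ in range(steps):
        for preferred, rejected in comparisons:
            # Bradley-Terry probability that the preferred output wins.
            p = 1.0 / (1.0 + math.exp(reward[rejected] - reward[preferred]))
            # Gradient ascent on the log-likelihood of the human's choice:
            # push the preferred reward up and the rejected reward down.
            reward[preferred] += lr * (1.0 - p)
            reward[rejected] -= lr * (1.0 - p)

train()
# After fitting, human-preferred behaviours receive higher reward.
assert reward["cites sources"] > reward["makes claims without evidence"]
```

In a full RLHF pipeline this reward model would then be used to fine-tune the language model itself, so the point of the sketch is only the first step: human preferences become a numerical training signal.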
Public oversight through disclosure and rulemaking is another critical aspect. Citizens can engage by demanding transparency from public agencies about AI usage in policymaking processes. This empowers technical experts and the public to scrutinise, influence, and improve governance of AI technologies globally.
Culturally adaptive tools and open-source civic AI are also key. Deploying open-source infrastructures and technologies that enable local communities to adapt AI tools to their languages and norms enhances meaningful involvement. Collaborative governance models that decentralise control support equitable participation and safeguard digital sovereignty in diverse regions.
Research shows that people expect their diverse views to be taken as seriously as those of other stakeholders, including in legislative and oversight processes. The harms of AI can only be addressed by incorporating the perspectives of those most at risk of being harmed. In-depth public participation is particularly important for complex policy areas and topics that can threaten civil and human rights.
Professor Hélène Landemore, Nigel Shadbolt, and John Tasioulas argued that the event highlighted a fundamental question: how will genuine public deliberation and accountability be brought into AI-related decision-making processes? AI applications that mediate access to government services, or that rely on health and biometric data, demand serious and long-lasting engagement with the public.
Marietje Schaake, a member of the AI Advisory Board for the United Nations, suggested improving openness and participation by involving a random selection of citizens in any advisory body on AI. Examples of public engagement from different places and contexts should be considered, as AI affects and transcends countries and regions.
The public wants a say in decisions that affect their everyday lives, not merely to be consulted. There are important gaps in the research conducted with underrepresented groups, with those impacted by specific AI uses, and in research from countries outside Europe and North America. The People's Panel on AI, a randomly selected group of members of the public, produced a set of recommendations for policymakers and the private sector, emphasising the need for a system of AI governance in the UK that places citizens at the heart of decision-making.
An open letter, signed by international organisations and coordinated by Connected by Data, the Trades Union Congress, and Open Rights Group, called for a wider range of voices and perspectives in AI policy conversations, particularly from regions outside the Global North. Calls for public participation in AI decision-making continue to grow.
However, very few civil society organisations were invited to the Summit, despite the significant impact of AI technologies on people and society. Speakers at the Summit and parallel fringe events emphasised the need for the inclusion of diverse voices from the public. Professor Noortje Marres pointed out that there was no mention of mechanisms for involving citizens and affected groups in the governance of AI in the official Summit communiqué.
In conclusion, the UK Global Summit on AI Safety marked a significant step towards ensuring public involvement in AI policymaking. The methods discussed can help ensure that AI policymaking incorporates public perspectives meaningfully while respecting geographical diversity and promoting equity, transparency, and accountability worldwide.