Reframing AI Governance with LMIC-Led Leadership

The following piece is part of the U.S.-India AI Fellowship Program’s short-form series.

By: Resham Sethi

In an era where Artificial Intelligence (AI) is reshaping every facet of society, establishing ethical governance frameworks is an imperative. According to the European Union (EU), AI comprises systems that display intelligent behavior by analyzing their environment and autonomously taking actions to achieve specific goals. Complementing this definition, UNESCO’s Recommendation on the Ethics of Artificial Intelligence describes AI ethics as a dynamic, holistic framework — rooted in human dignity, well-being, and the prevention of harm — that guides societies in navigating the multifaceted impacts of AI on human lives, communities, and ecosystems.

Global discourse on AI ethics has coalesced around core principles such as transparency, justice, non-maleficence, accountability, and privacy. Meta-analyses, including a landmark study by Jobin et al. (2019) and a more expansive review by Correa et al. (2023), reveal a robust commitment to these values. Yet a closer look exposes significant disparities in the global conversation: Western Europe and North America dominate the narrative, while voices from the Global South are notably sparse. For example, despite India's rapid strides in AI research, a burgeoning talent pool, strong AI hiring trends, and significant contributions to open-source projects, the country accounts for only 0.5% of AI ethics publications.

This disconnect poses a critical challenge: as AI drives healthcare innovation, ethical frameworks must be contextually adapted to diverse socio-economic and cultural realities. Without such adaptation, AI may deepen existing disparities rather than foster equitable progress.

Global AI ethics standards and the representation gap
The rapid expansion of AI has sparked a global conversation about the ethical principles that should guide its development and application, particularly in sensitive sectors like healthcare. Numerous private, public, and multilateral organizations have released AI ethics guidelines, and while a growing convergence has emerged around foundational principles, stark disparities remain in whose perspectives are shaping these norms.

A meta-analysis by Jobin et al. (2019), which reviewed 84 ethics documents, identified five core tenets across frameworks: transparency (86%), justice (81%), non-maleficence (71%), accountability (71%), and privacy (56%). However, a more recent and expansive meta-review by Correa et al. (2023) covering over 200 global AI ethics documents revealed a significant concentration of authorship: 66% of all publications originated from Western Europe and North America, while less than 5% came from regions such as Africa, South Asia, and Latin America.

Moreover, 77% of all documents analyzed were produced by just 13 countries, highlighting a highly centralized global discourse. Intergovernmental organizations such as the EU (4.5%) and the United Nations (3%) contributed only marginally. These findings underscore the systemic underrepresentation of voices from low- and middle-income countries (LMICs) in shaping the ethical governance of AI, raising concerns about the legitimacy, inclusivity, and contextual relevance of current global standards.

This imbalance in authorship and thought leadership has led to a global AI governance architecture that often fails to account for the public health priorities, infrastructure constraints, and health data system concerns specific to LMICs. As a result, LMICs frequently face a difficult choice: adopt ethical frameworks that are poorly aligned with their local contexts, or attempt to develop fragmented and under-resourced governance systems without adequate global recognition or support.

Building sustainable AI for health ecosystems
To move toward sustainable and equitable AI for health ecosystems, international policy frameworks must be reshaped with LMICs — not for them. This requires centering LMIC voices, leadership, and lived experiences in both the design and implementation of global AI governance. To truly foster meaningful partnerships between high-income countries (HICs) and LMICs, these frameworks must evolve from top-down, one-size-fits-all approaches to ones that are inclusive, adaptive, and grounded in contextual realities.

Key strategies for strengthening frameworks through LMIC collaboration:

  • Co-creation and representation: LMIC experts, institutions, and civil society actors must be meaningfully included in the creation of global ethics standards, taskforces, and AI policy forums. Representation must go beyond tokenism to influence substantive agenda-setting.

  • Support for implementation-oriented research: Funders and global bodies must invest in ethics research that is grounded in LMIC realities, testing and adapting AI governance models in real-world, resource-constrained healthcare settings.

  • Decentralized and contextual governance: International frameworks should promote regionally adaptable governance structures that combine national regulation, sub-national flexibility, and strong civil society oversight to ensure AI is safe, relevant, and just.

  • South-South collaboration: Peer learning and co-development between LMICs should be actively facilitated. This horizontal collaboration can enable the development of governance models that are not only context-aware but also scalable across similarly situated health systems.

  • Strengthening community and patient voice: Ethical AI governance must include grassroots stakeholders — community health workers, patient groups, and local watchdogs — to ensure transparency, accountability, and rights-based implementation.

By realigning international frameworks to reflect the needs, knowledge, and leadership of LMICs, the global AI health ecosystem can move toward a more inclusive and collaborative model, where innovation is not only driven by equity but governed by it. LMICs have the insight, experience, and urgency to lead this shift — what they need is enabling global structures that recognize and invest in their capacity. This is not just a matter of fairness; it is essential for creating governance systems that are globally legitimate, locally actionable, and resilient in the face of rapid technological change. 

Resham Sethi is part of the U.S.-India AI Fellowship Program at ORF America. She is currently a Senior Program Officer for Digital Health (India & South Asia Hub) at PATH.