The Literacy Trap
Why AI literacy programmes in the Global Majority are building competence without power
Between 2022 and 2024, job vacancies requiring generative AI skills surged ninefold globally. In lower-middle-income countries, AI-related job postings grew at 11 percent, against just 2 percent in high-income economies. The World Bank, which published these figures in its 2025 Digital Progress and Trends Report, presented them as evidence of opportunity. But read alongside another number in the same report, that lower-middle-income countries account for just 5 percent of global data centre capacity and low-income countries for less than 0.1 percent, they describe something more complicated: a workforce being prepared to participate in an infrastructure it does not own, govern, or, in most cases, meaningfully influence.
This is the context in which the development sector's AI literacy programmes are being built. And it is a context that most of those programmes are not yet designed to address.
The Mandate Nobody Asked For
By 2025, “AI-readiness” had become the new shorthand for progress, much like “digital transformation” before it, woven into grant requirements and organisational capacity assessments. NGOs and CSOs across Africa, South Asia, and Southeast Asia found themselves required to demonstrate AI fluency to remain competitive for funding, regardless of whether AI was relevant to their programmatic priorities. Technology integration came first; strategy struggled to catch up, and training often followed as a way to make sense of it after the fact.
This sequencing has a structural consequence: when adoption is the implicit goal, programmes optimise for tool competence, and “readiness” becomes a compliance requirement rather than a design imperative.
This is what cognitive scientist Abeba Birhane, writing in SCRIPTed, identified as algorithmic colonisation: the importation of AI tools built in the West, for Western contexts, with Western incentive structures, which simultaneously "impoverishes development of local products while leaving the continent dependent on Western software and infrastructure." This creates a circularity worth naming: the funding conditions that accelerated AI adoption in the sector are largely the same ones now funding programmes to manage its consequences.
The Structural Mismatch
The AI literacy curricula now circulating across low- and middle-income countries (LMICs) are largely inherited from the digital literacy frameworks of the 2010s. Those frameworks were designed for a world of tools: browsers, search engines, social media platforms. Their underlying logic was about individual access and agency: “Can a person find, evaluate, and act on information in a digital environment?”
AI systems operate on a different logic. They are not primarily tools that individuals use. They are systems that make, influence, or shape decisions, most often about the very individuals who encounter them. Consider a credit scoring model, an agricultural advisory system, or a content moderation tool. These are all decision-making infrastructures that determine outcomes. Often at population scale. Often without clear explanation. And often without mechanisms that allow those affected to question, contest, or even fully understand what has been decided on their behalf.
The cognitive and civic demands this creates are categorically different from what digital literacy was originally built to address.
Digital literacy asks: can you use this? AI literacy, done properly, asks something harder: can you interrogate this? Can you ask what it is optimising for, whose interests shaped its design, what it gets wrong and for whom, and what recourse exists when it fails? These are as much ethical questions as technical ones. They demand different forms of knowledge, different pedagogical approaches, and different ways of defining and measuring outcomes.
Treating AI literacy as digital literacy's sequel creates what might be called the Literacy Trap: the sector invests in competence while the communities it works with remain without civic agency over the systems now operating in their lives.
Failure Modes
The trap manifests consistently across contexts.
The competent user who cannot contest. Participants leave programmes with genuine, if narrow, capability: they can prompt effectively, flag hallucinations, apply an ethics checklist, and produce AI-assisted outputs. What they leave without is contestability: no framework for challenging an AI-driven decision that affects them, no vocabulary for advocacy, no understanding of what redress looks like in regulatory environments where the right to explanation does not exist in law. In much of sub-Saharan Africa, AI is deployed by governments with minimal transparency obligations. The OECD/EC’s draft AI Literacy Framework, for instance, equips learners to engage with AI in contexts where data protection law exists, regulatory bodies enforce it, and recourse is accessible. In most contexts where development sector programmes operate, those conditions are absent or nascent, which means the competencies the framework builds have nowhere to land.
Frameworks that travel poorly. The responsible AI principles embedded in most multilateral toolkits were designed for contexts with functioning data protection laws, independent judiciaries, and competitive AI markets. As Birhane's work on algorithmic colonisation makes clear, the importation of frameworks built for one regulatory ecology into another does not produce safety, but only the performance of safety.
The alternative is already being constructed, unevenly, in the regions the sector claims to serve.
The African Union Executive Council endorsed the Continental AI Strategy in July 2024, committing to an Africa-centric, development-focused approach that promotes ethical, responsible, and equitable AI practices, explicitly framed around African sovereignty rather than Northern adoption curves. The Strategy calls for African-owned datasets, local language model development, and governance mechanisms grounded in African values. It is frank about the infrastructure gap: data centres in Africa represented less than one percent of the global total in 2024. But its strategic orientation toward governance and contestability points in a different direction from most donor-funded AI literacy programmes currently running on the continent. For the AU, AI ethics "is not framed as a neutral technical discipline, but as a geopolitical arena where voice, history, and justice must be reclaimed."
Community-level alternatives are also emerging. Masakhane, a pan-African NLP research initiative, builds language models for low-resource African languages through community-driven data collection and governance, a model premised on the idea that communities should shape the AI that represents them. The CARE Principles for Indigenous Data Governance, developed by an international network of Indigenous data sovereignty researchers and published by the Global Indigenous Data Alliance, offer a different starting point entirely: one that treats collective benefit, authority to control, responsibility, and ethics as foundational to the governance structure.
These efforts share an orientation that most development sector AI literacy programmes lack: contextually rooted governance comes first, and adoption follows from it.
Toward Sovereign Literacy
Current programmes position the learner as a user: someone to be equipped with skills to operate systems designed elsewhere, under terms set by others. A more adequate approach would position the learner (and, crucially, the community) as a subject with civic standing: someone with both the right and the capacity to participate in decisions about whether and how AI enters their ecosystems.
This reframing has concrete pedagogical implications. It means building interrogative capacity alongside tool competence: how to ask who built this, on what data, with what incentives, and who bears the cost of its errors. It means designing for collective sense-making, because AI systems affect communities at scale, not individuals in isolation. And it means orienting programmes explicitly toward governance participation: building the knowledge and confidence for communities to engage with the institutions, however weak or distant, that govern AI deployment in their contexts.
The Question Worth Sitting With
None of this is starting from zero. The AU Strategy, Masakhane, and the CARE Principles each point toward what a governance-first approach looks like when it is taken seriously: as the foundation of adoption, not an add-on to it. The opportunity for the sector is to recognise the alternative approaches to AI literacy and integration already being built, and to ask honestly whether its funding frameworks are designed to support them.
The speed of AI deployment across LMICs is not slowing down to accommodate that question. Grant cycles do not pause for governance design, and the frameworks consolidating right now will encode assumptions about what AI literacy is for, and who it serves, for the next decade. Those assumptions will shape a generation of programmes before anyone thinks to question them.
This is less about inevitability than about momentum. And momentum can be redirected: by practitioners who already recognise the disconnect, by funders willing to rethink what they reward, and, ultimately, by communities in LMICs who are increasingly articulating what governance-first AI could look like on their own terms. Whether that redirection happens before these approaches become embedded, or only after they are difficult to unwind, remains an open question. But it is one worth asking now, while there is still room for the answer to matter.
Maathangi Mohan works at the intersection of AI governance, digital public infrastructure, and capacity-building across Africa, Asia, and the Pacific.