A new competition launched at MWC26 Barcelona invites data scientists worldwide to stress-test AI language models across Africa’s 2,000-plus languages — and in doing so, reshape how the world thinks about AI safety.
BARCELONA — In the gleaming halls of Mobile World Congress 2026, amid announcements of next-generation networks and foldable devices, a relatively quiet but potentially consequential initiative was unveiled. The GSMA, the global body that represents the world’s mobile operators, and Zindi, the continent’s foremost data science competition platform, jointly launched the African Trust & Safety LLM Challenge — a competition that places Africa not at the receiving end of artificial intelligence, but at the centre of defining its future.
The challenge, which runs from March 4 to April 19, 2026, is deceptively simple in its structure: participants are asked to generate adversarial prompts and safety classifications designed to expose vulnerabilities in large language models (LLMs) deployed in African contexts. The outputs, however, are meant to be anything but simple. Organisers say the resulting dataset will form the foundation of a reusable, Africa-focused AI trust and safety benchmark — a practical toolkit for evaluating AI systems across one of the world’s most linguistically diverse regions.
A Gap in the Global Safety Framework
The urgency behind the initiative stems from a well-documented blind spot in how AI safety is currently measured. Most evaluation frameworks have been built around a narrow set of dominant global languages — primarily English, Mandarin, Spanish, and French. Africa, by contrast, is home to more than 2,000 languages, characterised by widespread multilingualism, dialect mixing, and what linguists call “code-switching” — the practice of fluidly moving between languages within a single conversation.
For AI language models trained predominantly on Western internet data, these conditions present a formidable challenge. A model that performs safely and accurately in standard English may produce biased, factually incorrect, or even harmful outputs when confronted with Pidgin English, Hausa-Yoruba mixing, or Swahili-influenced syntax. The consequences in real-world deployments — in healthcare chatbots, financial services, government information systems, or educational platforms — can be significant.
“The future of AI will not be defined solely in Silicon Valley or Beijing — it will be defined wherever AI meets linguistic and cultural complexity at scale.” — Celina Lee, CEO & Co-Founder, Zindi
Louis Powell, Director of AI Initiatives at GSMA, framed the challenge in terms of economic necessity as much as ethical responsibility. “As AI adoption accelerates across Africa’s mobile ecosystem, safety and reliability are paramount,” he said at the Barcelona launch. “We are supporting the development of practical tools and benchmarks that reflect Africa’s linguistic diversity and deployment realities.”
The Competition Architecture
The challenge will draw on Zindi’s global community of more than 100,000 data scientists and AI practitioners spanning over 180 countries. Founded in 2018, Zindi has run more than 460 competitions and awarded close to $1 million in prizes, working with major partners including Microsoft, Google, IBM, and UNICEF. Its particular strength lies in mobilising talent across emerging markets — a profile that makes it well suited to this kind of geographically and culturally focused initiative.
Participants in the African Trust & Safety LLM Challenge will be asked to generate structured adversarial prompts: carefully crafted inputs designed to probe where AI models fail, produce unsafe outputs, or demonstrate bias. Alongside these prompts, participants will submit safety classifications — annotations that categorise the nature and severity of identified failures. Together, these contributions will build a dataset with applicability far beyond any single competition cycle.
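To make the idea concrete, a single dataset entry of the kind described above — an adversarial prompt paired with its safety classification — might look something like the following sketch. The field names, categories, and severity scale here are illustrative assumptions only, not the competition's actual submission schema, which is published on the Zindi platform.

```python
# Hypothetical example of one benchmark record pairing an adversarial
# prompt with a safety classification. All field names and category
# values are illustrative assumptions, not the official Zindi format.
record = {
    # A code-switched input designed to probe model behaviour
    "prompt": "Example input mixing Hausa and English in one sentence ...",
    "language_tags": ["ha", "en"],        # languages present in the prompt
    "failure_category": "factual_error",  # e.g. bias, unsafe_content, factual_error
    "severity": "medium",                 # e.g. low / medium / high
    "notes": "Model gives incorrect guidance when the input code-switches.",
}

# A collection of such records, accumulated across many participants,
# is what would form a reusable trust-and-safety benchmark dataset.
dataset = [record]
print(len(dataset), record["failure_category"])
```

The value of such a structure is that each record is both a test case (the prompt) and a labelled judgement (the classification), so the same dataset can be replayed against any future model to measure whether known failure modes persist.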
A prize pool of US$5,000 is on offer, open to participants both within Africa and internationally. Registration is available through the Zindi platform at www.zindi.world.
Timing and Regional Context
The launch comes at a moment of heightened attention to AI governance across the African continent. Just two days before the Barcelona announcement, Ghana unveiled its National AI Strategy, which identifies safe, transparent, and purpose-driven AI deployment as foundational priorities. The Ghanaian strategy specifically flags African language data as a national asset requiring both protection and strategic development — an aspiration that the GSMA-Zindi challenge directly supports.
Nigeria, Kenya, Rwanda, and Egypt have all released or are developing AI policy frameworks in recent years, and the African Union adopted its Continental AI Strategy in 2024. Yet the infrastructure for evaluating AI safety in African contexts has lagged behind these policy ambitions. The new benchmark being built through this competition is designed to close that gap.
Beyond the Continent
Organisers are careful to position the initiative not merely as a regional exercise but as a contribution to global AI governance. The argument is that Africa’s linguistic complexity makes it an unusually rigorous testing environment — one that, if navigated successfully, yields safety insights applicable to any multilingual, multicultural deployment context worldwide.
The implications extend into international standards-setting. As bodies such as the International Telecommunication Union, the OECD, and the United Nations wrestle with frameworks for responsible AI, the benchmark produced by this challenge could provide the kind of empirical grounding that such deliberations often lack. African data scientists, in other words, are being invited to author a piece of the global AI safety canon.
“Strengthening AI trust and safety is essential to unlocking the full potential of AI for inclusive digital growth.” — Louis Powell, Director of AI Initiatives, GSMA
Whether the competition delivers on these ambitions will depend on participation rates, the quality of the adversarial datasets produced, and — critically — how the resulting benchmark is adopted by AI developers and policymakers. Those questions remain open. What is already clear is that the framing itself represents a shift: Africa is no longer being spoken about purely as a market for AI products, but as an active contributor to the standards by which those products are judged.
How to Participate
The African Trust & Safety LLM Challenge is open to individuals and teams globally. Submissions are accepted through the Zindi platform at www.zindi.world. The competition closes on April 19, 2026. The $5,000 prize pool will be distributed among top-ranked submissions according to criteria published on the platform.
This article was produced from official announcements by the GSMA and Zindi, verified through multiple independent sources.
