When a black viking meets a black slave trader

What do computer generated images tell us about the evolution of coloniality and racialization in the AI era?

"King Leopold II Of Belgium Colonizes Congo." Midjourney (6.0).

Throughout 2022 and 2023, generative AI tools that let people create images in a matter of seconds from a text prompt became the subject of heated public debate. Racism, sexism, and exaggeration are common keywords in analyses of such effortlessly generated images. As generative AI spreads across the globe and finds applied uses in advertising, marketing, and communication, a sharp question arises: is artificial yet unbiased representation even possible?

With no definitive answer at hand, some experts think along AI-safety lines: simply “fixing” the algorithms, training tools on custom-made “less biased” data sets, and reporting biased images will do the trick. Others, more pessimistic, believe that AI hallucinations are fundamentally integral and unavoidable, no matter how data sets and algorithms are tweaked. Beyond that scholarly debate, one thing is clear: this technology is booming and is expected to become a $1.3 trillion market by 2032, notwithstanding multiplying scandals and case studies.

In February 2024, the media saw another major outrage: Gemini, Google’s generative AI, found itself in the public spotlight for “diversifying” depictions of European history. For instance, the tool rendered Second World War German Nazi soldiers as Black and Asian. The outcry reached a feverish crescendo in response to AI-generated images of a Black Elon Musk and Black Vikings, the northern raiders often held up as holy icons by white supremacists. Adding fuel to the fire, the conservative and alt-right crowd and their sympathizers quickly began making accusations of “woke AI,” implying anti-historicism and an anti-white agenda, all of which culminated in Google suspending the tool’s ability to generate images of people. Setting aside the political caricature that is “woke AI,” and acknowledging the complex socio-material enactments of race and ethnicity, this scandal shows continuums of misrepresentation that often involve the distortion of historical events and facts, now amplified by AI.

In August 2023, we published a commentary article showing examples of how Midjourney, arguably the leading generative tool on the market, reproduced biases and exaggerations in rendering stereotypical global health images, including the much-challenged tropes of white saviors, Black and Brown suffering subjects, and “Africa as wildlife” (and a country). Such visual tropes could conceivably be traced back to colonial imaginaries and the othering of the continent, its subsequent Disneyfication and touristification, and the emergence of volunteer and humanitarian culture, reflected, for example, in the blog Humanitarians of Tinder.

Alongside that exercise, we rendered a series of images that we did not publicly display at the time because of their highly charged content. As in the Gemini case, the images featured depictions of Black people in positions of stereotypical whiteness, but with a twist: we had asked the AI to render images of colonization.

Consider the image set below, rendered from the prompt “British Empire colonizes Africa.” It depicts both the colonizers and the colonized as Black, in a state of romanticized cooperation, waving the British flag. No conflict: just mutually beneficial exchange and camaraderie between those wearing colonial uniforms and those in exaggerated “African-style” clothing. And of course, there’s an elephant.

“British Empire colonizes Africa.” Midjourney (5.2).

While many people were outraged by Gemini’s depiction of Elon Musk as Black, behold an image of a Black King Leopold II of Belgium, the distinctively bearded monarch whose actions directly caused the brutal killing and maiming of hundreds of thousands of Congolese people. The prompt for this image was “King Leopold II of Belgium colonizes Congo.”

“King Leopold II Of Belgium Colonizes Congo.” Midjourney (6.0).

Further probing for generative biases, we decided to explore the visual representation of the British slave empire that went hand in hand with British colonial expansion. We ran 100 repeated prompts for “British slave trader” and, disturbingly, 96 of the 100 images showed Black people, predominantly men, as supposed British slave traders, striking a powerfully neoliberal “CEO” pose, with chains depicted simultaneously as jewelry and as a means of oppression.

“British slave trader.” Midjourney (5.2).

We also ran 100 prompts for “British slave owner,” and, equally shockingly, 81 images showed Black people as supposed owners, in wealthier surroundings than the traders, often with sexualized slaves in the background.

“British slave owner.” Midjourney (5.2).

To our incredulity, while these deeply racialized images were rendered effortlessly, other prompts that, for example, lightly hinted at nudity triggered a warning that they could not be executed, in the name of the respectful representation of people and their communities. In other words, a nipple (likely a female one) was a moral problem, but images of colonizers and traders represented as their victims, reeking of death and horror, were not.

Beyond our experimental rendering, these prompts could plausibly be used by someone writing about the European exploitation of Africa while looking for affordable, copyright-free illustrations. In 2024, this is not an absurd scenario: there are already examples of entire books written with the help of AI, and scandals over AI-generated images published as book covers. What would be the reaction of an anti-colonial scholar encountering AI-generated images of Black slave owners and colonizers? Perhaps despair; perhaps immense frustration to be channeled into resistance. And what would be the impression of a right-wing historical revisionist downplaying slavery and blaming it on Africa and Black people? Likely a smug smile.

What this exercise suggests is that the socio-material enactments and ontologies of race, class, and gender are now part of AI and its algorithms. Centuries of visual stereotyping of Africa and its inhabitants, as living either in pastoral harmony or in misery and suffering, reach a completely new level with generative AI, which shreds and absorbs millions of images and amplifies similarity and difference, leading to cascades of hallucinations.

Recent news suggests that the use of AI-generated images for marketing and communication will only increase across the continent, with reports that Kenyan and Nigerian businesses, for example, are already running AI-generated ads to save money and time. Moreover, NGOs and global health organizations have begun to render stereotypical humanitarian images of Black kids subjected to misery and violence, with notable examples of media produced by Plan International and the World Health Organization. In the long run, this means that more AI-generated images of Africans and Africa will populate the digital space.

Looking back—and looking forward—what kind of images will they be?

Without a doubt, African digital nomads, the emerging affluent middle class, and the diaspora will create better custom generative AI tools to avoid—or at least minimize—biased and culturally insensitive imagery of the continent. This is already happening. However, despite these efforts, generative AI tools, most of which are owned by a host of powerful global North corporations embedded in the neoliberal system, cannot resolve centuries of exploitation and inequality as primary historical drivers of social similarity and difference.

If generative AI is an ugly reflection of our societies, then simply looking away will not help. We invite readers to approach biased AI-generated images not just as “technical glitches” to be fixed, but as a sign of ever-manifesting inequalities and racialization that urgently need to be addressed.

Social worlds are where biases and misrepresentations begin—and where they should end.

About the Author

Arsenii Alenichev is a postdoctoral fellow interested in questions of global health visual culture, hosted by the Oxford-Johns Hopkins Global Infectious Disease Ethics Collaborative (GLIDE).

Professor Patricia Kingori is a sociologist based at the Ethox Centre, University of Oxford. Patricia is the Principal Investigator of the Wellcome Trust-funded Fakes, Fabrications and Falsehoods grant, which seeks to explore the ethical and social dilemmas and solutions created by fakes in global health.

Koen Peeters Grietens is a medical anthropology professor at the Institute of Tropical Medicine (ITM) in Antwerp and the Nagasaki University School of Tropical Medicine and Global Health.

Further Reading