Anthropology in the Age of AI: Why We Need More Empathetic Humans in Machine-Driven Futures

[Illustration: an AI microchip weighed against a diverse team on balancing scales, symbolising ethical leadership, human-centred decision-making, and the value of people in the age of artificial intelligence]

Let’s get one thing straight: Artificial Intelligence (AI) isn’t inherently evil.

It’s not some rogue robot army plotting to overthrow humanity. But let’s not kid ourselves—AI is far from neutral.

These systems are built by humans, trained on human data, and deployed in human societies, which means they come preloaded with all the messy biases, blind spots, and power dynamics we’ve been grappling with for centuries.

Enter anthropology—the discipline that’s been unpacking human complexity long before algorithms were even a thing.

If we’re serious about building ethical and inclusive spaces and workplaces in the age of AI, anthropology isn’t just helpful—it’s essential.

The Social Dangers of AI

Here’s the crux of the problem: AI systems don’t just reflect the world as it is; they reinforce the world as it has been, complete with all its inequalities and injustices. This is what scholars call epistemic injustice. AI operates on the assumption that knowledge is universal and objective, but the datasets used to train AI are often biased, incomplete, or skewed toward dominant perspectives: Western, male, white, cisgender, you name it. Marginalised voices? Barely a whisper in the machine.

Let’s break that down.

Testimonial Injustice 101: Imagine screaming into a void where your voice gets automatically downgraded to “unreliable narrator” status. That’s AI for marginalised groups. When systems like HireVue analyse speech patterns and facial expressions to rank job candidates, they’re not measuring competence; they’re codifying decades of workplace discrimination into a sleek, automated package.

You might remember when Amazon’s resume-screening tool famously penalised applications containing the word “women’s” (as in “women’s chess club captain”) because its training data reflected a male-dominated tech industry. The result? A self-replicating cycle where the algorithm mistakes historical bias for “merit.”
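
To make that self-replicating cycle concrete, here’s a minimal, hypothetical sketch (it has nothing to do with Amazon’s actual system): a toy scikit-learn classifier trained on invented CVs with historically skewed outcomes. The features are innocent; the labels carry the discrimination, so the model learns it.

```python
# Toy illustration of bias laundering: train on biased historical
# hiring outcomes and the model learns to penalise the word "women".
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Invented past CVs and their historical outcomes (1 = hired, 0 = rejected).
cvs = [
    "chess club captain, python developer",
    "python developer, hackathon winner",
    "women's chess club captain, python developer",
    "women in tech mentor, python developer",
]
hired = [1, 1, 0, 0]

vectoriser = CountVectorizer()
X = vectoriser.fit_transform(cvs)
model = LogisticRegression().fit(X, hired)

# Inspect the learned weights: "women" ends up with a negative
# coefficient, i.e. the model treats it as evidence against hiring.
for word, weight in zip(vectoriser.get_feature_names_out(), model.coef_[0]):
    print(f"{word:10s} {weight:+.3f}")
```

Nobody wrote “penalise women” anywhere in that code; the bias arrived through the labels. Which is exactly why auditing outcomes matters more than auditing intentions.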

Hermeneutical Injustice: Now imagine lacking the vocabulary to even describe your oppression. Generative AI tools like ChatGPT amplify this by drowning out minority perspectives in their training data. For example, Indigenous knowledge about sustainable land management gets flattened into Western-centric “best practices,” while non-binary gender identities are erased by binary language models.

When ChatGPT describes a nurse as “she” and a CEO as “he” 80% of the time, it’s not a glitch; it’s a reflection of training data poisoned by centuries of sexist labour hierarchies. These systems don’t just fail to represent marginalised voices; they systemically perpetuate racial and gender hierarchies and suppress the conceptual frameworks needed to challenge dominant narratives.
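
A figure like that is checkable, by the way. Here’s a hedged sketch of a simple pronoun audit: generate() is a hypothetical placeholder for whatever text-generation call you actually have access to, and the prompt is invented.

```python
import re
from collections import Counter

def generate(prompt: str) -> str:
    """Hypothetical stand-in for a real text-generation API call."""
    return "She checked the patient's chart before her shift ended."

def pronoun_audit(prompt: str, samples: int = 100) -> Counter:
    """Tally gendered pronouns across repeated completions of one prompt."""
    counts = Counter()
    for _ in range(samples):
        text = generate(prompt).lower()
        counts["she/her"] += len(re.findall(r"\b(?:she|her|hers)\b", text))
        counts["he/him"] += len(re.findall(r"\b(?:he|him|his)\b", text))
    return counts

# With a real model behind generate(), a heavy skew on prompts like
# this one is the kind of evidence behind the "80% of the time" claim.
print(pronoun_audit("Write one sentence about a nurse at work."))
```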

Echo Chambers & Bullshit Machines: Large Language Models (LLMs) like ChatGPT don’t know anything—they’re just really good at predicting which words statistically go together. That means they can spew out total bollocks with the polished tone of an Oxford grad. This is called confabulation—basically, the AI equivalent of making shit up. And in an online world already drowning in disinformation, that's a pretty big problem. If you’re already inclined to believe that climate change is a hoax, the algorithm’s job is to keep you engaged—which often means feeding you more of the same crack. The result? A turbocharged propaganda loop that turns fringe ideas into mainstream “debate.”
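
To see why “predicting which words go together” has nothing to do with truth, here’s a minimal sketch: a toy bigram model built from an invented three-sentence corpus. Real LLMs are vastly larger, but the objective has the same shape: rank likely continuations, not true ones.

```python
from collections import Counter, defaultdict

# Invented corpus: the false claim simply appears more often than the true one.
corpus = (
    "the moon is made of rock . "
    "the moon is made of cheese . "
    "the moon is made of cheese ."
).split()

# Count which word follows which: a bigram model, the tiniest possible LLM.
following = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    following[a][b] += 1

def next_word(context: str) -> list[tuple[str, float]]:
    """Rank candidate next words purely by how often they co-occurred."""
    counts = following[context]
    total = sum(counts.values())
    return [(word, count / total) for word, count in counts.most_common()]

# The model "prefers" the most frequent claim, not the true one:
# "cheese" outranks "rock" two to one.
print(next_word("of"))
```

Truth never enters the objective; frequency does. Scale that up by a few trillion tokens and you get fluent confidence with no built-in fact-checker.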

And then there’s hate speech. AI systems are increasingly being used to generate hate content, not just amplify it. From deepfake videos stirring political chaos to bots writing manifestos that glorify violence, we’re seeing what happens when toxic ideologies get the power of scale and speed. You might have heard of Microsoft’s Tay chatbot fiasco: Tay was designed to “learn” how humans talk by interacting with users on Twitter. Boy, did she learn. Within hours, Tay went from bubbly teen-bot to full-blown bigot, spouting racist and sexist slurs targeted at women and Black communities. Why? Because she was trained, in real time, on unfiltered input from users who deliberately fed her bigotry. Microsoft had to pull the plug in less than a day.

Why This Isn’t Just a Tech Problem

Let’s be clear: this isn’t just about flawed code or dodgy datasets. It’s about who holds power and whose knowledge gets to count. When Western cultural assumptions are baked into systems presented as “objective” or “universal,” we get digital colonialism—where algorithms quietly enforce one worldview while silencing others.

So no, we can’t just keep handing off human decisions to machines in the name of “efficiency.” That kind of outsourcing—of empathy, critical thinking, ethical judgement—is going to bite us in the arse.

Because here’s the truth: the most important work in the age of AI can’t be automated.

What we need now isn’t another tech upgrade. It’s a human upgrade.

We need to invest in the kind of organisational culture that can actually handle AI responsibly. Open, honest communication. Psychological safety. Cultural intelligence. The stuff that makes humans worth working with, and the capabilities that help us ask better questions, challenge bad assumptions, and decide how this technology gets used, and who it serves.

Why Anthropology Matters in the Age of AI

So how does anthropology fit into this picture? Think of it as the antidote to AI’s blind spots. Anthropology is all about understanding human diversity (cultural, social, historical) and challenging dominant narratives. It teaches us to listen actively, practise cultural humility, and approach problems with empathy and nuance. These aren’t just feel-good buzzwords; they’re critical tools for engaging with AI systems in ways that serve all of humanity, not just the privileged few.

  1. Amplifying Marginalised Voices: Anthropologists have long worked to bring marginalised perspectives into the conversation. Whether it’s Indigenous knowledge systems or non-Western ways of thinking about gender and identity, anthropology reminds us that there’s no single “correct” way to understand the world.
  2. Challenging Assumptions: AI developers often operate within narrow cultural frameworks, assuming their systems will work universally. Anthropologists can help identify these blind spots and adapt technologies to different cultural contexts. 
  3. Ethical Accountability: Anthropology doesn’t shy away from tough questions about power and ethics. Who benefits from this technology? Who gets left out? By embedding anthropological insights into AI development and adoption, we can start addressing these questions head-on.
  4. Humanising Data: Behind every dataset is a human story—someone’s life experience reduced to zeros and ones. Anthropology reminds us to see the people behind the data and consider how our technologies impact their lives.

AI Isn’t Evil, but It Needs Humans Who Give a Shit

Let’s be clear: AI isn’t some malevolent force plotting against us. It’s a tool—a powerful one—that we have the responsibility to shape thoughtfully and ethically. But right now, too much of AI is driven by profit motives or techno-utopian fantasies that ignore real-world consequences.

We’re at a crossroads with AI. Will we use it to perpetuate existing inequalities or to challenge them? Will we let it homogenise human experience into neat little boxes or celebrate our messy, complex diversity? The choice isn’t up to machines; it’s up to us. 

Because here’s the thing no algorithm can replicate: the courage to ask hard questions. The ability to sit in discomfort. The willingness to be accountable. These are deeply human capabilities—and they’re the backbone of any organisation that wants to use AI ethically, not just efficiently.

So no, this isn’t just a job for your IT team. It’s a job for your culture.

Investing in Humanity Is the Smartest Move You Can Make

Psychological safety. Cultural intelligence. Ethical reflection. Open, honest, human conversations. The kind that make room for disagreement, vulnerability, and growth. These are skills you simply cannot automate.

Just like you can’t automate emotional intelligence, or template trust. And no matter how many dashboards you build, there’s still no substitute for a workplace where people feel seen, heard, and respected.

That’s where we come in.

At Habitus, we bring anthropology, emotional intelligence, and educational psychology to the heart of organisational transformation. We help teams navigate complexity, not run from it. We build cultures where people aren’t afraid to challenge the algorithm—or the status quo.

AI isn’t going anywhere. But neither are we.

If you want to build a future where technology serves humanity, not the other way around—it starts with culture.

More News & Insights

More Articles