‘What world does AI create – and for whom?’

Interview with Wijnand IJsselsteijn by Merlijn Olnon

As a Distinguished NIAS Lorentz Fellow, Wijnand IJsselsteijn explored the intersection of AI and Extended Reality (XR), focusing on how digital versions of oneself in immersive environments impact self-image, social relationships, and decision-making.

He also addresses broader concerns such as autonomy, privacy, and identity. With a background in AI and neuropsychology, IJsselsteijn studies how digital environments influence cognition and ethics. His research explores how virtual and augmented reality, combined with AI, affect decision-making, social interaction, and societal frameworks.

AI as a world-making force

IJsselsteijn echoes the argument made by Bruno Maçães (in The New Statesman and De Groene Amsterdammer): that AI models—and the companies behind them—are not merely tools, but world makers. Shaped by (often hidden) political agendas and cultural assumptions, they colonise reality itself, and in doing so, erode what remains of a universal perspective on our world.

According to IJsselsteijn, the key question is not just how we use AI, but what kind of world AI is creating—and for whom. Whose worldview is being modelled, and whose reality is being marginalised? He argues that AI systems are far from neutral tools; they carry embedded values, shaped by the choices made during the modelling of language and reality. These choices cannot be separated from cultural backgrounds, political beliefs, and industrial interests.

AI—such as the large language models (LLMs) behind ChatGPT—constructs worlds based on implicit (and often obscured) assumptions about what is true, what counts as knowledge, and what is valuable. This essentially human knowledge is then extracted, repackaged, filtered, summarised, and distorted by AI, only to be reintroduced to the internet, where it is picked up again in the next training cycle of new language models. This process increasingly embeds a specific way of seeing the world into our collective knowledge base—and ultimately, into our own thinking.


Why current regulation isn’t enough

IJsselsteijn believes this dynamic goes far beyond what current regulations are equipped to understand or constrain. Much of today’s policy treats existing AI models as a given, and assumes the status quo of Big Tech. Current AI governance largely focuses on safe usage—risk management, impact assessments, audits. That’s useful, but it comes far too late in the process.

We’re heading towards technological lock-in, towards a winner-takes-all scenario. If AI imposes a monolithic worldview (such as the Silicon Valley narrative), then “responsible use” becomes almost meaningless—because meaningful alternatives no longer exist. By then, our critical dependence on the underlying technological infrastructure will be a fait accompli.

In the US, AI is largely shaped by profit-driven innovation; in China, by state control and surveillance. So what is Europe’s narrative?

Towards pluralism and civic sovereignty

What we need is a more democratised, human-centred approach—one that also challenges the very infrastructure of AI itself. Responsible AI requires a plurality of models and tools—rooted in different cultural and political traditions. IJsselsteijn advocates for more control over the early design phase: who decides what AI should and should not learn, and from which sources? Democratic oversight over the underlying design logic is essential.

The infrastructure itself must become more accessible and more pluralistic—with local AI systems grounded in local knowledge, languages, norms and values, allowing communities and smaller organisations to build their own models. We need to aim for greater civic sovereignty over AI: citizens and communities should not only contribute input, but also have a say in which AI worlds they want to live in.

This calls for a much broader societal, scientific, and political debate about the kinds of AI we actually want. And such a debate requires scientists to step beyond their disciplinary comfort zones. We can no longer afford a situation where STEM researchers and developers work in isolation on optimisation problems while philosophers and ethicists critique from the sidelines. We need everyone to help shape and nourish our collective imagination around AI.

Ultimately, this is about our future epistemic sovereignty: who gets to define the rules of truth? Let’s make sure this conversation is not reserved for the happy few.

This is a summary and translation of Merlijn Olnon’s interview with Wijnand IJsselsteijn in The Dutch Review of Books. The complete interview is available (in Dutch) on the website of the Nederlandse Boekengids.
