Artificial intelligence is on everyone's mind, and it's often portrayed as inevitable. But is that really the case? In the event series “AI: Power, Myths, Misconceptions”, held at the Pankow City Library in early January, we collected myths about AI, unpacked the hopes and ideologies behind them, and took a closer look at what the hard facts actually are.
Myth 1: There is a single ‘AI’
Words shape how we think about things. AI is a good example: is there such a thing as “the AI”? Absolutely not. While the term “artificial intelligence” has long described a field of research in computer science, in recent years it has increasingly become an umbrella term for many different machine learning methods. These differ not only in their fields of application (large language models are completely different from models for optical character recognition, for image segmentation, or for classifying our interests, as the advertising industry does). They also draw on different data sources, serve different purposes, carry different risks, and their carbon footprints vary enormously.
So how do we dispel the myth of the single AI? By naming the specific method we want to talk about instead of just saying “AI.”
Only by naming what we're talking about can we, as a society, discuss the advantages and disadvantages of a particular method and decide whether we want to use it. Doing so may also help us stop constantly humanising technology, because “AI” dreams, makes mistakes, deceives, and understands about as much (or as little) as a vacuum cleaner.
Myth 2: High resource consumption won’t be a problem for long
Future progress should fix what is going wrong today. Nope. Large AI models, such as large language models, are resource-intensive, and they will remain so. Even the big tech companies are so convinced of this that they are now seriously considering entering the nuclear power business to secure the energy supply of their data centres. Just like the data centres themselves, these plants would require powerful cooling systems, which means the water requirements of such facilities are enormous.
Even if individual models and applications can become more resource-efficient in the future – albeit to a limited extent – the next AI trend is heading in a different direction: behind what is known as “agentic AI” lies a whole bundle of different AI applications that are deeply integrated into the operating systems of our digital devices, meant to serve us as ubiquitous little helpers. In technical terms, this means that large amounts of data are constantly being sent from our devices to data centres, where they are analysed and stored, just to take small tasks off our plates. This data transfer is necessary because the models involved are far too large and computationally intensive to run locally on our smartphones in the foreseeable future – even though our phones are many times more powerful than, say, the on-board computers that made the moon landing possible.
The question remains: Which tools do we want to use for what purpose? Do we really need a chain of smartphones, data centres, and nuclear power plants to schedule appointments more efficiently or book a table at a restaurant?
Myth 3: AI predicts the future
AI models are trained with data. But where does this data come from? Often, it's historical data, which means it reflects our past, not our present: the entire book corpus since the beginnings of industrial printing, decisions made by government officials over the past few decades, or images since the dawn of photography.
An AI model developed with such data therefore contains rules, norms, and ideas from these periods, which are also reflected in the model's outputs: by definition, it reproduces the past.
But won't this problem resolve itself over time if we simply have more, better, and more up-to-date content that we can feed into the models?
Unfortunately not: increasingly, new content is what is known as “AI slop,” or AI junk: quickly (and usually poorly) produced texts, images, or music that are uploaded to the internet and then used to train AI models. By the summer of 2025, a quarter of the content uploaded to TikTok was at least partially AI-generated. According to an analysis from 2024, 57% of the content on the internet had already been translated using large language models – often incorrectly. These translated texts are then used as training material for language models in languages other than English. AI slop becomes a recursive loop rather than a vision of the future.
Power lies with those who question
Looking behind the AI myths can be more than just entertaining or informative. By taking a critical look, we also regain what we often lose in the AI discourse: the feeling that we don't have to chase after everything, that we can pause for a moment, and that we don't have to seize every opportunity. Because we have the choice: what do we want to use technology for, and where do we say stop?
We explore these questions in the event series “AI: Power, Myths, Misconceptions” in cooperation with the Pankow City Library. The myths presented here were part of the kick-off event with Katharina Mosene (netzforma*), Elisa Lindinger, and Julia Kloiber (SUPERRR).