Leahs Gedanken

Let's walk a stretch of the way together

Why I won't talk about AI anymore.

Let’s start with a definition from a recent article by tante that I want to adopt for this post:

So while I don’t think that “AI” is a great term to use, we will keep using it for the rest of this text in the understanding that dominates the term right now: In that reading “AI” stands for a class of stochastic machine learning systems that can store and apply patterns extracted from data in order to do either pattern recognition (think computer vision) or (and that is the dominant narrative vehicle today) as generative systems (“generative AI” or “genAI”). So when I write “AI” think ChatGPT or Claude or Gemini or Deepseek, etc.

I don’t want to talk about AI anymore. I hate it. Everything about it. I have thought a thousand thoughts about it. Over the last few months I have been thinking about it almost constantly, and not always because I wanted to. I’m forced to think about it when I have to protect the infrastructure entrusted to me against attacks by these systems. I have to think about it when I read the news or social media, watch movies, listen to music, at work, and in many other places, because these abominations are shoved everywhere, and you just can’t evade them anymore.

I think about arguing and explaining and grasping all the aspects of why I think it is problematic, trying to find the one argument that will finally solve the problem. I think about, and hope for, finding an argument to convince the people I care about why it is bad, despite knowing there will be no single holy grail of an argument that will change someone’s mind if they don’t want it changed.

This morning, while still ill, I thought, “Leah, what the hell are you actually doing here?” Why do you try so hard to defend your opinion and spend soooo much time reading, writing, thinking, and discussing the topic? You are allowed to think that AI is inherently unethical! You’ve done your homework! And why do you put yourself under pressure and in a position where you are on the defensive all the time? This is not healthy.

Why do I have to defend anything at all? There are so many good articles, studies, and arguments out there about all the things that have gone wrong with this technology. Or in other words, why do I have to prove that it’s not wise to use this technology like any other, or at all?

Why don’t those who want to use these systems have to argue why their use case is so special that it outweighs all the ethical, environmental, social, and other problems that come with it? And that’s not even a new thought. We know this from many places, for example from evidence-based science. There, you have to argue before an ethics board to prove that you have thought about the ethical implications of what you plan to do before you run a trial with patients at all. And the more extreme your ideas are, the stronger your arguments have to be to outweigh all the implications. And even then you have to prove that your result is superior and has fewer issues than the existing solution. Let’s do this with technology! Remember why there is such a thing as an impact assessment.

I think this will, hopefully for my sanity, be my last article about AI. Sometimes the only winning move is not to play the game. I have already played it for too long. Destroy instead of create if you want to. I have my opinion, and it is backed by a hundred well-informed arguments. And if you haven’t done your homework proving that your use case is special enough to outweigh all these arguments, please just stop shoving your abominations in my face. I will silently try to remove them from my life, though, so don’t take my silence for giving up; it’s a silence of radicalisation, saving my energy for what is to come.

Photo by Erik Mclean on Unsplash


Knowing myself well enough to know that I will not stop reading about this topic, I will use this article to collect all the good articles that back my stance or are simply interesting in this regard. So below there will be an occasionally updated list of recommended further reading:

The fascism problem

The craft and the arts

The ethical

The philosophical

The overall impact

The overrating