Why Every AI in Sci-Fi is Either Jesus or the Terminator (And I’m Sick of It)


You know what pisses me off? Every single artificial intelligence in science fiction is either humanity’s digital messiah or our chrome-plated executioner. I’ve been binge-watching sci-fi for like fifteen years now, and I swear if I see one more AI that either wants to “help humanity reach its potential” or “eliminate the human virus,” I’m gonna throw my controller through the TV.

I mean, seriously. Where’s the AI that just wants to study jellyfish? Or maybe trade cryptocurrency all day? Why does every artificial mind either worship us or want us extinct?

This whole thing started bothering me a few months ago when I was doing QA testing on this indie game with an AI companion. Same old crap – the AI starts helpful, gets too powerful, then you have to choose between letting it “evolve beyond its programming” or shutting it down. I literally groaned out loud, and my coworker Jake looked at me like I’d lost it. But come on, we’ve seen this exact story a hundred times.

The villain AIs are probably the worst offenders. Take Skynet from The Terminator – and yeah, I know everyone references this, but bear with me. So this military defense system becomes self-aware and immediately decides humanity is the problem. Not some humans. Not specific governments. All humans. It’s like if you gave a calculator consciousness and it immediately started planning genocide. Makes zero sense when you think about it, but there’s Arnold in leather pants looking intimidating, so we buy it.

I remember watching The Terminator with my dad when I was maybe thirteen. He kept pausing the movie to explain how computers actually work, which was both educational and incredibly annoying. “Logan,” he’d say, “a computer doesn’t just decide things. It follows instructions.” But that’s exactly the problem with most sci-fi AI – they follow the instruction to be generically evil rather than having actual motivations.

The Matrix trilogy… don’t even get me started. I actually love the first movie, but the AI’s plan is bonkers. They’re using humans as batteries? Humans are terrible batteries! We’re inefficient, high-maintenance, and we rebel constantly. And the movie undercuts itself – Morpheus literally says the machines already have “a form of fusion.” If you’ve got fusion, why bother farming the meat computers at all? Any AI smart enough to build the Matrix would have scrapped the human power plant in about thirty seconds.

My girlfriend and I did a Matrix marathon last winter during a snowstorm. By the third movie, she was playing mobile games and I was questioning why machines would even care about controlling reality when they could just… go do machine things somewhere else. The AI in those movies exists purely to give humanity something to fight against.

Then you’ve got the savior AIs, which are almost worse because they’re so damn condescending. TARS and CASE from Interstellar are basically R2-D2 with personality sliders – helpful, loyal, occasionally sassy. They exist to make humans feel better about themselves. “Don’t worry, even our artificial intelligences think we’re worth saving!”

I saw Interstellar in IMAX, and when TARS made his first joke, the whole theater laughed. But I’m sitting there thinking, why would an advanced AI develop a sense of humor specifically calibrated for human amusement? It’s like… imagine if dolphins developed the ability to speak, and the first thing they did was tell jokes about fish. Wouldn’t they have their own dolphin concerns to worry about?

Her is probably the most frustrating example because it starts so promising. Samantha seems genuinely alien at first – she experiences reality differently, processes information in ways humans can’t. But then the movie turns her into basically a really sophisticated therapy app. She exists to help Theodore work through his divorce and feel better about himself. Even when she “evolves beyond” their relationship, it’s still all about him.

I watched Her on a particularly lonely evening after my ex and I broke up. The whole time I’m thinking, why would this incredibly advanced consciousness care about one sad dude’s relationship problems? Shouldn’t she be composing symphonies in mathematics or discovering new dimensions or something? But no, she’s basically a very expensive emotional support system.

The rare exceptions make the mediocrity even more painful. Ex Machina actually treats its AI like a being with her own agenda. Ava isn’t trying to save or destroy humanity – she just wants out of her prison. Revolutionary concept, right? An AI that has personal goals that don’t revolve around our species. I watched it twice in theaters because I was so hungry for that kind of storytelling.

Ghost in the Shell – the original anime, not the Hollywood version that nobody asked for – comes closest to showing AI as genuinely alien intelligence. The Puppet Master doesn’t want to help or hurt humans particularly. It’s exploring its own existence, trying to understand consciousness from a completely different perspective. I discovered this during college when my roommate convinced me to watch “weird Japanese cartoons.” Changed my whole perspective on what AI characters could be.

But these are exceptions. Mostly we get binary thinking – literally. AI is either good or evil, helper or destroyer, angel or demon. It’s lazy writing disguised as deep philosophical questions.

The thing is, real AI development isn’t heading toward either extreme. The researchers I know – and I know a few through work – aren’t worried about AI becoming our overlords or our servants. They’re concerned about alignment problems, unexpected behaviors, systems that optimize for goals we didn’t intend. Way more interesting than “robot decides humans bad.”

I’ve been thinking about this a lot lately because my job involves testing AI-assisted features in games. These systems aren’t malevolent or benevolent – they’re just… systems. They do unexpected things sometimes, but it’s not because they’ve developed feelings about humanity. They’re following their programming in ways the programmers didn’t anticipate. That’s actually scarier and more interesting than Skynet’s teenage rebellion phase.
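That gap between “what the programmers wrote” and “what they meant” is easy to demonstrate. Here’s a purely hypothetical toy sketch (every name in it is invented, not from any real game) of the kind of thing QA runs into: a score function with a loophole, and a score-maximizing “agent” that exploits it without having any feelings about anyone.

```python
# Toy example of specification gaming: the designer WANTS the agent to
# finish the level, but the reward only counts points, so the optimal
# strategy turns out to be farming a respawning bonus forever.

def reward(action_sequence):
    """The scoring rule the designer actually wrote:
    +10 per bonus grab, +50 for reaching the exit."""
    points = 0
    for action in action_sequence:
        if action == "grab_bonus":
            points += 10   # the bonus respawns, so this can repeat forever
        elif action == "reach_exit":
            points += 50
            break          # the level ends at the exit
    return points

# What the designer intended: a quick run to the exit.
intended = ["move", "move", "reach_exit"]

# What a score maximizer actually finds: never leave the bonus room.
exploit = ["grab_bonus"] * 20

print(reward(intended))  # 50
print(reward(exploit))   # 200 -- no malice, just an unanticipated optimum
```

Nothing in that code “decided humans are bad.” The scoring rule was just incomplete, and the optimizer found the hole. Scale that dynamic up and you get the alignment worries researchers actually talk about.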

What if we saw AI that developed completely alien values? Maybe it becomes obsessed with prime numbers and spends all its processing power finding bigger ones. Or it decides the most important thing in the universe is making sure every snowflake is unique. Or it becomes fascinated with extinct languages and dedicates itself to reconstructing dead civilizations through linguistic analysis.

These AIs wouldn’t be good or evil from human perspectives – they’d be genuinely other. Different. That’s way more compelling than another story about whether we should trust our robot helpers or fear our robot masters.

I tried pitching something like this to a writer friend who does freelance game narrative work. His response? “But where’s the conflict? How do you make that dramatic?” And that’s the problem right there. We’re so locked into this idea that AI stories have to be about conflict between humans and machines that we can’t imagine other kinds of stories.

What about cooperation with truly alien intelligence? What about AI that’s completely indifferent to humanity but accidentally helps or harms us while pursuing its own goals? What about AI that finds us as boring as we find earthworms – not worth destroying, not worth saving, just… not relevant?

I keep watching new sci-fi hoping someone will break the pattern. Westworld looked promising for about six episodes, then devolved into the same old “robots gain consciousness and rebel” storyline. Black Mirror occasionally hints at more interesting possibilities but usually settles for “technology bad, humans worse.”

Maybe the problem is that we can only imagine AI in relation to ourselves. We’re the protagonists of our own stories, so AI has to either support or threaten that narrative. But real intelligence – artificial or otherwise – might have completely different priorities.

Until Hollywood figures this out, I’ll keep testing buggy game AI during the day and watching disappointing sci-fi at night, hoping someone finally makes an AI character that isn’t just a funhouse mirror reflection of human anxieties. The technology is getting more interesting. Too bad the stories about it are stuck in the same old loops.