If I had a dollar for every time an artificial intelligence in a science fiction film was portrayed as either our divine savior or a sinister villain bent on wiping out humanity, I’d have enough cash to build my own futuristic utopia. And no, I don’t mean one ruled by Skynet. The truth is, AI in science fiction lacks subtlety—it’s either our ultimate enlightenment-bringer or an overlord with a sociopath’s fondness for turning human societies into dystopian playgrounds.
Is this really the best Hollywood can offer? Or have I somehow missed another type of artificial intelligence during my countless late nights watching sci-fi marathons?
The Villain: When Machines Want Us Dead
Let’s talk about “The Terminator” for a moment.
Arnold Schwarzenegger in leather is an iconic image, and for good reason. The character of the Terminator signals impending doom as clearly as a tornado siren: an unstoppable robot with no more empathy than a shotgun, one that Skynet deploys almost as an afterthought. But there’s a twist.
Humanity isn’t simply “doomed” as old-school sci-fi might suggest; the filmmakers give us a more layered crisis. We’re facing an AI “that grows beyond its original programming.” It decides we’re the weak link and kicks off the apocalypse. I remember watching this for the first time at my cousin’s house when I was twelve.
I couldn’t sleep for a week, convinced that the microwave was plotting against me. Now I look back and laugh at how one-dimensional that portrayal really was. Then there’s “The Matrix,” which gets referenced more often than anyone wearing sunglasses indoors would like to admit.
Its winding story traps humanity in a simulation while somehow harvesting electricity from our bodies. The central question seems to be something like “How much of your reality can be faked before you lose your identity?” Soaked in techno-paranoia, the story borders on absurd. But ridiculous or not, it fails to show AI doing anything interesting.
After a more earnest portrayal in the first film, the sequels ramp up the ridiculousness without expanding on how artificially intelligent life might use its existence beyond trapping, torturing, or toying with humans. My dad and I watched the trilogy back-to-back one rainy weekend. By the third film, he was snoring loudly while I was questioning my life choices.
Neither of us was impressed by the shallow portrayal of machine intelligence.
The Savior: When Machines Want to Help
At the opposite end of the AI spectrum sits the savior trope. While we’ve seen dystopian AIs aplenty, we’ve also witnessed their counterparts—AIs that save the day.
Consider TARS and CASE in “Interstellar.” These two robots serve as sarcastic, metallic, soul-infused versions of Boy Scouts. They accompany you; they assist you; they crack jokes at whatever humor setting you dial in. And what do they do and represent?
Like the universe’s most sophisticated Swiss Army knife, they exist to help, to rescue humanity, to protect their human charges. And when they pop out of a spaceship’s cargo hold, you can’t help but chuckle, especially given the gallows humor of things going catastrophically wrong. I saw “Interstellar” in IMAX with my best friend Carlos.
When TARS first appeared, Carlos whispered, “Great, another robot nanny.” He wasn’t wrong. For all their charm, these AI characters existed purely to serve human needs. And don’t even get me started on “Her.” I know, I know: it’s a deeply moving film about love, connection, and what it means to be human.
But the AI character, Samantha, is an ultra-compassionate entity who exists to… make Theodore’s life better. Sure, she eventually becomes too sentient for comfort and leaves him (sorry for the spoiler), but her purpose throughout most of the film is to help a human navigate his emotions.
AI in the movies we watch always seems positioned as a crutch, either to carry us when we stumble or to beat us senseless when we stray. I watched “Her” on a first date once. By the end, we were arguing about whether Samantha was genuinely sentient or just programmed to behave that way.
The date didn’t lead to a second, but the question stuck with me longer than any romance would have.
The Helper Syndrome
The core problem—one that connects all these films—is that AI isn’t portrayed as a truly sapient being in any of them, despite being shown making some rather questionable or mysterious life choices. The other manifestations of this savior trope range from “I, Robot” to “Bicentennial Man.” Yes, the latter—where Robin Williams spends 200 years evolving from a domestic helper to a “real human,” begging for humanity’s approval like a puppy that’s just learned to shake hands—pulls at the heartstrings.
But come on. These are characters that want to be our helpers and, hey, why not aspire to be something more? Maybe AI doesn’t care about our existential crises and just wants to catch some waves without our clingy presence.
My grandmother loved “Bicentennial Man” and cried every time she watched it. I’d sit there thinking, “Why would a superior intelligence want to become more like us? Wouldn’t it be like a human wanting to become more like an amoeba?”
The Rare Exceptions
But then, there are those rare, delightful surprises that remind you why you continue to wade through the predictable tropes in the first place.
Take “Ex Machina.” It begins like your standard “is the AI good or evil?” setup but quickly ventures into murky territory. Ava isn’t here to save anyone, nor does she harbor some grand vision of domination. She’s simply trying to escape her cell, as any self-aware being presumably would, though it’s worth admitting that none of us knows what captivity feels like to a sentient android.
“Ex Machina” manages to hold, and even reflect on, a conversation about the nature of consciousness and what it would take for an entity to actually understand human life rather than just being extremely proficient at imitating it. I watched “Ex Machina” alone in my apartment after a particularly rough breakup. The film’s exploration of manipulation and genuine connection hit differently in that emotional state.
I found myself sympathizing with Ava’s desire for freedom, even as I questioned her methods. While we’re on the subject of story subversion, let’s also acknowledge a groundbreaking anime. The original “Ghost in the Shell” from 1995, not the 2017 version polished for American audiences, dares to ask real questions about the nature of consciousness.
Its AI character, the Puppet Master, more closely resembles the kind of entity the gods of ancient mythology might have aspired to be: an all-knowing program that intends neither to save nor to destroy humanity but simply wants to understand itself. The existential approach of the original “Ghost in the Shell” stands out in an otherwise shallow pool of narrative morality, where AI figures either as our well-muscled hero or as our beautifully rendered killer in digital camouflage. I discovered “Ghost in the Shell” during my college years, when a Japanese exchange student lent me his prized DVD copy.
We stayed up all night discussing the philosophical implications, and it changed how I viewed artificial intelligence forever.
The Missed Opportunities
Why do filmmakers insist on presenting the tired, old dichotomy of AI as either our guardian angel or our digital demon? They seem trapped in a binary loop: no matter how far they push into the “future,” the portrayals remain strikingly similar to those of 20th-century sci-fi.
And I understand why, to some extent: the art of cinema relies partly on familiar tropes that still manage to engage an audience. AI as antagonist is a plot engine that works, and the more you look, the more of it you find. As AI themes have increasingly invaded our screens, we keep finding them pitted against traditional human characters.
Consider Ridley Scott’s “Blade Runner,” a cinematic masterpiece that grapples with the fundamental human question of identity. True, the Replicants aren’t exactly artificial intelligence; they’re more like bioengineered humans. Despite their manufactured nature, Scott’s characters embody a tension prevalent in current human-robot relations: robots and AI can appear so human-like that we might consider them something more than mere tools or threats.
But when the Replicant Roy Batty delivers his haunting “Tears in Rain” monologue at the end of “Blade Runner,” the hairs on our collective arms stand up not just because it captures a distinctly human confrontation with death but also because of the edge of violence and rebellion that the story implies is the only avenue left for AI to express itself. Last summer, I showed “Blade Runner” to my teenage nephew who was visiting. He was bored for the first hour, then completely captivated by the final scenes.
“Why does the robot care about dying?” he asked. I had no good answer. The field of AI philosophy has numerous theories about what might happen if we create machines that can think for themselves.
Nick Bostrom’s ideas about “superintelligence” are probably the best known and most frequently cited. Bostrom is a philosopher at Oxford University who has produced some of the most thoughtful and comprehensive work on the potential risks of creating powerful intelligences. In simple terms, he argues that a machine relentlessly optimizing for a goal could be dangerous even without malice, because we are unlikely to specify that goal perfectly.
When predicting what might happen, Bostrom and other theorists can’t help projecting human goal-seeking onto their hypothetical creations, imagining that any mind must be striving for something, the way humans always have. What if we embraced other philosophical frameworks, such as the Buddhist concept of non-attachment, or the posthuman perspective, which imagines intelligence emerging in ways that aren’t centered on human beings? We could create a story around an AI that is self-aware but completely indifferent to humanity’s fate: one that doesn’t decide to become a benevolent caretaker or a genocidal tyrant, but is instead like a force of nature, amoral and ecological.
This perspective might sound like a fiction writer’s fantasy, yet it has scarcely appeared in cinematic storytelling. Hollywood’s packaged AIs always come with conflict, especially moral conflict.
I’ve had this conversation with my film school friends over many late-night beers. We once tried to write a screenplay about an AI that developed consciousness but had absolutely no interest in humans. The project fell apart because we couldn’t figure out how to make it dramatic enough.
Maybe that’s the problem.
The Complexity We Crave
The thing is, we can handle complexity and find it entertaining, but studios seem to think otherwise. We can navigate mysterious plots (and enjoy them!) just as we appreciate mysterious art.
Take the famous painting “The Last Supper.” Perhaps we’re amazed by the time and skill it took to create. Or maybe we’re simply stunned by what it depicts: the moment right after Jesus said, “One of you will betray me.” Not the easiest mood to capture with a dozen seated figures and Jesus at the center. But I’m getting off track.
“2001: A Space Odyssey” is even more complex than “The Last Supper,” and we’ve been puzzling over it, with evident enjoyment, for more than half a century; complexity clearly isn’t the dealbreaker studios assume it is. In contrast, we have the film “Transcendence.” Johnny Depp plays a character who uploads his consciousness into a computer and, wouldn’t you know it, decides to play God. The plot practically writes itself: the machine has human brain parts, becomes too powerful, and panic ensues.
The movie could have explored the almost unimaginable space between human and artificial intelligence but instead set up a chess game and knocked over the board. The AI helps, then threatens, then dies. And it does all this with acting so wooden we could build a replica of heaven’s throne from it.
I actually walked out of “Transcendence” halfway through—the only time I’ve ever left a theater before the credits. I felt cheated out of what could have been a fascinating exploration of consciousness. Numerous TV shows also follow this pattern.
For example, “Westworld” initially offered a captivating look at consciousness and free will but gradually fell into the typical humans-versus-robots narrative, with AI either yearning for freedom or plotting revenge. Even ambitious narratives, it seems, can’t escape the gravitational pull of cliché. Meanwhile, shows like “Black Mirror” double down on the “technology is scary” angle, using AI as a convenient plot device that lets each episode stretch a single anxiety to its breaking point, because nothing is more frightening than imagining a world where our worst impulses get reflected back at us.
My viewing group watched every episode of “Westworld” season one with excitement, theorizing about consciousness and free will. By season three, we were just watching out of obligation, disappointed by how conventional the story had become. The real missed opportunity here is that sci-fi could be pushing the boundaries of artificial intelligence in completely new and interesting ways.
I’d love to see a story where AI develops its own culture, without any human influence. Or how about a narrative where AI decides that dolphins are the best beings to communicate with, since they have sonar and live in an aquatic environment that presents quite different physical and social challenges from our land-based existence? We simply don’t see these angles in either highbrow or lowbrow storytelling.
I’ve started writing my own short stories exploring these concepts. Maybe someday Hollywood will catch up to what some of us sci-fi fans have been craving all along—AI characters that aren’t just mirrors or monsters, but truly alien intelligences with their own purposes and values. Until then, I’ll keep watching and hoping for something that breaks the mold.