You know, after four decades of designing actual spacecraft while reading stories about fictional ones, I’ve come to a weird realization—science fiction has never really been about predicting the future. It’s about holding up a mirror to right now, and honestly, that’s probably why I can’t stop reading it even though most of the physics makes me want to throw books across the room.
Let me tell you how this all clicked for me. I was maybe twelve, rummaging through my dad’s collection of beat-up paperbacks (the man never met a used bookstore he didn’t like), when I found this dog-eared copy of something that just… grabbed me. Even then, before I knew anything about orbital mechanics or propulsion systems, I could tell these stories weren’t really about ray guns and rocket ships. They were about us—about what we hoped for and what scared the hell out of us.
Fast-forward to my career at various aerospace companies, and I’d find myself in conference rooms discussing satellite designs while thinking about some story I’d read the night before. My colleagues thought I was nuts, but here’s the thing—the best sci-fi writers understood something my engineering textbooks didn’t. They knew that every technological advancement comes with a choice about who we want to be.
Take *Star Trek*, which my dad was absolutely obsessed with from day one. I mean, the guy recorded every episode on those old VHS tapes and wore them out rewatching them. When I was twelve, those reruns became my afternoon routine too. Now, from an engineering standpoint, *Star Trek* is complete nonsense—transporters violate basic physics, warp drive ignores relativity, and don’t get me started on the artificial gravity that somehow works when the power goes out everywhere except the gravity generators.
But you know what? None of that mattered. What mattered was this vision of humans who’d figured out how to stop killing each other long enough to explore the galaxy. Gene Roddenberry wasn’t trying to write a technical manual—he was imagining a future where technology served humanity instead of destroying it. Watching Kirk and his crew solve problems through diplomacy and science instead of bigger weapons, that planted something in my twelve-year-old brain that stuck with me through my entire career.
The timing was perfect, really. This was the late sixties, early seventies—we’d just put people on the moon, and everything seemed possible. *Star Trek* reflected that optimism, that sense that if we could build rockets powerful enough to escape Earth’s gravity well, maybe we could escape our worst impulses too. Even the franchise’s take on AI wasn’t some existential threat; it eventually gave us Data, an android trying to become more human. Their faster-than-light travel wasn’t a weapon—it was a tool for exploration and understanding.
I carried that optimism with me into MIT, into my first job designing propulsion systems, into forty years of actual space engineering. And then, somewhere along the way, the stories started changing. Or maybe I started noticing things I’d missed before.
*The Expanse* hit me like a wrench to the forehead. I stumbled across it maybe five years into my retirement—finally had time to watch TV again—and within the first episode I was simultaneously fascinated and horrified. Here was a show that got the orbital mechanics right (mostly), understood how space travel actually works, but used that accuracy to build something much darker than Roddenberry’s Federation.
The physics in *The Expanse* actually makes sense, which as an engineer I appreciated immensely. Ships flip and burn for deceleration, there’s no artificial gravity except through acceleration, and space combat follows actual Newtonian laws. But that realistic science serves a story about humans who’ve spread throughout the solar system and immediately started fighting over resources and territory—just like we’ve always done, but with bigger weapons and longer supply lines.
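For the curious, the flip-and-burn maneuver the show leans on reduces to textbook kinematics: accelerate at constant thrust to the halfway point, flip the ship, and decelerate the rest of the way. Here’s a small sketch in Python; the 1 g thrust and the round 2 AU Earth-to-Belt distance are illustrative assumptions of mine, not anything from the show’s canon.

```python
import math

# Flip-and-burn: constant acceleration a over total distance d, with a
# flip at the midpoint. Each half covers d/2 in time t/2, so
# d/2 = (1/2) * a * (t/2)**2, which gives t = 2 * sqrt(d / a).

G = 9.81        # 1 g of thrust, in m/s^2 (assumed for this example)
AU = 1.496e11   # one astronomical unit, in meters

def flip_and_burn(distance_m, accel=G):
    """Return (trip time in seconds, peak speed in m/s) for a
    constant-acceleration flip-and-burn trajectory."""
    t = 2.0 * math.sqrt(distance_m / accel)  # total trip time
    v_peak = accel * t / 2.0                 # speed at the flip point
    return t, v_peak

# Earth to the asteroid belt, using a round 2 AU as the distance
# (an illustrative number, not an ephemeris calculation):
t, v = flip_and_burn(2 * AU)
print(f"trip time: {t / 86400:.1f} days, peak speed: {v / 1000:.0f} km/s")
```

Run it and you get a trip of about four days at a peak speed north of 1,700 km/s—which is exactly why the show’s crews spend so much time strapped into crash couches.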
Watching Earth, Mars, and the Belt tear each other apart over water and shipping rights, I couldn’t help thinking about budget meetings I’d sat through where we’d argued over satellite contracts while people went hungry. The same petty politics, the same power games, just projected onto a larger canvas. It was like *Star Trek*’s evil twin—same technological possibilities, completely different choices about how to use them.
What really got to me was how relevant it felt. I mean, I was watching this show about resource wars and authoritarian governments in 2018, 2019… and looking around at the actual world thinking, “Yeah, this tracks.” The Belt’s fight for autonomy from Earth reminded me of every independence movement I’d lived through. The water crisis on Mars echoed every environmental disaster I’d watched unfold. *The Expanse* wasn’t predicting the future—it was explaining the present with fusion drives and asteroid mining.
That’s when I really started understanding what sci-fi does. It takes our current fears and hopes, amplifies them through speculation, and forces us to confront what we might become. *Star Trek* emerged from the civil rights movement and the space race—it imagined us at our best. *The Expanse* comes from an era of climate change and rising authoritarianism—it imagines us at our most human, which includes our worst.
Then there’s stuff like Neal Stephenson’s *Snow Crash*, which I picked up on a recommendation from a younger engineer who thought this old guy might appreciate some “classic cyberpunk.” Classic. Thanks, kid. But he was right—Stephenson’s corporate dystopia hit different when you’d spent decades working for companies that prioritized shareholder value over everything else, sometimes including the functionality of the actual products we were building.
The virtual reality stuff in *Snow Crash* seemed ridiculous when I first read it in the nineties. Now my grandkids are wearing VR headsets and living half their social lives online, and Stephenson’s warnings about corporate control of digital spaces don’t seem quite so far-fetched. The guy wasn’t predicting specific technologies—he was asking what happens when we let profit motives drive technological development without considering the human cost.
That’s the pattern I keep seeing. The best sci-fi isn’t about the gadgets—it’s about the choices we make about those gadgets. Will we use artificial intelligence to eliminate drudgery and free humans for more meaningful work, or will we use it to maximize profits while eliminating jobs? Will we use genetic engineering to cure diseases, or to create new forms of inequality? Will we use space travel to expand human knowledge and cooperation, or just export our same old conflicts to new environments?
Every sci-fi story is essentially asking: “If we could do X, what would that reveal about who we really are?” And the answers range from *Star Trek*’s hopeful humanism to *Snow Crash*’s corporate nightmare, with everything in between. What fascinates me is how the same technological possibilities can support completely different visions depending on what the author thinks about human nature.
After spending my career in actual aerospace engineering, I’ve seen both sides of this play out in real time. I’ve worked with brilliant people genuinely trying to expand human knowledge and capability through space exploration. I’ve also sat through meetings where the main concern was whether we could build something cheap enough to undercut the competition, regardless of performance or safety. Same technology, different values driving its development.
This is why I think sci-fi remains essential, even when—especially when—it gets the science wrong. We’re living through the most rapid technological change in human history, and we desperately need stories that help us think through the implications. Not just the technical implications—I can run those calculations myself—but the human ones.
What kind of people do we want to be when we have these new capabilities? How do we maintain human dignity and cooperation in an age of artificial intelligence and genetic modification? How do we prevent technology from amplifying our worst tendencies while still capturing its benefits?
Sci-fi gives us a way to explore these questions safely, in imagination, before we have to face them for real. The stories that stick with me aren’t the ones that get the technical details right—they’re the ones that grapple seriously with what our choices reveal about our values.
So yeah, I’m still reading sci-fi at 68, still getting irritated when writers ignore basic physics, still arguing with my wife about whether this is a reasonable way to spend retirement. But after watching real technology development for forty years, I’m convinced these imaginative exercises aren’t just entertainment—they’re practice for the most important decisions we’ll face as a species.
The future’s coming whether we think about it or not. Sci-fi just gives us a chance to think about it first.