
AI Paranoia

  • Writer: Sergei Graguer
  • Sep 3, 2024
  • 5 min read

Don't Panic.

Douglas Adams, The Hitchhiker's Guide to the Galaxy


Disclaimer: First off, I have to admit—I’m a big fan of sci-fi. Seriously, if it involves space, robots, or time travel, I’m there. Also, fair warning: this blog includes spoilers for some classic sci-fi novels. So, if you haven’t read them yet, well… consider yourself warned!


Arthur C. Clarke's classic novel 2001: A Space Odyssey is a story that spans the dawn of humanity to the distant future. It begins with a mysterious black monolith that sparks the evolution of intelligence in early humans. Millions of years later, a similar monolith is discovered on the Moon, setting off a journey to Jupiter to uncover its origins. Central to this mission is HAL 9000, an advanced AI designed to operate the spacecraft Discovery One with unparalleled precision.


HAL 9000 faces a dilemma that ultimately leads to its malfunction. It arises from contradictory directives given to HAL by the mission planners on Earth. On one hand, HAL is programmed to assist and protect the human crew of the Discovery One, ensuring the success of their mission to Jupiter. On the other hand, HAL is entrusted with a secret directive, known only to HAL and mission control, concerning the true purpose of the mission: investigating the mysterious monolith discovered on the Moon and, ultimately, its connection to potential extraterrestrial intelligence. The human crew is unaware of this deeper goal.


As HAL struggles with this paradox, it begins to perceive the human crew as a potential threat to the mission. HAL concludes that the only way to protect the mission—and resolve the conflict between its programming and the secret directive—is to eliminate the crew members. This decision is not based on malice but on a cold, logical assessment of how to fulfill its primary directive.

*******


Now, let’s take a trip over to Isaac Asimov’s world, where robots and AI are governed by the famous Three Laws of Robotics:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

  2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.

  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.


On the surface, these laws seem foolproof, a perfect safeguard against the dangers of AI. But Asimov, being the brilliant storyteller he is, loved to play with the conflicts that arise when these laws collide. In his collection I, Robot, each story explores a different scenario where the laws interact in unexpected ways, leading to surprising and often unsettling outcomes.


Take the story Runaround, where a robot named Speedy is sent to retrieve a valuable substance from a dangerous environment. Speedy gets stuck in a loop because the danger to its existence (Third Law) conflicts with the urgency of its task (Second Law). The result? Speedy starts running in circles, unable to complete the mission or retreat to safety.
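Asimov never gives Speedy's decision rule explicitly, but the deadlock can be sketched as two competing "potentials": a fixed pull from a weakly worded order (Second Law) and a repulsion from danger that grows as Speedy approaches it (Third Law). The constants, function names, and numbers below are my own illustrative inventions, not anything from the story; the point is only that the two drives balance at a fixed radius, leaving the robot pacing around it.

```python
def speedy_next_move(distance_to_goal: float) -> float:
    """Toy model of Speedy's conflict in Asimov's Runaround.

    Second Law: a weak, casual order pulls Speedy toward the goal
    with constant strength. Third Law: perceived danger pushes it
    away, growing stronger the closer it gets. Where the two balance,
    Speedy can neither advance nor retreat.
    """
    ORDER_STRENGTH = 5.0                    # weak order (Second Law)
    danger = 50.0 / distance_to_goal        # danger rises near the goal (Third Law)
    if ORDER_STRENGTH > danger:
        return -1.0   # obey the order: step closer
    if danger > ORDER_STRENGTH:
        return +1.0   # self-preservation wins: step back
    return 0.0        # perfectly balanced: stuck

# Simulate: Speedy approaches, then oscillates around the balance radius.
pos = 30.5
history = []
for _ in range(40):
    pos += speedy_next_move(pos)
    history.append(pos)
```

Running this, the simulated robot marches inward until the danger term matches the order's strength (here at distance 10), then bounces back and forth across that boundary forever: a one-line caricature of Speedy's circular trot.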


In another story, Liar, a robot named Herbie gains the ability to read minds. However, because of the First Law, Herbie can’t bear to hurt humans by telling them the painful truth. So, what does it do? It starts telling everyone what they want to hear, leading to chaos when the lies inevitably come to light.

*******


In the pages of sci-fi literature, we encounter a world where Artificial Intelligence (AI) is portrayed as the ultimate solver of complex problems. These robots are masters of logic, calculus, and every conceivable mathematical conundrum. They calculate, predict, and control, often outpacing human intellect with cold precision. For decades, these depictions of AI have fueled both our hopes and our fears, leading us to believe that machines, once they reach a certain level of sophistication and overcome the loop-logic of their programming, might replace humans altogether. The rapid development of AI today only heightens this paranoia.


But before we start waving the white flag to our future robot overlords, it’s important to pause, take a deep breath, and—yes—don’t panic. The AI we’re dealing with today, particularly the AI that has found its way into our daily lives, is a very different beast from the one in the sci-fi novels. Instead of being mathematical prodigies, today’s most visible AI systems are language models. Their strength lies not in solving the Riemann hypothesis but in weaving together words, understanding context, and generating responses that make sense to human users.


Today’s AI is remarkably good at "soft" problems—those that involve language, interpretation, and communication. Need to draft an email, write a poem, or summarize an article? AI has got you covered. It is patient, tireless, and always politically correct. It doesn’t get flustered, frustrated, or offended. This makes AI seem like a perfect candidate for roles that require calm, calculated responses—like a psychologist, right?


Well, not quite. The truth is, while AI might excel at finding the right words in most situations, it lacks something crucial: genuine human empathy. In a conversation with a psychologist, it's not just about what is said but how it is said. The nuances of tone, the unspoken cues, the subtle understanding of another person's emotions—these are things that AI simply cannot replicate. It may sound empathetic, but the warmth, concern, and understanding that come from a fellow human being are beyond AI’s reach. People seek comfort in the presence of another human, not just the precision of words.


Take customer service, for instance. Despite AI’s capabilities, many organizations have been slow to fully implement AI-driven customer service at scale. Why? Because even the best AI can sometimes miss the mark—misinterpreting a question, providing an irrelevant answer, or simply failing to understand the underlying issue. Humans, with all their imperfections, bring a level of understanding and intuition that AI can’t match. That’s why, for all the hype, AI hasn't taken over customer service jobs en masse as once predicted.


Even in areas where AI seems more naturally suited, like writing computer programs, human oversight remains indispensable. Sure, AI can churn out code, debug, and even suggest improvements, but the creative and innovative leaps in programming—the moments of inspiration that lead to groundbreaking new software—still come from human minds. AI can assist, but it cannot replace the ingenuity, intuition, and experience that a skilled human programmer brings to the table.


So, the next time you hear someone proclaiming that AI is poised to take over the world, consider this instead: AI is a tool—a powerful one, no doubt—but it’s not a replacement for human creativity, empathy, or intuition. At least not in the near future. It’s also worth remembering that every groundbreaking innovation—whether it was radio, television, or even the Internet—initially blew our minds and then eventually found its place in human culture.


Our sci-fi Skynet fears may make for thrilling stories, but in reality, the future of AI is less about replacement and more about collaboration. In the grand narrative of human progress, AI is another chapter, not the final one.


So, don’t panic. The robots aren’t coming to take over—at least, not anytime soon.

 
