59: The Illusion of Choice
I've been thinking about the ending of Isaac Asimov's I, Robot for a long time now. If you haven't read the book, it's essentially an anthology of encounters between humans and a nascent robot race, loosely linked by the presence of a robopsychologist named Susan Calvin, whose research is fueled by the desire to make robots more like human beings. The book gathers Calvin's recollections into what reads like a bundle of newspaper articles following the arc of robotic development through human history and into the very near future. The final story, "The Evitable Conflict," haunts me; it was written in 1950 and takes place in 2052, but I see shadows of it in the way we approach art and commerce today.
The story isn't long. Humanity has ceded its economic decisions to computers, and it's comfortable doing so because all robots must adhere to Asimov's Three Laws of Robotics. (Law 1: a robot can't harm any human, either through action or through failing to act; Law 2: a robot must obey humans, unless that conflicts with Law 1; Law 3: a robot must protect its own existence, unless that conflicts with the first two laws.) Once they hold the power of economic choice, the computers begin to interpret Law 1 as "robots can't harm or allow harm to humanity," rather than to any individual human. The machines take over, becoming a system that shapes human evolution but allows harm to come to a small portion of the population "for the greater good." Our future is removed from our hands, and we can't take it back; the machines control it.
Asimov's humanity gives up without a fight; the robots are better at weighing the costs of humanity's actions than humanity itself is. Everybody wins, except the unlucky few whom the computers decide to sacrifice. The humans of I, Robot blithely cede their power in the name of progress and collective prosperity.
That ending is awful and yet optimistic about the future. We're living through a technological inflection point of our own, a few decades before the humans of I, Robot face theirs, and ours looks a lot bleaker. I don't want to be too pessimistic, but in my darkest moments, I see Google Maps rerouting traffic over dangerous mountain passes and through local streets, and I think about Asimov's computers. The algorithm funnels us posts and news and entertainment tailored to what we tell it about ourselves. We treat ChatGPT like both a toy and a viable work tool, when it shouldn't be used as either.
What really scares me about these developments is that, unlike Asimov's robots, ours can't actually think for themselves. ChatGPT is, functionally, a very sophisticated autocomplete: a statistical model that predicts the next word from patterns in its training data, without understanding the human language it's spitting out. It just tells you what you already want to know, without factoring in the meaning of what it's saying. We're accepting the choices the robots make for us, but our version of the future is even darker than Asimov's, because the robots we use don't have the Three Laws built into them.
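If "text prediction" sounds abstract, here's a deliberately toy sketch of the core idea in Python. Every word and probability below is invented for illustration; a real model learns billions of statistics from text rather than using a little lookup table, but the loop is the same: look at the last word, pick a statistically plausible next one, repeat.

```python
import random

# A toy "language model": for each word, the odds of the words that
# tend to follow it. These numbers are made up for illustration.
BIGRAMS = {
    "robots": {"must": 0.6, "can": 0.4},
    "must": {"obey": 0.7, "protect": 0.3},
    "can": {"harm": 0.5, "think": 0.5},
    "obey": {"humans": 1.0},
    "protect": {"humans": 1.0},
    "harm": {"humans": 1.0},
    "think": {"freely": 1.0},
}

def generate(start, length=5):
    """Extend a prompt by repeatedly sampling a plausible next word."""
    words = [start]
    for _ in range(length):
        options = BIGRAMS.get(words[-1])
        if not options:
            break  # no statistics for this word, so the "model" falls silent
        next_word = random.choices(list(options), weights=list(options.values()))[0]
        words.append(next_word)
    return " ".join(words)

print(generate("robots"))  # e.g. "robots must obey humans"
```

Notice what the sketch never does: it never checks whether "robots must obey humans" is true, kind, or safe. It only checks that the words tend to follow one another. That gap between prediction and understanding is the whole problem.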
Thank you for reading. If you have any thoughts, or just want to drop me a line, feel free to get in touch. This newsletter is free, but if you'd like to support my work, you can pay for a subscription, which helps me keep the pilot light on.
What I talked about:
For Seeing & Believing podcast, Kevin and I reviewed Pixar's Elemental (which we felt mixed on) and Kelly Fremon Craig's 2016 directorial debut The Edge of Seventeen (which we both liked quite a lot).
What I watched:
I watched The NeverEnding Story for the Eye of the Duck podcast. They're doing a series on dark '80s fantasy movies this summer, which I'm excited about because, with a few exceptions, I haven't seen the movies they've programmed. The NeverEnding Story was one of them, and it was great to talk to Adam and Dom about it.
What I'm reading:
I got a hammock for the backyard last weekend. The Infinite Jest Read of 2023 has only gotten more enjoyable.