
Nerdy Academia: Our Best And ONLY Friend (Part 2)

The goal of Nerdy Academia is to encourage, and provide examples of, close readings and academic-style analyses of nerdy works. There are a billion essays about War And Peace out there but scant few about fun things like nerdy fiction. I want to examine science fiction, fantasy, comics, video games, etc. just like I would any other source of literature. I also want to deliver the pieces – I guess you could call them essays, although that sounds too stuffy – in short, easily consumable posts. Nerdy Academia takes larger academic readings of both classic and pop nerdy works and breaks them up into smaller chunks for quick reading.

Today we’ll continue looking at Do Androids Dream of Electric Sheep? by Philip K. Dick. (Part 1, Part 3)


Do Androids Dream of Electric Sheep? was first published in 1968 and is the source material for the film Blade Runner.

In the last Nerdy Academia we started to look at how technology creates a perpetual need for itself within the universe of Do Androids Dream of Electric Sheep? We looked at how humanity dealt with a rapidly shrinking Earth population as people emigrated to Mars and how the remaining people used a piece of tech called an “empathy box” to share emotions. However, we were left with the problem “that after humanity’s emotion is homogenized, there does not exist any clearly identifiable authentic emotion within an individual.” To counter this problem, the Penfield Mood Organ steps in. By dialing a setting on the Mood Organ, any one of a variety of different emotions can be felt by any human. The device is perfectly capable of not just simulating an emotion but actually stimulating one. The resulting feeling is a legitimate and actual emotion stemming from the stimulation of the cerebral cortex (Dick 6). This makes emotions programmable, which is problematic because it renders human emotions indistinguishable from the simulated emotions of androids.

So what’s left to distinguish between synthetic life and humanity? At this point, humanity returns to the concept of empathy, which “remains unavailable to androids” (Dick 185). To distinguish people from “people,” humanity creates the Voigt-Kampff test. In an ironic move, the need to discern between humanity and technology is relegated to technological means. With the Voigt-Kampff test, technology is used to quantify humanity. With a few exceptions, the Voigt-Kampff test can draw the line between an android and a human being. There are some humans who register falsely on the test, but they exist as a rarity, a “small class of human beings” (Dick 38) within the system, as mental deviations from the defined normativity of human consciousness. They are real humans who do not register within the technological definition of humanity. Looking past the ableist language used by Dick, we can see the importance of this problem. Even the characters can see its importance and comment regularly on how it interferes with the desire to categorize humanity.

The problems seen by humanity in Do Androids Dream and the solutions that are developed to deal with these problems present a critique of our understanding of humanism at the definable level. Given the situation of a quickly diminishing population, especially a population that is told again and again that they are the dregs of humanity (And why would they not be? If they were valuable they would be on Mars), humankind opts for a system that can connect people but also produces a flattening of emotions. The cost of connecting with other human beings becomes the loss of personal emotions. In solving that problem, humans become programmable. Following that, humans need to be quantifiable in order to separate themselves from the robotic Other.

So what happens when humanity becomes homogenized, programmed, and quantified? That situation sounds like the human population is becoming robotic, automata. Everything of worth becomes either a product of the shared human existence or a product of the programmable emotions of the Mood Organ. The dream of humanity is revealed to be a sound stage, a set-piece akin to the reality behind Mercerism. Humanity itself becomes the “cheap, Hollywood, commonplace sound stage which vanished into kipple years ago” (Dick 209). This automation makes the vestiges of humanity uncomfortable because it puts them in direct existential competition with the androids. We can start to see why the humans fear the androids so much. If humanity is reduced to an automated existence which relies on sameness and programmed emotions, the human experience becomes an automated task of self-preservation: stay alive and do it efficiently. That’s what robots do. The threat of encroaching android autonomy comes from a fear that the robots accomplish human existence better than humans do.

Check out part 3 for the finale: explanations of human essentialism in a robotic existence.
