Nerdy Academia: Our Best And ONLY Friend (Part 3)
The goal of Nerdy Academia is to encourage, and provide examples of, close readings and academic-style analyses of nerdy works. There are a billion essays about War And Peace out there but scant few about fun things like nerdy fiction. I want to examine science fiction, fantasy, comics, video games, etc. just like I would any other source of literature. I also want to deliver the – I guess you could call them essays, although that sounds too stuffy – in short, easily consumable posts. Nerdy Academia takes larger academic readings of both classic and pop nerdy works and breaks them up into smaller chunks for quick reading.
Last Nerdy Academia we looked at why humanity finds androids threatening – because they have the potential to do “human” better than actual humans. Today we’ll finish up by exploring what happens when we mess with the line between human and android.
If the line between human and android is lost, then the existence of something like a definable humanism is lost with it. If no traits mark a human as special, there are no larger characteristics by which to measure humanity. Here the feedback loop already begins to appear: the Voigt-Kampff test enters again as a means of quantifying not only what is human, but what is not human. The test serves as a barrier that keeps the android population out of human society based on a set of seemingly arbitrary attributes. The tenets of humanity have to be invented in order to exist. This excludes any notion that human beings are inherently special, because the method of evaluating that specialness is itself a human invention.
If there is no inherent human specialness, then anything resembling a universal human experience also has to be invented. Once the human experience has been removed, an existential need arises for a replacement to fill the vacuum. This is where the recursive need for the empathy box shows up again. Despite its superficial nature and its clear artificiality, it serves a very definable purpose. Its fakeness doesn’t cheapen its actual effect unless all of humanity were to allow that to happen – and that will never occur so long as the need for human interaction remains ever present on the planet. Isidore reflects on this, thinking, “You can’t go back… You can’t go from people to nonpeople” (Dick 204). While it’s ironic that this occurs to him while reflecting on his time with androids, the quote pertains to the entirety of Earth’s population. There is no return from sharing one another through the empathy boxes. There will be no return from Mercerism.
If there is no return from Mercerism, there is no reversal of the homogenization of emotion (see Part 1). In order to retain the diverse spectrum of human emotion within an eternal unification of humanity, the recursion then cements the importance of the mood organ. For as long as humanity both gains and loses feeling within Mercerism, there is no such thing as a naturally occurring human emotion. If Deckard and his wife share their emotions over purchasing a goat, as discussed earlier in this essay, they run the risk of losing those emotions. There is some hinting that if one concentrates hard enough, they can retain the strength of an emotion, but generally it’s suggested that the entire emotion is averaged out or lost entirely (Dick 174). To recover the elation of owning an animal, Deckard and his wife would need to dial in to their Penfields. If there’s a setting for a well-disposition towards the world (Dick 3), for rage stimulant or suppression (Dick 4), and even for the desire to dial a random emotion (Dick 6), then it’s logical to assume there would be a setting for the extreme joy that purchasing an animal would bring.

Dialing it in, however, runs the risk of an awareness of the mood organ’s effect, something that fills Iran with dread. The unease that comes with recognizing the manipulative effects of the mood organ stems from the human brain’s awareness of its encroaching similarity to the android. In a culture that demonizes the uncanny robots because they represent a threat to an essential purpose, any obvious similarity must either be rejected (as Iran initially rejects the mood organ) or distracted from. The need to quantify humanity allows for a diversion from the crisis of programmable emotions because it suggests that there is an inherent difference between the Us of humanity and the Them of the android.
This, of course, is as false as Mercerism, but like Mercerism, its effect is independent of whether or not it’s legitimate. As long as humans are convinced that there’s a difference between them and the androids, the justified automation can continue.
Originally conceived to combat the problems of a rapidly depleting population, the technological replacements for humanity’s place in existence cause problems of their own that make their purpose self-fulfilling. The importance of these tools and their effects comes not from their express outward workings, but from the holes their absence would leave in society at any stage of the recursion. The walls between human and android would fall apart without the invented barriers and manipulators put into place by humans seeking to replace the humanity lost to a society devolving into automation. The various benefits and crises brought about by the empathy box, the Penfield Mood Organ, and the Voigt-Kampff test create a self-fulfilling loop that pushes humanity closer to the androids it fears than it helps them hold onto their humanity.