Note: this blog post is available in PDF form.
What will the future look like? How would our lifestyles change with time, discoveries, and inventions? Most importantly of all, will we be able to design intelligence on par with our own? Will we be able to craft artificial creatures that are indistinguishable from actual humans? Will we be able to create our equals? What then?
These questions have haunted philosophers, thinkers, sci-fi writers and aficionados, students, their professors, and everyday wonderers for centuries. Let’s get this one out of the way quickly – of course, there are no answers. Everything that follows is merely speculation, but speculation that is exciting and somewhat promising of an actual realization.
Three films – “Frankenstein” (1931), “Blade Runner” (1982), “A.I.: Artificial Intelligence” (2001) – will be analyzed and compared regarding how they treat the issue of artificial lifeforms. The discussion will expand beyond, however, exploring the technological sophistication that we enjoy today, and the alternative futures that may exist tomorrow. It will attempt to solve some of the aforementioned questions, provoke thought, and inspire discussion.
“You have created a monster, and it will destroy you!”
– Doctor Waldman, “Frankenstein”
The cult classic “Blade Runner” (1982) by Ridley Scott paints a future world where humanoid robots, or ‘replicants,’ have reached such technical sophistication that it takes extensive testing to distinguish robot from human being. They were typically used for tasks in off-world colonies, primarily as slave labor. However, after some bloody mutinies, replicants were deemed illegal and destined to be terminated.
In “Blade Runner,” six replicants manage to escape capture and land on Earth. It is here that we are introduced to Rick Deckard (Harrison Ford), a retired member of a special branch of police known as Blade Runners, who are tasked with tracking down and ‘retiring’ replicants. Deckard is called back to the force to track down this new threat to human security. As the movie develops, the goal of the rogue replicants becomes clear – they seek to meet their maker, the head of the Tyrell Corporation.
Steven Spielberg’s “A.I.: Artificial Intelligence” (2001) portrays the dramatic story of the first humanoid robot, a “mecha,” that was designed to experience love. It is a robot like no other before it – it understands fear, it is driven by passion, and it dreams. David, the child robot, is a near-perfect emulation of a true human.
However, with this brilliant feat of human engineering, a number of unforeseen problems crop up – problems that were believed to afflict only “orga,” live human beings. David, despite experiencing emotions close to those of humans, does not manage to fit in. He is rejected by the boy he was meant to replace. He is alienated by the father who took him in. He is abandoned by the mother who loved him. David is left alone, and he, with the designed mental state of a 10-year-old, must learn to survive by himself. He runs into many adventures, but his ultimate goal is more human than anything: to be loved.
Both “Blade Runner” and “A.I.” are feats of modern fantasy and computer effects. But another, much older cultural heritage spins a surprisingly similar tale. James Whale’s “Frankenstein” (1931) tells of an incredible scientist constructing an artificial being that is incompatible with human society. Sound familiar?
Even in the early 30s, people held many of the same hopes and fears as we do now. Frankenstein is constructed as a masterpiece, as a display of the might and wit of man. And yet he is rejected, considered a monster. In retaliation, for lack of better options, he kills and roams the countryside. Eventually, once the danger is clear, the men of the town gather together, hunt down, and kill the brute. Once more, man brings an end to his own creation.
PORTRAYAL OF ARTIFICIAL BEINGS
“Look! It’s moving. It’s alive. It’s alive. IT’S ALIVE!”
– Henry Frankenstein, “Frankenstein”
In most – practically, all – cases of science fiction, there is a conflict between the old nature of man and a new technology, creation, or alien. Since this feature is so universal, it can at least be partly assumed that there is some truth behind it. People change slowly. We have evolved over millennia. Technologies evolve over decades, sometimes barely years. We simply cannot keep up. So a backlash occurs.
“A.I.” demonstrates this without any subtlety. The “Flesh Fair” that David is thrown into is, as its spokesperson declares, a “celebration of humanity.” In fact, it is nothing more than a merciless slaughter of robots, but the words certainly add an ethereal feel to the scene. It is a backlash to the proliferation of a new technological revolution. A new breed of Luddites, only much more widespread. It seems a bit silly.
But the phenomenon can be seen again and again. In “A.I.” it is because the humans fear change. In “Blade Runner,” it is because they fear violence. In “Frankenstein,” they fear for their peace of mind and their lives. In the U.S., the government fears for confidentiality, and blocks Wikileaks. In Switzerland, authorities fear for privacy and sue Google in response to its Street View photography. Across the world, be it fantasy or reality, there is an innate tendency for backlash against new technologies.
That is one of the things that makes the films similar. What makes them different is the aspect of artificial life that each chooses to focus on. In “A.I.,” the key drama is that David, the main character and an android, is the first robot to love. The movie imagines a world that has reached such technological breakthroughs that the final frontier remaining is recreating human emotion, the most difficult task of all. That task seems accomplished with the creation of David – after an imprinting procedure, David will love a person forever. This eventually also becomes the downfall of the experiment – a robot that loves, or so “A.I.” suggests, cannot reconcile itself with being a robot.
In “Frankenstein,” the vision is much less ambitious; after all, it was set almost 70 years before the other two movies. Thus, the technology to create the actual artificial creature is not taken for granted, but is the focus itself. The whole process outlines the drama of the film – the collection of dead corpses, the brain of an unstable criminal, and the invigorating lightning storm. The emotions of the created “monster” are almost an afterthought. Perhaps that is also why the experiment goes so haywire.
In “Blade Runner,” the situation is similar to that in “A.I.” The technology to design and craft robots that in all shape and form resemble humans exists. So does the development of feelings. Which is a problem; after all, slaves are easier to deal with if they express no emotion. In the fictional future of “Blade Runner,” it takes approximately four years for the replicants to develop genuine human-like feelings. The Tyrell Corporation had an easy solution: they made the lifespan of the robots four years. Simple as that.
However, their estimates turn out to have been inaccurate. The newest breed of replicants – the Nexus 6 – turn out to be intelligent enough to develop emotions much earlier than anticipated. They undertake mutiny. They develop relationships. And they are driven by the most human instinct of all – self-preservation. The rogue robots in “Blade Runner” have one goal: to discover a way to extend their four-year lifespan.
“A.I.” is notable for how it presents artificial life. The key point is that while “Frankenstein” and “Blade Runner” imagine advances in bioengineering, “A.I.” imagines breakthroughs in mechanics. Its artificial lifeforms are truly mechanical robots, unlike their organic counterparts in the other two movies. Naturally, this has both advantages and disadvantages. Being mechanical means that the robots are much more durable, less needy, and easier to control. It also means that there is an unsurpassable physical barrier separating the mechanical robots, or “mecha,” from the organic humans, or “orga.” The boundary is that much clearer, which makes it far easier to use the mecha as scapegoats, as in the case of the “Flesh Fair.”
ISSUES FACED BY ARTIFICIAL BEINGS
“I’m not in the business… I *am* the business.”
– Rachael, “Blade Runner”
Now, there is no doubt that the attendants of the “Flesh Fair” would have responded just the same to Frankenstein or the replicants, even if the artificial life-forms were organic. In the case of “A.I.,” it is simply an additional point of conflict. In all three films, there are many issues that the new technological marvels and the people around them run into.
One of the major difficulties is that of acceptance. In “Blade Runner,” the replicants are used as slaves. They have inferior rights, inferior benefits, and are subject to more oppressive laws than humans. However, as technology progresses, they reach, as narrated at the beginning of the movie, a level where their intelligence is on par with, and their strength and agility above, those of humans. Thus the replicants are in many ways superior, yet still suppressed. It is not difficult to guess why mutinies and other acts of dissension occurred. When the superiors – the humans – realize that their power has been undermined, they begin to fear and despise their adversary.
In “A.I.,” the mecha are used in a similar manner: primarily for service tasks. They act as servants, workers, toys, assistants and lovers. Yet despite their usefulness, they are despised by many extremist groups. They are attacked and destroyed in an act of organized violence under the ruse that they are somehow damaging the essence of humanity, infringing upon the “true, organic human society.” In fact, it is because the humans, similarly to the situation in “Blade Runner,” are afraid to lose control of their creations. They destroy the mecha to “maintain numerical superiority.” Having little or no rights, and lacking higher-level emotional capabilities, the mecha do nothing but accept their fate.
Partly, this violence is the result of stark differences. A mechanical robot has some inherent limitations – it cannot eat, for example. Humans, especially kids, pick up on this immediately, and tend to exploit the difference to their advantage, bullying or manipulating the mecha. This damages the social ties, developing a tense relationship between machine and man.
But the prime factor is the fear of the unknown – humans can never be sure what mechanical whizzing and thinking goes on behind a composed mecha face. It may be friendly, or it may be plotting to kill you. You would never know. For a human, such extremes are difficult to conceal. For a mechanical life-form, it is absolutely feasible. As long as that is the case, humans will find it difficult to have humanoid robots as cohabitants.
“Frankenstein” deals with an issue of much smaller scale: it is a single incident in a remote little town. Nonetheless, “the monster” experiences many of the same attitudes. He is feared. He is mistreated. He is locked up and forcefully suppressed. Having little other experience, he responds in a similar fashion, driven into insanity like a wild animal in captivity. It would be interesting to consider how his actions might have differed had he been treated compassionately and as an equal from the very beginning. Realistically, though, that would be extremely unlikely; the “monster” would be treated just as he was in the movie. Human actions are quite simple to predict.
SOLUTIONS AND FAILURES
“Please make me a real boy?”
– David, “A.I.: Artificial Intelligence”
All three movies present a somewhat bleak perspective of the future. There is much fear and uncertainty. But there is also a lot of fascination and awe and wonder, and the films attempt to provide plausible solutions just as much as plausible issues. Some solutions are successful. Some, partly. Others fail altogether. In either case, they challenge the viewer, as if saying: “Hey, don’t give up hope! There must be a way…”
In “A.I.,” the primary conflict is that humans cannot manage to accept the mecha as equals – they are simply heartless hunks of metal that are difficult to understand. So the scientists devise a solution: grant mecha the ability to experience true human feelings. Most importantly, design them so that they could love.
This is a valid solution, albeit incredibly difficult to execute. The scientists in “A.I.” pull it off. It is even successful. Not only does David exhibit all the signs of love towards his foster mother, Monica, but he experiences emotions vivid enough to spark empathy within people. The crowd revolts when David is about to be killed at the “Flesh Fair.” His mother, too, begins to love him just like a real kid.
However, the scientists fail to predict the true implications of their creation. Towards the end of the film, David meets his maker, and the laboratory where there are hundreds of Davids just like him. Initially, David is violent. Then, he is baffled as he begins to understand. Finally, he jumps off a building into the sea below. For ultimately, he is only a “mecha.” A disposable, reprogrammable, mass-produced machine. This is something that love, which presumes uniqueness, cannot reconcile with. This is a fact that could drive a robot to attempt suicide. It is something that has never and will never mesh with the human experience, and it would thus prevent the mass construction of real human-like robots, or risk leading to either depression or passionate wrath on the part of the creations. It is something that humanity has never needed to consider before, but will if we ever reach such a point in technology and time.
In “Blade Runner,” the creators have devised a solution to many of the problems faced in “A.I.” The human-like replicants, being organic, develop feelings and emotions naturally after roughly four years in the wild. So the bioengineers devise a simple solution: the replicants are designed to have a lifespan of only four years. As long as they are emotionless, they can be, and in “Blade Runner” are, used as slaves in off-world colonies.
However, here the creators fail again: they underestimate the human-like power of self-preservation present in these most advanced creations. The replicants are driven to discover a way to extend their short lifespans, against the wishes of the humans. Naturally, this direct conflict erupts in violent confrontations, where human physical agility can rarely match that of the replicants.
Because the replicants are so advanced, they also pose a slew of ethical issues: how do the rights of a robot compare to those of a human being? Should the law be applied differently for different subjects? Is it murder to have the replicants only live four years? The film leaves much of this in the air.
Frankenstein deals with his monster in a similar fashion: he locks him up and suppresses him, failing to understand that such initial actions may be what caused the violence in the first place. Ultimately, he fails to restrain the monster, who kills several people and roams the countryside around the town. The solution that the residents resort to is simple: death. This solves the problem, but it dismisses the possibility of the successful creation of artificial life.
RELATION TO THE ENLIGHTENMENT
“‘More human than human’ is our motto.”
– Tyrell, “Blade Runner”
These three movies do not present some new way of thinking. We humans have been imagining the day our creations best us for centuries. The situation is not unlike that during the Enlightenment. Back then, the world had just undergone monumental shifts in knowledge and critical thinking. Humanity was shocked out of its passivity, and then presented with a blank slate. This is where the Enlightenment thinkers came in: their task was to define what the society of the future could feasibly look like, and what it should look like.
One of the greatest questions of the time was whether, given all the possibilities of science and reason, we should continue full steam with technological advancement. The answer may initially seem obvious, but progress is not always the way forward. Just because we can do something, should we? Just because we can make a Frankenstein, does that mean that it is the way forward? If we can make an electronic praying machine, maybe we should consider whether it is silly or not before going on to build it. Building an atomic weapon is an immense achievement of science and reason, but it can bring far more death and despair than progress. In short: sometimes restraint and boundaries are just as important as scientific and reasoning brilliance. The films discussed show the result of not considering that possibility.
In most cases in the past, however, progress in science and reason took precedence over regulation and restraint. Humans are curious beings, and it is nigh impossible to stop someone from innovating, creating, and discovering whatever is possible. This led to the innovations of the Industrial Revolution that swept through most of the developed world, and has led to many of the technological luxuries that we enjoy today – smartphones, laptops, industrial robots, telephones and TVs and so much more.
The Enlightenment thinkers themselves were not much different from modern sci-fi writers, nor the screenwriters or directors behind all sci-fi movies. They were, and are, fantasizing. Call it philosophizing. Either way, their thoughts are merely ungrounded opinions. But they are also opinions that try to predict problems. They are opinions that may open the minds of many, and lead others to reconsider how they view the world. They are opinions that could, potentially, inspire someone to devise brilliant solutions to existing and upcoming problems.
Just like in the Enlightenment, the late 20th and early 21st centuries have brought forth huge overhauls in society and the collective mentality. Great leaps in science have forced people to reconsider what is possible. Rampant technological advancement has redefined the way we interact, communicate, travel, work, learn, and play. Things that would have seemed impossible only decades ago are now universally present in our everyday lives. This inspires people. This scares people. In either case, we are once more at a blank slate, a stimulus for free, radical, unrestrained thought. It is a challenge to thinkers and philosophers and writers and tinkerers alike to re-imagine the future. “A.I.,” “Blade Runner” and “Frankenstein” are just a few results of that.
“It’s a moral question, isn’t it?”
– Female scientist, “A.I.: Artificial Intelligence”
Although many of the ideas presented in these three movies are quite far-fetched (more hopeful wishing than feasible prediction), we may be closer to such a future than we think. As such, it is necessary to begin considering some of the possible ethical issues before they go mainstream.
There are several key questions of ethics when it comes to artificial life. The first is: should robots be treated equal to humans? In the interest of humanity, probably not. Isaac Asimov expanded upon this in one of the most influential works of science fiction, “I, Robot,” where he discusses the “Three Laws of Robotics.” They are as follows:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
This very succinctly sets up a hierarchy. At the top – human safety, then human power, and finally – robot safety. Similar laws may apply just as well to replicants or stitched corpses. The only difficulty would be to devise a way to instill such machinelike rules into an organic lifeform. Suppressing emotions may not be the best way to go about it if ethics is the goal.
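The hierarchy can be made concrete with a toy sketch. This is purely illustrative – the `Action` fields below are invented for the example, not drawn from Asimov or any real robotics system – but it shows how the three laws reduce to an ordered priority check:

```python
# A toy sketch of Asimov's Three Laws as an ordered priority check.
# The Action fields (harms_human, ordered_by_human, endangers_self)
# are hypothetical labels invented for this illustration.

from dataclasses import dataclass


@dataclass
class Action:
    harms_human: bool       # would carrying this out harm a human?
    ordered_by_human: bool  # was this action ordered by a human?
    endangers_self: bool    # would this action endanger the robot?


def permitted(action: Action) -> bool:
    """Evaluate an action against the Three Laws, highest priority first."""
    if action.harms_human:        # First Law: human safety overrides everything
        return False
    if action.ordered_by_human:   # Second Law: obey humans...
        return True               # ...even at a cost to the robot itself
    if action.endangers_self:     # Third Law: otherwise, self-preserve
        return False
    return True


# A human order overrides the robot's self-preservation:
print(permitted(Action(harms_human=False, ordered_by_human=True, endangers_self=True)))   # True
# But no order can make the robot harm a human:
print(permitted(Action(harms_human=True, ordered_by_human=True, endangers_self=False)))   # False
```

The point of the exercise is how little logic the hierarchy requires in a machine – and how hard the same ordering would be to guarantee in an organic creature that feels.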
Yet if emotion-like responses are not suppressed in artificial beings, the lines begin to blur. If robots or bioengineered corpses gain the ability to hate, fear, love; what makes them different from us? Ultimately, we are the same: higher level creatures consisting of simple parts and structures adhering together laws of physics, nature, and the universe. At that point, it would almost be necessary to grant them an equal social standing, at least officially.
Laws are one thing – personal attachment is another. What sort of responsibility does a person hold towards an artificial lifeform that has fallen in love with him or her? Can it be left as open-ended as in Asimov’s laws? What would happen in situations such as that in “A.I.,” where a loving android is simply abandoned in the middle of nowhere? Most likely, not much. As long as humans will not part with the idea that they are superior, and that they are the creators of artificial life, they will continue to exercise a dominating power over their creations. Perhaps a wake-up call will arrive eventually. It may come from human activists, or it may come from revolution. It is up to us to decide which.
However we choose to deal with the ethical questions, development and research into artificial life should continue. It may be the answer to servicing our ever-increasing global population. It may help provide breakthroughs in either technology or biology, which may eventually lead to better medical care for ourselves. Ultimately, the quest to create life will most likely teach us more about ourselves than anything else.
“I love you, Mommy. I hope you never die. Never.”
– David, “A.I.: Artificial Intelligence”
Artificial life is no longer confined to sci-fi novels and movies – it already exists everywhere around us. Granted, it is nowhere near the level of sophistication exhibited by the likes of David or the replicants, but some recent developments are worthy of mention.
As recently as May 2010, one of the most prominent figures in biology – Craig Venter – announced that his laboratory had created the first true instance of artificial life. They had built a synthetic cell, “Synthia,” that not only survives and thrives, but reproduces and grows its own colonies. Most amazingly of all, its genetic code is completely synthetic, constructed from scratch. This may be just one of the preliminary steps before we begin to make breakthroughs in bioengineering in earnest. It is probable that we will soon see the first examples of multi-cellular synthetic life. From there, it is only a small step to begin making progress towards artificial lifeforms not unlike animals, and then, with some luck, the replicants from “Blade Runner.”
Robotics has also received a great amount of attention lately, and in a variety of approaches. Some, such as Honda’s ASIMO, aim for a functional humanoid form. Others, such as the Japanese-based Kokoro Co., have aimed for humanoid realism with their DER 2 android. Some do not attempt to resemble humans at all, such as the multitudes of industrial robots on the market, or MIT’s Seaswarm initiative. In every case, strides are being made, both in terms of mobility (RoboCup) and sociability (Hanson Robotics).
Then again, we may not necessarily be moving towards super-advanced androids or fantastical bioengineering, but a different stage of singularity altogether. Scott Adams (creator of the Dilbert comic strip) proposed an insightful idea on his blog several years ago. “If a cyborg can remove its digital eye and leave it on a shelf as a surveillance device […] then your cellphone qualifies as part of your body,” he wrote.
Could digital devices such as smartphones be considered extensions of our brains, or “exobrains?” Could that mean that we are, at least partly, becoming cyborgs, a blend of machine and man? Perhaps, to someone from the past, we would have seemed just that – cyborgs so dependent on technology and devices that we could hardly survive without them.
It is almost ironic, then, that we fantasize so much about robots becoming human, when the whole process may be going the other way round. It is humans that are becoming more like robots. Nonetheless, we are approaching the same singularity, where there is (hopefully) a peaceful coexistence between man and machine. Whether we can restrain our curiosity and ingenuity to prevent the worst fears about artificial life from coming true, or deal with them as they come, cannot be predicted. Yet judging from the past, it is quite likely that eventually we will be faced with, if not these, then other problems. We can only hope that humans continue doing what they do best – survive, innovate, and thrive. Everything else follows.
“All those moments will be lost in time, like tears in the rain.”
– Batty, “Blade Runner”
The inevitable conclusion is that there is no conclusion after all. Everything to do with artificial intelligence is, after all, a prediction. These ideas have shaped the human mind of the past and present, but they will shape the human way of life in the future.
“Frankenstein,” “Blade Runner” and “A.I.: Artificial Intelligence” are all brilliant works of imagination. They have tackled ethical issues and presented believable accounts of how artificial lifeforms might interact with humans. But that is only the beginning. In every scientific exploration, what comes first is the hypothesis. Then it comes time to actually test it.
Science-fiction and philosophy are just that – a hypothesis. Just like the Enlightenment thinkers hypothesized about the future of human society in the 18th century, so we today hypothesize about the future of human society in the 21st century. The Enlightenment was followed by revolutions, war, the rise and fall of nations. What will follow after the present is only just beginning to brew.