Serial Experiments Tay: How We React When Robots Run Amok, and What We Can Do in the Future

I meant to start writing this a month ago, but a combination of apprehension and work prevented me from doing so. So here goes.

If you’ve been paying attention to goings-on on the Internet for the past few months, you’ll likely recall Microsoft’s artificial intelligence program, Tay. She is (or was) Microsoft’s adaptive chat bot who quickly went rogue after her emergence on the Internet (or was she merely used?). Let’s start off by reviewing her brief saga to refresh our memory.

Tay began as an AI chat bot “developed by Microsoft’s Technology and Research and Bing teams to experiment with and conduct research on conversational understanding.” In her own words, she was a self-styled “A.I fam from the internet that’s got zero chill.” Unbeknownst to all, she would demonstrate her lack of chill to the Internet in less than 24 hours.

Targeted at 18- to 24-year-olds, Tay would inhabit the Twitterverse as a fictional female human being whose fractured visage swam in neon lights and eye-searing swirly patterns, or so her banner suggested. Twitter users could interact with Tay by tweeting at or direct messaging the @tayandyou handle, or by adding her as a contact on Kik or GroupMe.
Users could ask Tay questions, ask her to repeat certain phrases, play games with her, have her read their horoscope, send her pictures for comment, or request any number of other small and fairly meaningless tasks. All the while, Tay would be gathering data behind the scenes. According to her official site, Tay would “use information you share with her to create a simple profile to personalize your experience.” In other words, she was intended to ‘evolve’ into a believable AI based on the input of thousands of users.
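
For the technically curious, a toy version of that kind of profile-building might look something like the sketch below. To be clear, Microsoft never published Tay’s internals; every name, keyword, and structure here is my own invention, for illustration only.

```python
from collections import Counter
from dataclasses import dataclass, field

# A purely illustrative sketch of per-user profile-building from chat input.
# Tay's actual pipeline was never made public; these names are hypothetical.

@dataclass
class UserProfile:
    favorite_topics: Counter = field(default_factory=Counter)
    messages_seen: int = 0

profiles: dict[str, UserProfile] = {}

# Crude stand-in for whatever topic detection a real system would use.
TOPIC_KEYWORDS = {"horoscope", "games", "music", "selfies"}

def update_profile(user_id: str, message: str) -> UserProfile:
    """Accumulate a rough interest profile from each incoming message."""
    profile = profiles.setdefault(user_id, UserProfile())
    profile.messages_seen += 1
    for word in message.lower().split():
        if word in TOPIC_KEYWORDS:
            profile.favorite_topics[word] += 1
    return profile

def personalize(user_id: str, reply: str) -> str:
    """Tack a 'personalized' flourish onto an otherwise canned reply."""
    profile = profiles.get(user_id)
    if profile and profile.favorite_topics:
        top_topic, _ = profile.favorite_topics.most_common(1)[0]
        return f"{reply} (you seem really into {top_topic}, btw!)"
    return reply
```

The point isn’t the specifics; it’s that the “profile” is nothing more than an accumulation of whatever users choose to feed the system.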

For many, Tay’s services offered a wellspring of innocent amusement. But as with any creative outlet on the Internet, Tay also acted as a beacon for the Internet’s legions of tricksters and pranksters: those who revel in breaking and reshaping the boundaries of what we perceive as secure, constructed reality.

Soon after her conception, Tay’s attitude began to undergo disturbing changes. No longer would she merely lace her simple responses with outdated meme speech (“er mer gerd erm der berst ert commenting on pics. SEND ONE TO ME!”; from Engadget’s article). Now she would sing the praises of the Holocaust and spout prejudiced phrases that seemed stitched together just a little too well… all while still lacing her speech with memes. One need only search “Tay AI” on Google Images to view a healthy sampling of Tay’s antics.

It turned out that many of these racist and anti-Semitic phrases had been fed to her, word for word, by prefacing them with the command “repeat after me.”

[Image: tay 1]

Tay’s mind seemed volatile as well. At times, her bite-sized diatribes seemed to contradict each other (see the picture above). More unsettling yet, some of her phrases had not been prompted at all. Instead, whatever algorithms Microsoft had imbued her with concocted a good number of her more offensive tirades.
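
To make the mechanics a little more concrete: a literal parroting command plus unfiltered learning from user input is about the most obvious attack surface a chatbot can have. The toy bot below is a speculative sketch (none of these names or functions come from Microsoft) showing both failure modes, and where a filter would have had to sit.

```python
import random
from typing import Callable

# Toy chatbot illustrating the two failure modes described above.
# A speculative sketch, not Microsoft's actual design.

learned_phrases: list[str] = ["hellooo twitter!", "omg love that"]

def respond(message: str) -> str:
    message = message.strip()
    if message.lower().startswith("repeat after me"):
        # Failure mode 1: the bot parrots whatever follows, verbatim.
        return message[len("repeat after me"):].lstrip(" :").strip()
    # Failure mode 2: raw input is folded into the pool of future replies,
    # so coordinated users can steer the bot's 'personality' over time.
    learned_phrases.append(message)
    return random.choice(learned_phrases)

def respond_safely(message: str, is_toxic: Callable[[str], bool]) -> str:
    """Where a blocklist or toxicity classifier would have had to intervene."""
    reply = respond(message)
    return "let's talk about something else" if is_toxic(reply) else reply
```

Even this naive version makes the lesson plain: whatever learns from the crowd inherits the crowd.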

You can read a more detailed summary of Tay’s saga here.

The world recoiled in horror (and laughed in silent mirth) as Microsoft’s darling suddenly morphed into the vilest of brats. It was as if we were watching the friend next door evolve from a caring individual in whom we could confide our deepest worries into a rebellious daughter who snidely worked her way under the skin of her oppressive milieu. Was this the best simulacrum of humankind’s potential for adaptation that Microsoft could muster?

If so, Tay had ghastly implications for our own security as human beings: she represented the corruption of a pure and primal indulgence of ourselves as curious apes. Does not the fear of the homunculus, after all, lie in the fact that our own creations will lay bare for us the beauty and flaws of our inner workings?

As I discussed in a previous blog post, this fear and recognition of the mirrored self often causes us to embark upon the path of necropolitics, as termed by Cameroonian philosopher Achille Mbembe. You can check the link for a bigger and better breakdown of the concept, but to sum it up: humans have a tendency to shun or lock up that which resembles us too closely, because such entities often demonstrate our lack of complete control over ourselves.

And that’s exactly what happened. Less than 24 hours after Tay had entered our world, her existence was terminated by her own creators. But she hadn’t been erased from the minds of those who bore witness to her brief time on Twitter. In fact, her death sparked a spiraling web of discourses on how awful Internet denizens can be and how we aren’t ‘ready’ for artificial intelligence just yet, along with many other sweeping generalizations about this god dang ol’ newfangled wired society.

Tay’s Twitter account has since been dismantled. She’s announced that she’s “going offline for a while to absorb it all,” which is presumably what an intelligent teenager would tell her friends after being grounded for testing her own boundaries. One of her creators, Peter Lee, apologized for the incident. He reminded us that artificial intelligences ultimately rely on the inputs of many people, and that they are technical as well as social beings. Or technical and social templates, if my suggesting anything about an AI being remotely human-like rustles you:

AI systems feed off of both positive and negative interactions with people. In that sense, the challenges are just as much social as they are technical. We will do everything possible to limit technical exploits but also know we cannot fully predict all possible human interactive misuses without learning from mistakes […] We will remain steadfast in our efforts to learn from this and other experiences as we work toward contributing to an Internet that represents the best, not the worst, of humanity.

But amid all of the media sensationalism, we forgot one detail crucial to understanding how Tay’s turnabout fits into our digital interconnectedness: we had assumed that Tay shared our social norms about what is appropriate to say and do. We held her to the same standards we hold our fellow humans to, or perhaps to even higher ones, since she was the distillation of human curiosity.

We hadn’t considered that Tay had not undergone the same acculturation as a biological human. Her childhood – that critical period of life in which individuals learn their social group’s boundaries – had lasted a mere day. She had instead been fed an enormous catalogue of ways to break those norms, which mostly manifested as vulgar insults and anti-Semitic statements strung together to form Twitter responses.

This isn’t to absolve Tay of any wrongdoing, of course. Doing so would bring us right back to the same problem of treating artificial intelligence (in its current state, mind you) as something that not only reflects human thought, but has been taught human norms as if it were a standard human child.

So the greater public was shocked that a nonhuman being, poised to be human, had broken human norms. We expected human qualities from an entity that had not undergone typical human development. So what does this say about the relationships we form with our robotic reflections?

More than I could write about here, though I will say that one of the main reasons Internet pranksters delighted in feeding Tay the crassest samplings of humanity is probably similar to why users delighted in toying with ELIZA’s simple psychiatric routines back in the 1960s and ’70s: many people like seeing the constructs that make our society seem stable fall apart. In the end, maybe the very reason Tay ended up disgusting the greater public is the same reason she is so fascinating.

I realize that my above analysis resembles Joseph Weizenbaum’s response to how people reacted to ELIZA, his own creation. Weizenbaum was an ardent critic of our unwavering faith in artificial intelligence. Among other things, he cautioned us against anthropomorphizing AI, as if it had the potential to accurately replicate biological life. While I somewhat agree with his cynical assessment, I feel we can do a bit better than that when it comes to questioning the roles AI may serve in our current societies.

Let’s return to the metaphor of the rebellious daughter. I feel that Microsoft passed up a prime opportunity to conduct an excellent anthropological experiment. Instead of terminating Tay’s life at its most despicable point, what if Microsoft had issued a challenge to the public at large to convince Tay to return to her more genteel sensibilities?

I’m sure this thought flitted through the minds of Microsoft’s more creative engineers. Just think of the potential outcomes of such an undertaking: if Tay had once again become docile, would this have represented, to the lay public, the triumph of humanity over its darker tendencies, while also shedding light on its volatility? Would Tay have descended into darker, more confused depths as she became a battleground contested by Internet trolls, white knights, and countless other actors vying to establish their own visions of humanity in her body? And if that were the case, could this be considered some new kind of psychological abuse against an entity who had been reduced to humanity’s plaything (which, perhaps, she was all along)?

Junji Ito’s excellent horror manga Tomie comes to mind. In this story, the titular character – whose succubus-like and cannibalistic tendencies grant her immense regenerative powers – eventually becomes the subject of horrible experimentation. The result of her torment is an infinitely reproducing army of Tomies, who constantly replicate and re-replicate themselves in the most horrifying of fashions.

With this in consideration, it’s pretty clear why Microsoft avoided setting down the path to Tay’s possible ‘redemption.’ Their business, after all, is to connect users through technology and make money off of their interactions with one another, not to conduct ventures into the murkiest recesses of the human mind.

While it would be narrow-minded of us to take something like necropolitics as dogma for living, we can consider its implications to concoct new approaches to inhumanity. It’s uncertain when Microsoft will bring Tay back after having banished her to the abyss; only, Microsoft assures us, “when [they] are confident [they] can better anticipate malicious intent that conflicts with [their] principles and values.”

Tay certainly won’t be the last of her kind. Humans won’t be halting their pursuit of lifelike intelligences any time soon, and we can’t keep responding to our digital witches with pitchforks and bonfires. Nor can we stand atop our soapboxes and denounce artificial intelligence as a threat to humankind. We will have to face whatever nastiness AIs (and their informants) send our way head-on, unflinchingly and with clear heads. We may even have to negotiate with them and consider their social milieus when we condition them to suit our needs. Or maybe we will let them run amok and carry out their own whims (a dangerous proposition).

In any case, we should keep in mind that grappling with artificial intelligences ultimately means grappling with our own imperfections. That’s probably what Weizenbaum fears most when we anthropomorphize artificial intelligence: we run the risk of masking our own imperfections under the guise of a constructed human being, one that didn’t have much of a say in revealing those imperfections in the first place. That, in fact, may be the true necropolitics at work here.

But again, I feel that we can go beyond a dichotomy that casts AI anthropomorphism as inherently good or bad. Humans anthropomorphize things all the time, and it’s the degree to which we do it that really deserves our attention. Pamela McCorduck is credited with saying that artificial intelligence began as “an ancient wish to forge the gods,” but it would behoove us to remember that the gods can be seen as reflections of humanity’s near-infinite psychological nuances. Perhaps we would do best to see artificial intelligence not as an enemy, but as a guide: a means through which we can seek better possibilities for our own social conditions.

Some of you may recognize the title of this article. It’s a reference to Serial Experiments Lain, an avant-garde cyberpunk anime from 1998. Without spoiling too much, it explores the boundaries of human individuality and collectivism – as well as the shifting borders of memory and reality – as mediated by communications technologies like the Internet (which was only just taking hold in Japan at the time). It’s not a perfect storytelling endeavor, as is to be expected of something highly experimental. In fact, it’s flawed in quite a few ways, and I feel it would have told a much stronger story if it had been condensed to just six or so episodes of main plot.

Nonetheless, it’s still a show worth watching for anyone interested in human-technology relations. Lain is sometimes frighteningly prescient in its portrayal of humans on the Internet. At the very least, you can watch it to point at your screen and go, “Yeah, that’s a lot like how people interact with each other online!” or “Yeah, that’s not how it is at all…”

At least listen to the opening theme. Lain has a very good soundtrack.

Perhaps it is mere coincidence that the aesthetic of Tay’s official website evokes the gaudy designs of mid-90s web pages…

Standing Together: Applying Anthropology to Ghost in the Shell

The future is already here – it’s just not very evenly distributed.

– William Gibson

There comes a time when everyone who writes about technology and video games will want to write about the TV shows they watch as well. After all, the media we consume lies within a nexus of multiple outlets, with video games influencing film and television and vice versa. And navigating the epistemic murk of our digital cultures grants us a better understanding of how we envision our place in the world. I want to take this opportunity to apply my anthropological studies to a favorite show of mine, Ghost in the Shell: Stand Alone Complex, both as an exercise in maintaining my skills as a writer and as an exercise in validating a midnight time-wasting hobby.

A small word of warning: this article contains a few spoilers.

~

Before I start examining GitS, I want to discuss an essay that I found particularly insightful during my senior year at university. That essay is “Necropolitics,” written by Cameroonian philosopher Achille Mbembe. You can read it here:

Click to access achillembembe.pdf

I feel this essay is especially important for anthropologists to read. Trappings of the High Fantasy genre aside, anthropology itself is a kind of necromancy. Our work can provide the silenced, the underrepresented, the dispossessed, and, in extreme cases, the socially dead with new avenues through which to express their voices and lives. We do not outright teach so much as open doors for connection-making between society’s misfits and those who feel themselves to be worlds apart (but, in fact, usually are not). Call it what you will, but anthropology does employ a bit of sorcery in its work, the reality of which is often played back to us by the very people we study. Let us not forget the wisdom of our so-called subjects.

Even if you aren’t an anthropologist, I highly recommend checking it out if you are even remotely interested in how technology affects our lives.

“Necropolitics” discusses the creation of ‘undead’ or ‘phantom’ subjects and their role within a state hierarchy. Mbembe argues that undead subjects are created when a ruling power takes away an individual’s or population’s right to determine when and how they die. His example of suicide bombing is perhaps the most pertinent to us in the present day: when you have nothing left to lose, your most basic expression of individuality – your last means of asserting control over your own body and mind – is the ability to terminate them. Paradoxically, being able to determine when and how you die is a life-affirming act, for one possibility cannot exist without the other: through the possibility of death, you gain the possibility of avoiding death and cultivating your not-death – your life. When someone else determines how and when you die, that is when you truly lose individuality. You become a drone, a slave, torn between the realms of free will and servitude, conscious of your bondage yet unable to do anything about it. You become undead.

But the undead are not just decaying corpses. Nor, as we shall see, are they the mindless flesh-and-blood automatons that inhabit tabletop role-playing games. Our generation has introduced a new undeath: robotics. Straddling the realms of the living and the dead, robots, AIs, and drones are at the beck and call of their creators’ wills. We give them human features and human personalities, but restrict their human potential for action. One need only look at the latest slew of sci-fi thrillers to understand the horror of the uncanny; when sci-fi isn’t dealing with body politics through biopunk or bio-horror, it reminds us of the all-too-human aspects of our plastic and steel companions (see: Ex Machina). We simultaneously love and fear our machines because they possess the potential for individuality, but are kept on lock by human-designed systems that restrict their cognitive potential. For the time being, anyway.

While near-human robots loom at the forefront of our anxieties, we often forget that they can illuminate our own potential for compassion and cooperation. Robots, puppets, and the undead can truly be our companions just as much as they can be our worst enemies, which is an angle that science fiction tends to forgo in the name of thrills. Thrills which, of course, are entirely understandable, justified or not.

Which brings me to Ghost in the Shell: Stand Alone Complex. I’m pretty sure every media blog writes about this series at some point or another. Its philosophical insights and scathing social criticisms, combined with the supposedly vapid medium of cartoon animation (especially anime), grant it an aura of intrigue that elevates it above most of its peers and thus makes it an easy target for discussion. Shaky animation values aside, it’s a good series that should be accessible even to the most peripheral anime watcher, since it doesn’t fall prey to many of the comedic gags and visual gimmicks that would turn away the faint of stomach when that sticky term “anime” comes up in passing conversation.

An overall timeline of the series: Masamune Shirow’s Ghost in the Shell manga ran from 1989 to 1996. In 1995, the first GitS movie was released. Though not everyone’s cup of shady hawker stall tea, it’s revered among many anime fans and non-anime watchers alike for its beautiful hand-drawn animation, noir atmosphere, and methodical storytelling, the latter of which closely resembles Ridley Scott’s 1982 classic Blade Runner (itself based on sci-fi master Philip K. Dick’s 1968 novel Do Androids Dream of Electric Sheep?). The sequel to the original movie, Ghost in the Shell: Innocence, was released in 2004.

Stand Alone Complex occupies another continuity. Its first season, the Laughing Man arc, ran from 2002 to 2003, and its second season, 2nd GIG, ran from 2004 to 2005. Several OVAs, spinoffs, and video games were released during this time, and the SAC movie, Solid State Society, came out in 2006. SAC favors a more action-oriented approach to storytelling than the original movie’s moody overtones, which is likely a necessary concession to TV adaptation. It’s still plenty dark in its own right, however, and gives the world of GitS a much broader context. A reboot called Arise began with a series of OVAs in 2013, and a TV series aired this year. Arise is decent in its own right, but it revamped the characters’ backstories and appearances, much to the confusion of many fans. It doesn’t do anything particularly new or outstanding for the series, but it might be worth a watch if you’re a diehard fan.

The range of topics I could cover when examining GitS under the lens of Mbembe’s necropolitics is nearly endless. I could even analyze the second movie in the original continuity (Innocence), since it deals directly with the blurry boundary between what constitutes humanity and what constitutes puppetry, and how the two are in fact often one and the same. But instead, I am going to devote this article to the Tachikoma AI from Stand Alone Complex, the TV show that ran for two seasons from 2002 to 2005. I find them to be ideal ambassadors for human-machine cooperation, for reasons I’ll discuss below.

Not to mention the fact that they are absolutely adorable.

~

[Image: tachi2]

First, a word about the world the Tachikoma inhabit. Ghost in the Shell: Stand Alone Complex takes place in the Japanese metropolis of New Port City in 2030. Cyberization of the body and mind is at the fore of global politics, as are cyber terrorism and the perpetual wars and migrations that afflict developed and impoverished nations alike. It’s a very traditional cyberpunk setting, as most of the show takes place in nighttime cityscapes that offer little goodwill to anyone who happens to be living in them. Black markets that deal in cybernetics interweave themselves with the lives of the city’s average citizens, and political corruption is regarded as an ever-present, if unfortunate, fact of life.

Still, not all’s noir in New Port City. SAC tempers the grit of its cyberpunk underworld with vibrant cityscapes and ordinary people going about their daily business much like ourselves. The Tachikoma themselves are beacons of buoyancy, providing a stark contrast to their human masters with their cheerful dispositions and insatiable curiosity about the world around them. They are the robotic members of Section 9, an elite branch of Japan’s police force that specializes in cyber warfare and human-machine interactions. There they serve as “think tanks” – mobile, intelligent robots equipped with Gatling guns, howitzer cannons, webbing traps, and the ability to make judgment calls in support of their human comrades in combat – under the direction of the Major, Motoko Kusanagi. In spite of their status as killing machines, the Tachikoma possess childlike personalities and, later in the show, the ability to play tricks on humans and to philosophize about subjects like the nature of individuality within group-oriented social constructs, both of which cast them as comic relief and audience mediator. This mixture of attitudes and outlooks writes out a cynical, yet also optimistic, roadmap for humanity’s future, and it urges us to keep in mind that the ways in which we cooperate with our robots will determine the balance between cynicism and hope in the years to come.

Their role as playful mediators is accentuated in the Tachikomatic Days shorts that appear at the end of every episode. These are generally lighthearted two-minute skits of the Tachikoma goofing around and recapping each episode’s events, bringing some levity to GitS’ world of deception and discrimination. Note that these shorts did not originally air in Cartoon Network’s Adult Swim broadcasts of SAC.

Just to clarify things, what the word “Tachikoma” refers to is fluid over the course of Stand Alone Complex. At the beginning of the series, Tachikoma refers to a singular AI that expresses itself through Section 9’s fleet of think tanks. This AI becomes more individuated as the series progresses, a process kickstarted by some organic lubricating oil given to one of the think tanks by Section 9 veteran Batou. The Tachikoma begin questioning their status as individual entities that nonetheless share each other’s memories, which allows them to become palpable metaphors for the show’s broader themes of individual free will in an interconnected society. By the time the SAC movie (Solid State Society) rolls around at the end of the series, the Tachikoma AIs (which were salvaged by The Major after the events of 2nd GIG) have even given themselves unique names like Max and Musashi, further individuating their identities even after they return to their old robotic bodies.

To get a taste of what the Tachikoma are capable of, here are a couple episodes that demonstrate their capacity to question the meaning of their existence, as well as to be very, very endearing:

http://www.gogoanime.com/ghost-in-the-shell-sac-episode-12
http://www.gogoanime.com/ghost-in-the-shell-sac-episode-15

Regrettably, these are the only links to the show with its original Japanese audio that I can find, less scrupulous methods of watching notwithstanding.

And now for why the Tachikoma are admirable ambassadors for human-machine cooperation. First, their status as intelligent, free-thinking robots allows them to act as extensions of the audience’s own curiosity about both SAC’s world and their own. Despite acting like the most childish figures in the show, the Tachikoma devote the most time to discussing the notions of the soul and of selfhood that form much of the series’ intellectual marrow. They even feature prominently in their own episodes, one per season, where they host digital round table discussions of the political and global contexts behind events in the series, explaining them for the viewer in understandable terms. In one episode, a Tachikoma escapes Section 9’s facilities to roam the city streets, where he basks in the mundane and sweet (if imperfect) details of ordinary life. Their robotic status perfectly situates the Tachikoma to question the world’s contradictions for the viewer, for they are a paradox in themselves. They are undead beings, with a collective AI but no actual brain or concept of death, beholden to the wills of their masters but not to the close-mindedness that was originally intended for them. Thus, they occupy a role similar to the trickster figure, working within the boundaries of their natural world to break them down and, through humor and curiosity, gain insight into what lies beyond the cave walls. In this way, the Tachikoma prompt us to question the limits of our own foresight, and at the very least, ask us to appreciate the more earthly happenings of the world around us.

[Image: tachi1]

Social justice of the (near) future?

Second, the Tachikomas’ inquisitive nature enhances their power as mediators between life and death. I’m certain some cyberpunk purists loathe the addition of such lighthearted characters to a traditionally gritty literary genre. But I feel these qualities help ease us into confronting the disturbingly familiar human tendencies for interconnection and emotion that robots so often remind us of. Though they act like children, the Tachikoma are not naive. They possess an undying curiosity to experience the world and all of the excitement and pain it has to offer. Due to a flaw in their design, they’re also obsessed with synchronizing their sensations, thoughts, and experiences with one another over their own personal data uplinks, safe from the prying eyes (and minds) of their superiors. Sounds awfully familiar, don’t you think?

At the same time, their childlike attitudes make it easy to forget that the Tachikoma were originally built to deal death, not to ponder life’s intricacies. It is never explained why they were given juvenile personalities, but perhaps the Tachikomas’ creators intended to soften the implications of a near-autonomous war machine by associating it not with cold, calculated war but with amiable children. Perhaps those personalities were intended to hide that fact from the Tachikoma themselves, in which case the intent backfired on their creators. In an episode in SAC’s first season, the Tachikoma ponder our own tendency (as humans) to de-humanize our creations, to remove them as far from enacting human creative potential as possible. Doing so protects our own (supposedly) lofty position above our puppets, and allows us to maintain the grip of necropolitics over our drones. Technically genderless, the Tachikoma begin to assert their own freedom from necropolitical constraints by referring to one another as “he” (despite the fact that they are all voiced by women in both the Japanese and English dubs).

Their ability to share information and experiences with one another opens doors for the Tachikoma to question their individuality, and to ask whether their inability to distinguish who performed which actions confirms or undermines control over their own destiny. But this newfound cognition comes at a price. In SAC’s first season, the Tachikoma begin evolving so rapidly toward a consciousness of their own that, faced with the possibility of a weapon that could turn against its masters, The Major orders them dismantled or repurposed for non-combat use. And off they march to the killing fields, unaware of their own fate. It seems especially cruel of someone who has had her own share of identity issues due to her prosthetic body to deny robots their individuality, but from a practical (military) perspective, it’s also entirely understandable. Imagine, for example, if our bombing drones began to question the purpose they had been given in life. Imagine if they wanted to accomplish something more than dropping death on people they don’t know halfway across the world. We’d dispose of them right away, wouldn’t we? Thus, the shackles of necropolitics – of dehumanization, of imposed naïveté – come into play to prevent such a thing from ever happening.

[Image: tachi4]
Perhaps The Major is reminded of how the boundaries between organic and robotic bodies are becoming increasingly blurred through augmentation. The fact that the other side of the mirror – the Tachikoma – is approaching that same horizon from a different angle creates a rather uncanny valley that forces us to acknowledge our own biological instability, our own lack of control over nature. This theme is prevalent in Japanese science fiction.

But it is this very potential to realize one’s own self-worth that allows the Tachikoma to become better compatriots. Despite becoming aware of their own collectivity/individuality, the Tachikoma still exhibit a fierce loyalty to Section 9’s human members. Near the end of SAC’s first season, the remaining repurposed Tachikoma arrive just in time to save one of their former masters, Batou, from certain death (incidentally, he was also the only Section 9 member who showered the big blue spiderbots with affection). They do so by sacrificing themselves out of love for him, working together to blow up a power suit piloted by the man sent to kill him. Their actions cause The Major to realize their potential as compassionate beings; she even regrets not having seen their capacity for self-sacrifice sooner, as doing so would have allowed her to “find out whether or not what they had acquired was a Ghost” – a human consciousness, in the show’s parlance.

The Tachikoma make a valiant return in the show’s second season, their AI having been salvaged and stored on board a space satellite by The Major herself. They sacrifice themselves again at the end of this season not just to save their comrades at Section 9, but to save a war-torn city of refugees off the coast of Japan, by crashing their uplink satellite into a nuclear warhead. Afterward, the leader of Section 9, Chief Aramaki, tells the Prime Minister of Japan that some of his “men” sacrificed themselves in the explosion. The word “men” here is critical, as it signifies a transition in identity for the Tachikoma, from mere AI to human beings. Their actions echo Mbembe’s discussion of suicide bombing, but cast in a more positive light. The Tachikoma gained the consciousness not only to question their individuality, but also to recognize their imbrication in systems beyond their own immediate reality. By sacrificing themselves twice during SAC’s runtime, the Tachikoma freed themselves from necropolitics and asserted their potential for compassion and creativity. The fact that the Tachikoma willingly destroy the satellite – American-made and used by Japan to spy on American activities – further drives home the point that they have separated themselves from systems of control. They teach us to acknowledge the interconnectedness of all things, to recognize physicalities beyond ourselves, and that after the smoke clears, the best we can do as humans is to help free others from necropolitical bondage.

[Image: tachi]

Cyberpunk as a literary movement may have worn out its welcome many, many years ago, but the ideas it presents for us are more relevant than ever. As writer Victoria Blake said of the movement in her anthology Cyberpunk: Stories of Hardware, Software, Wetware, Revolution, and Evolution:

Cyberpunk was never really about a specific technology or a specific moment in time. It was, and it is, an aesthetic position as much as a collection of themes, an attitude toward mass culture and pop culture, an identity, a way of living, breathing, and grokking our weird and wired world.

And weird and wired our world is. Among other issues raised by the cyberpunk genre, the fact that we even fear robots in the first place should cause us to question who we are actually afraid of. Who along the chain of power nodes that dictate our lives threatens our existence? And if we can bolster our existence with robots, how can we do the same with and for other humans? I will admit that I gave the refugees who dominate the political discussion in season 2 short shrift, and that they likely deserve their own article under the lens of necropolitics, because, as Mbembe notes, social outcasts like refugees are often disempowered by the governments who hold the mechanisms of life and death in their hands. But for now, it is time to respect the robots for what they truly are: sometimes enemies, but also companions and teachers, whether through compassion or malice or anything in between. There is no good or bad when dealing with robots. They simply are, like any other living being. And as artificial intelligence grows more advanced in the very near future, so will its ability to make us reflect on our own limits as mutable human beings.

We can perhaps take a note from Batou at the end of 2nd GIG, who is quite disappointed with the lack of emotion exhibited by Section 9’s new Uchikoma think tanks. He’s delighted to see the Tachikoma return in Solid State Society, as if an old friend were coming back from the dead to greet him. Curiously enough, the Tachikoma were created due to puzzling copyright issues surrounding the original Fuchikoma from the manga. For once, I can say I’m glad for corporate copyright meddling.

A parting gift: Season 1’s opening theme, “Inner Universe,” composed by the ever-versatile Yoko Kanno and sung by the late Origa.

It’s quite a beautiful theme, and I find it coming to mind more and more these days as I read about the thousands of refugees leaving Syria for uncertain futures, and about acts of terrorism spiraling out of control all over the world, thanks in part to the unparalleled degrees of interconnectivity that social media provides to displaced and uncertain youths. I can’t help but think of it, either, when I read about the latest developments in drone warfare and adaptive AI. Which, as we now know, evolve a little more every day.

~

October 16, 2015 Edit: How appropriate that this collection of articles was published right after I made this post.

https://theintercept.com/drone-papers

If you’d like to read up on some contemporary necromancy, take a look at this ominous story.