AI goes primetime: HBO's Westworld poses questions about AI risks in its dystopian, Wild West-themed carnival ride.

Duo Behind Westworld Discuss Future of AI

The husband-and-wife team behind HBO’s sci-fi series Westworld expressed their concerns about the future of AI recently at the Electronic Entertainment Expo 2018 (E3) in Los Angeles. The video game industry’s largest and most eagerly anticipated annual conference, E3 has become a virtual mecca for gaming enthusiasts and techies, drawing crowds from across the gaming, entertainment, and technology scenes for an action-packed line-up of product launches, demos, panels, and convention booths.

Among its host of panel discussions about the future of gaming, E3 featured Westworld co-creators Lisa Joy and Jonathan Nolan on Tuesday, June 12th, in a conversation with journalist Tom Bissell about the influence of video games on Westworld.

While not about video games per se, Westworld’s premise has much in common with popular video games. Loosely based on a Michael Crichton-directed B-movie of the same name from the 1970s, Westworld depicts a fictional, near-future Wild West theme park populated by AI-powered robots all but indistinguishable from real humans, where guests pay exorbitant sums of money to live out their darkest fantasies in a massive, loosely scripted game of sorts.

Much like today’s video games, in which real humans interact in virtual worlds onscreen with other players and with programmed enemies or bystanders—known as non-player characters, or “NPCs,” in gaming slang—in Westworld humans physically interact with a vibrant, carefully orchestrated society of programmed NPCs, known in the show as “hosts.”

The E3 panel focused on these similarities, delving into a thought-provoking discussion of how the narrative structures, themes, and experiences of Westworld’s fictional park, and the ethical implications of players’ decisions within it, resemble those of “sandbox” or “open-world” video games such as the Grand Theft Auto and Red Dead series, where players create their own adventures in minimally structured worlds offering nearly boundless choices and an alluring—sometimes disturbing—degree of ethical ambiguity.


Above: Trailer for Westworld, now in its second season.

Near the end of the panel, however, Bissell asked Joy and Nolan about the future and risks of AI, a persistent theme in the show: early in the series, things go awry and the massive tech conglomerate in control of Westworld begins to lose control of its hosts. The brief but illuminating discussion on AI ethics that ensued revealed how even a world as precarious and morally fraught as Westworld might be, in Nolan’s opinion, a best-case scenario for a future ruled by tech companies and algorithms gone awry. No strangers to discussing the profound and potentially perilous technological implications of Westworld in media appearances, Joy and Nolan offered their analysis of the future of AI.

Bissell opened the discussion on AI by observing that grim fears of an omnipotent AI or a Westworld-like robot dystopia seem far off compared to AI’s more humble developments in the present. Before turning the conversation over to Joy and Nolan, he suggested that while the technology will eventually herald “really weird” advancements “way beyond our capacity to imagine,” at the moment AI seems somewhat narrow and underwhelming, alluding to Google’s recent demonstration of its voice assistant purportedly calling a salon to set up an appointment.

Joy and Nolan, however, expressed more concern than Bissell, noting that while AI is indeed a nascent technology at present, it can be very dangerous even without any far-flung doomsday risks coming to pass.

Nolan conceded that he shares some of Bissell’s skepticism, admitting that “we’re a long way out from what the show portrays, which would be called AGI or sort of a general level of intelligence,” but said he finds no solace in the technology’s slow pace. In spite of the darkness of his fictional world, Nolan said that his show’s anything-goes theme park might seem rosy compared to AI’s actual risks: “I don’t think the future looks like Westworld; I think we will be lucky if the future looks like Westworld. And that’s what keeps me up at night.”

Nolan added that he sees artificial superintelligence (ASI) as less dangerous than the risk-fraught terrain of the coming “interregnum” between today’s crude AI systems and the more advanced intelligences that might make for a better future, saying that “we shouldn’t be scared of artificial intelligence, we should be scared of artificial stupidity.”

According to Nolan, society is ceding too much control to today’s far-from-perfect algorithmic systems while ignoring the technology’s ethical risks:

“We’re headed into the gap in which we’ve allowed algorithmic intelligence to drive more and more of our experiences and our lives. The data is starting to take control, but the data has no fucking conscience. […] We talk a lot about artificial intelligence, [but] we don’t talk a lot about artificial morality or artificial sanity, which I think is actually something we should be a little more concerned about.”

Citing today’s first generation of self-driving cars as an example, Nolan said: “we’re in this moment in which we want the car to take over, but the car is like ‘I’m not ready yet, I haven’t learned enough yet,’ and that’s the gap.”

Joy shared similar fears about AI’s near future, but rather than expressing concerns about an all-powerful or—in Nolan’s view—imperfect AI, she fears that the coming age of algorithms will only highlight humanity’s flaws and foibles, amplifying the traits in human nature that are most “terrible and dark and awful.” In particular, Joy is worried about data’s ability to prey on and weaponize distinctions between groups, a prescient concern in the wake of the Cambridge Analytica scandal, in which the data firm used vast amounts of Facebook users’ data to exploit the psychological profiles of millions of people and potentially influence American elections.

“Maybe AI is not as complex as we thought, but I don’t necessarily know that humans are as complex as we’d like to think we are either. I think that we are subject to hacks psychologically, and those hacks can be somewhat basic. Sometimes they tap into the most elemental and lovely drives of us. […] It’s easy to take information aggregated en masse and parse it in certain ways that allows the most nefarious aspects of tribalism to defeat an understanding of commonalities and nuanced discourse that would really do us well.”


Above: Joy and Nolan discuss Westworld at E3. The AI-specific discussion begins here, near the panel’s end.