THIS CAUTIONARY TALE REMINDS US TO STAY HUMAN.
Artificial Intelligence permeates our daily lives. Siri and Alexa talk to us. Ask Google any question and it finds the answer(s) within fractions of a second, sometimes before we’ve finished asking the question.
Our devices also request confirmation that we are human with a reCAPTCHA task. As humans, we click and comply.
The documentary film Do You Trust This Computer (DYTTC), directed by Chris Paine, explores the development of intelligent machines, from “artificial intelligence” (AI), to “the new AI” (machines that learn, i.e., deep learning), to “Super AI,” or SAI (super artificial intelligence)—the point at which machine intelligence surpasses human intelligence, generating its own independent thought and action and causing unprecedented change to human life.
SAI has not happened yet. The impacts, and the scale of those impacts, are unknown, but the DYTTC filmmakers explore what it could be like to live in a world where AI/SAI continues to be integrated into our lives with little regulation or ethical guidance.
For several days following its release in April, the film streamed for free from a Facebook link. I watched it at night, alone, something I wished I hadn’t done, because it scared the existential daylights out of me. Not that I regret watching it. I’m glad that I did: I learned more than I ever thought I’d care to know. I just wished that someone had watched it with me, so we could have discussed it afterwards.
A few days later, the universe provided not just someone, but the one to talk to. I walked into The Water Lily Café and there he was: Thaddeus Wadleigh, the film’s cinematographer and fellow Topangan, standing in front of the pastry case. Tall, with a bespectacled, boyish face, Wadleigh met my zealous gush about the film with understanding and agreed to meet.
THE MAN BEHIND THE CAMERA
Passionate about photography, Wadleigh, 53, first fell in love with the camera at age six, when his dad gave him a Rolleiflex twin-lens reflex.
“The idea that you could see an iris open and close, that you could see the shutter and correspond it to the alchemical process of how long it sees the light…it was like magic, total magic,” he said. For a boy raised in the high desert of northern New Mexico, a camera was a lot more interesting than kicking around a rock.
Wadleigh channeled his curiosity about the world through a lens, which flowed into a diverse career, from working in the camera and film departments of feature films such as Legends of the Fall and Honey, I Shrunk the Kids, to shaping the look of films as cinematographer on many independent features and documentaries, including Who Killed the Electric Car? and Revenge of the Electric Car, also with Chris Paine.
Most recently, he lensed two films with director Kirby Dick that uncovered epidemics of rape and sexual assault in colleges, universities, and the military: The Hunting Ground (2015) and The Invisible War (2012). They recently completed production on The Bleeding Edge, a documentary due out later this year about advances in medical device technology and the implications for human health.
WHO IS PAYING ATTENTION TO AI?
The impetus for DYTTC was an open letter penned by a group of the world’s leading AI thinkers following a symposium in Puerto Rico in January 2015. The attendees, including Bill Gates, Elon Musk, MIT professor Max Tegmark, and many others, jointly signed the letter expressing the need for AI oversight and restrictions. More than 8,500 people have since signed it.
“The irony is that [the symposium] was only three and a half years ago, when the idea of a self-driving car was totally foreign to everyone,” said Wadleigh. Now, the self-driving car is part of our thinking, vocabulary, and, in beta stage, roadways.
The filmmakers interviewed several of the attendees, along with other inventors, filmmakers, philosophers, professors, innovators, and authors, whose viewpoints on the future of AI range from the apocalyptic to, at the least, the very concerned.
They also talked to a couple of more hopeful voices—Rana el Kaliouby, who is working on facial expression recognition software to help kids on the autism spectrum recognize social cues they’d otherwise miss, and scientist, futurist, and Google director of engineering Ray Kurzweil, who eagerly anticipates SAI and its positive effects on intelligence, creativity, and pretty much all human qualities.
As for Wadleigh, he found that “one of the most interesting things about the process of making the film was hearing all of these incredibly intelligent people talk about how blindsided humans are when it comes to new technology. We trust it and embrace it far too easily. We don’t question it. All of a sudden, we looked back and said, ‘What the f*@k were we doing?’”
What started as an exploratory discussion on film turned into a cautionary tale.
“We thought it was going to be easier to demonstrate what AI [and SAI] was really going to look like,” Wadleigh said, “but the film became the exact opposite—we don’t know how long it will take, what it’s going to look like, when it will happen.”
BEGIN THE BIG DATA
A little more than sixty years ago, in 1956, the field of artificial intelligence took root at a summer workshop at Dartmouth College in New Hampshire. More than a decade earlier, British scientist and mathematician Alan Turing had built the machine that decoded German naval messages and helped the Allies prevail in WWII. (Benedict Cumberbatch portrayed Turing in the 2014 film, The Imitation Game.)
In 1997, Russian world chess champion Garry Kasparov became the first reigning champion to lose a match to a computer, IBM’s Deep Blue.
Another IBM machine, Watson, trounced its human competition in 2011 on a live television broadcast of the game show “Jeopardy!,” defeating the two contestants who had won more games than any others in the history of the show.
A year before that, in 2010, Google became the first website to log one billion unique visitors and grew into the most powerful computing platform in the world. In one of the DYTTC interviews, writer Tim Urban (Wait But Why) describes Google as “the toenail of a giant beast in the making.”
We all contribute to the growth of the beast by interacting with Facebook and Twitter and a continually expanding stream of entertainment, games, and apps. To manage the growth, we need to think about how we interact with distant algorithms and what information about ourselves we spew out into computers in the form of data.
In the film, Polish psychologist and data scientist Dr. Michal Kosinski said that the AI running Facebook’s newsfeed had a mission to “maximize user engagement and it achieved that. Two billion people spend, on average, an hour a day interacting with AI that is shaping their experience.”
From our interactions, algorithms are learning our most sensitive information: our personalities, intelligence, political views, and sexual orientation. We leave huge digital footprints, and soon computers will know all of this about us just by scanning our profile pictures. Kosinski points out that, for people who live in countries that are not open-minded and free, this knowledge in the wrong hands could be a death sentence.
Screenwriter Jonathan Nolan says in the film, “Facebook is building an elegant, mirrored wall around us, a mirror that we can ask, ‘who’s the fairest of them all?’ and it will answer, ‘You, You!’ time and again. It slowly begins to warp our sense of reality, warp our sense of politics, history, global events, until determining what’s true and what’s not true is virtually impossible.”
A particularly prescient part of the film documents a presentation given by Alexander Nix, CEO of Cambridge Analytica, who explains the company’s methods of collecting data on social media and crafting nuanced messaging for a highly segmented target demographic on, for example, gun rights or campaign messaging. This segment was shot before the British firm was accused of improperly harvesting the data of more than 87 million Facebook users and of using it to influence the outcome of the 2016 U.S. presidential election.
SOME LEMONADE WITH YOUR DOOMSDAY STEAK
According to the film’s producers, the website has attracted 14 million visitors, with 1.4 million viewings of the film. Comments on the film’s IMDb page range from “A must watch for everybody on this planet” to “populist fear mongering and not much else.”
In the process of making the film, Wadleigh came to understand how perilous such predictions are. He used the analogy of the predictions made before the first detonation of a nuclear bomb: some people believed it would crack the planet in half. The bomb caused horrific, cruel, and massive devastation, but the planet did not break apart at its core. So it’s possible that the future impacts of AI won’t match the more dire predictions posited in DYTTC. Yet there is work to be done to ensure that the AI pendulum swings toward beneficial uses and the safeguarding of human life from bad actors who would use AI with ill intent, including in autonomous weapons.
MIT Professor Max Tegmark says in the film, “If it’s going to take 20 years to figure out how to keep AI beneficial, then we should start today. Not at the last second, when some dudes drinking Red Bull decide to flip the switch and test the thing.”
Futurist Kurzweil, 70, is optimistic. He takes a daily regimen of vitamins and supplements, trying to keep himself alive until 2029, when he predicts that machines will reach human-level intelligence, an event that he welcomes as an improvement of our species.
Wadleigh isn’t embracing that idea fully just yet. He uses personal safeguards in the form of apps and good old parenting to limit the amount of time that his two teenage children spend on a computer.
Do You Trust This Computer took two and a half years to shoot. Of the process, Wadleigh said, “Documentaries evolve. The worst thing is to go in with an idea and try to make the film fit the idea. You go in with an idea and the idea changes and you feel lost and think, ‘Wow, what a dumb idea. That was just idiotic,’ and then you look at it and go, ‘No, maybe there is something here.’ And you have to follow that for a while and trust it. Once you do that, you find out what the real film is.”
His advice for all of us is to “keep yourself human by practicing human traits like speech and communication and empathy and love for one another. Those are the most complicated things for anybody. They’ll be, in my opinion, the last things machines will be able to do.”
DYTTC can be streamed from the website for $3.99. The film launches later this summer on iTunes and Amazon, with special event, festival, and theatrical screenings TBA. The European premiere was June 2.
RESOURCES
The late Stephen Hawking cautioned in a 2014 Huffington Post blog to remain engaged: “So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong. Although we are facing potentially the best or worst thing ever to happen to humanity, little serious research is devoted to these issues outside small non-profit institutes…. All of us—not only scientists, industrialists and generals—should ask ourselves what we can do now to improve the chances of reaping the benefits and avoiding the risks.”
While AI isn’t fully explainable within the scope of this article, a primer on the subject from the late AI pioneer Professor John McCarthy at Stanford University can be found at: http://jmc.stanford.edu/artificial-intelligence/what-is-ai/index.html
Learn more by visiting, bookmarking, and getting on the e-mail lists of the following institutions that are working to safeguard humanity and guide us through the changes that are happening now and that will happen in the future.
- Do You Trust This Computer (http://doyoutrustthiscomputer.org/)
- The Future of Life Institute (https://futureoflife.org/team/)—FLI, which convened the January 2015 gathering in Puerto Rico, works to “catalyze and support research and initiatives for safeguarding life and developing optimistic visions of the future, including positive ways for humanity to steer its own course considering new technologies and challenges.” According to the FLI website, “The three big results of the conference were: 1) Worldwide attention to our Beneficial AI Open Letter, which garnered over 8600 signatures from some of the most influential AI researchers, some of the most influential scientists in other fields, and thousands of other AI researchers and scientists from around the world. 2) The creation of our Research Priorities Document, which became the basis for our beneficial AI research grant program. 3) Elon Musk’s announcement of a $10 million donation to help fund the grants.”
- OpenAI (https://blog.openai.com/)—OpenAI’s charter states that it seeks to build safe artificial general intelligence (AGI) and ensure that it leads to a good outcome for humans. The group believes that unreasonably great results are best delivered by a highly creative group working in concert.
- Artificial Intelligence: A Modern Approach (http://aima.cs.berkeley.edu/)—The website for the book contains implementations of its algorithms in several programming languages, a list of over 1,000 schools that have used the book, many with links to online course materials and syllabi, and an annotated list of over 800 links to sites around the web with useful AI content.
- The Machine Intelligence Research Institute (https://intelligence.org/)—MIRI conducts foundational mathematical research to ensure smarter-than-human artificial intelligence has a positive impact.
- Centre for the Study of Existential Risk at the University of Cambridge (https://www.cser.ac.uk/)—CSER is dedicated to the study and mitigation of risks that could lead to human extinction or civilisational collapse.
- Future of Humanity Institute at the University of Oxford (https://www.fhi.ox.ac.uk/)