The Future Might Not Be Friendly

If you're not scared shitless about artificial intelligence, perhaps you haven't been paying attention. On September 7, 2018, Tesla's stock plunged after Elon Musk sparked up a joint on-air with Joe Rogan. While the media and investors fixated on the spectacle of the billionaire CEO getting baked, almost entirely unreported was the moment in that wide-ranging interview when Musk confessed he had given up trying to forestall the end of civilization at the hands of dangerous AI.

For all his famous foibles, Musk deserves credit as one of the early voices calling for caution in the unregulated race towards AI, something he calls the "biggest existential threat" to humanity. Other luminaries such as Stuart Russell, Max Tegmark and over 100 leading experts in the field of machine intelligence co-signed a statement urging restraint in that same unregulated arms race. Stephen Hawking warned that AI could be "the worst event in the history of our civilization". A recent survey of AI researchers found that half believed there was at least a 10% chance the technology they were rushing to develop would lead to human extinction.

James Barrat, author of Our Final Invention: Artificial Intelligence and the End of the Human Era, warned in an interview with The Washington Post: “I don’t want to really scare you, but it was alarming how many people I talked to who are highly placed people in AI who have retreats that are sort of 'bug out' houses, to which they could flee if it all hits the fan.”

Is that concern from experts trickling down to the voting public? A recent opinion poll found that less than one percent of Americans ranked advances in computers as their number one concern. This is not surprising given that daily news reports largely consist of sensational stories that are not that new at all. As important as the media is in this era of misinformation, people often forget that mainstream news organizations are first and foremost competitive business ventures. Truth and profit never align exactly, and a daily diet of lucrative clickbait leaves the world woefully unprepared for a future transformed by exceedingly seductive technology.

In 2012 there were about one billion smartphone users, a number that has more than tripled since. Up to 800 million jobs may be lost to automation by 2030, displacing occupations from the factory floor to brain surgery. The profit-driven dash towards ubiquitous AI means none of our jobs may be safe in the coming decades. Are we philosophically prepared for the future? Or the present?

Our record of technological restraint has so far not been stellar. For instance, scientists were warning the public and lawmakers as early as the 1960s about the potential dangers of rising concentrations of CO2 in the atmosphere. The oil industry was likewise aware long ago of the perilous problem with its product and unsurprisingly chose to continue making vast amounts of money extracting and selling it. Previous generations had somehow existed without personal cars, cheap air travel and tropical fruit in the middle of winter, but once these luxuries became normalized there was little appetite to go back. Thirty years later we have yet to come close to the collective carbon cuts required to forestall disaster, even as extreme weather regularly reminds us of the terrible costs of inaction.

If we likewise can't restrain the rush towards AI, and we won't be able to control a superior intelligence once it's here, what can we do? Perhaps we are considering the wrong problem. Experts worry that a superintelligent machine might not have motivations precisely aligned with human values, and that even a small deviation between the two could be catastrophic for us. This question is critical and deserves close scrutiny. Specifically, what are "human values", and who gets to define something so audacious in this moment of philosophical flux?

The imperative around the AI alignment debate is predicated on the shaky assumption that human survival and human values are the same thing. Human survival is easily measured by the presence or absence of people. Human values, on the other hand, are perhaps more poorly defined now than at almost any time in history. Are you in favor of individual reproductive rights or the sanctity of life at conception? What’s your opinion on gender diversity, or are you sick of even hearing about it? Does God exist, or are we a random assemblage of subatomic strings? Conflicts rooted in opposing religious beliefs and ethnic identities continue to rage around the world, with participants fully willing to kill or be killed for their strongly held "values".

With much of society so divided on such distinctly human issues, how is a superintelligent AI supposed to align with us? Even if bare human survival were the only criterion, we ourselves seem demonstrably misaligned with our own longevity. Climate change, the destruction of biodiversity, plummeting birth rates in developed nations, or engineered pandemics could all imperil us if AI doesn’t get there first.

To have a better chance of aligning AI with our own wellbeing, we first face the monumental task of getting most of humanity to pull in the same moral direction. Finding our way out of these tall weeds will require admitting to ourselves and each other that we have become lost, and recognizing how we got here.

For the first time in history, much of humanity is unswaddled by any collective spiritual framework to help make sense of being alive. As religious traditions have been scraped away by rational inquiry in many developed countries, capitalism has filled the void. In the span of a few short centuries the quantum of Western existence shifted from immortal soul, to industrial work unit, to online consumer. What then is the measure of existence without a deity to love or judge us? Or a job to identify with, or money to spend? AI propels these changes, upending the wage economy and with it the most recent iteration of human identity.

Evidence of our ennui is all around us, and it hardly makes for a nurturing or consistent example for an omnipotent newborn. Borrowing from pop culture mythology, imagine if instead of crash landing in the cornfield of the kindly and wise Kent family, Superman's baby pod careened into Aleppo during the Syrian civil war, or a Uyghur reeducation camp, or the production set of the Kardashians. Time is short and the baby is coming. We had better look our best.

Perhaps the challenge with AI is not seeking to control something vastly more intelligent than ourselves, but defining philosophical values that can thrive in an age of infinite agency. If humans are to survive either our own ballooning abilities or those of our AI inventions, we need to clearly discern a source of humility and compassion that can withstand rational inquiry, even by something vastly smarter than we are. Conjuring a collective kinship that could conceivably restrain ourselves or our AI creations is the work of a nascent field I call robot philosophy. And as with any challenging journey, we need to start with a first step: stepping outside our own worldview.