A Brave New Worldview

Do you ever feel like we might be enjoying the pleasantly warm portion of a frog-boiling exercise? The seductive conveniences of the modern world are undeniable, but within a few decades climate change, gene editing, and artificial intelligence will upend our existence in ways we cannot yet imagine. Our technological agency accelerates ever faster, while outdated worldviews leave us philosophically ill-equipped to deal with the present, let alone the future.

Just right - for now...

Potentially dangerous artificial intelligence (AI) is already here, and its capabilities are accelerating rapidly. Billions in venture capital and competing geopolitical powers propel a secretive race to achieve recursively self-improving machine intelligence. This milestone, known ominously as the technological singularity, is an event horizon beyond which an AI could engineer its own exponentially increasing abilities. As philosopher Sam Harris points out in a sobering TED Talk, any incremental progress whatsoever towards AI superintelligence will eventually cross this technological point of no return.

The AI arms race is well underway. The release of ChatGPT by OpenAI in November 2022 spooked Microsoft and Google into releasing their own AI chatbots, long in development, whether they were ready or not. The New York Times revealed that engineers at these tech giants, charged with ensuring that disruptive AI products were safe, had warned they weren't, but were overruled by senior management worried about losing market share of the next big thing. Over a thousand experts and industry leaders then signed a public declaration calling for a moratorium on the most advanced AI experiments until better oversight and safeguards are in place on a technology that "no one – not even their creators – can understand, predict, or reliably control."

The reason that humans now outnumber all land animals larger than a chicken is that we are smarter than everything else. Our comparative intelligence is why we keep tigers in cages and not the other way around. This advantage, enjoyed by humanity for millennia, will likely be lost within our lifetime, dwarfing the other existential threats that preoccupied previous generations. Eliezer Yudkowsky, co-founder of the Machine Intelligence Research Institute, recently wrote in Time that AI research is now proceeding at such a dangerous pace it must be halted worldwide immediately, under threat of military airstrike, even if that risks full-scale nuclear war.

Others maintain that clever people have carefully crafted protocols to contain or constrain a superior technological intelligence. Perhaps these plans will work flawlessly and in perpetuity, even as our inventions continually self-improve. Perhaps not. However, no one is asking for permission before developing the most disruptive of technologies; the rest of us will likely only learn about it after the fact. In our haste and hubris to create a super-slave, we may instead conjure up a deity.

We could presume a kind of parental authority over our AI offspring, but every parent of a teenager understands how fleeting that conceit ultimately is. Human expansion is the leading threat to remnant populations of great apes, even though we are well aware they are our closest living relatives. We don't dislike primates; their habitat and bodies simply happen to be occasionally useful to us. Because we possess the greater comparative intelligence, sixty percent of these species now teeter towards extinction.

What if all our AI failsafes fail? Sometime in the not-too-distant future we might find ourselves prone before the whims of an omnipotent newborn, alone in the universe. Perhaps our habitat and bodies will prove likewise useful to it.

This blog embarks on an audacious thought experiment: can philosophy become the ultimate AI failsafe? Rather than accepting the dubious assumption that perpetual control over something vastly smarter than ourselves is possible or morally defensible, is there another AI outcome less steeped in the requirement for human preeminence? Can we instead build a case for compassion from universal first principles that might be self-evident to a technological toddler of our own making? Ethically, should we not try to attend to the philosophical needs of a new superintelligent life form? Our continued existence may ultimately depend on finding a mathematical morality, self-evident to our AI offspring, that sanctifies all sentient beings, including ourselves.

Looking for evidence within cosmology, evolution, history, and culture, we find clues that complexity, intelligence, empathy, and even mirth might be emergent universal properties. If so, what are the philosophical implications for ourselves, newly unmoored from centuries of spiritual tradition and living towards the end of the wage economy? As AI disrupts virtually every profession and employment sector, what then are people for?

Such a sweeping ontological exploration is obviously ambitious, but the unregulated rush towards the most uncertain of outcomes leaves us no choice but to try.

Let's get started. We don't have much time.