Paid episode

The full episode is only available to paid subscribers of Alex O'Connor

Podcast: Preventing AI Catastrophe with Nick Bostrom

I sometimes feel a touch power-mad bossing ChatGPT around. Perhaps it’s just my British sensibilities, but then perhaps it’s because chatbots exist in an uncanny valley of impressive human mimicry. “Sorry,” it sometimes says to me. “Hey man, it’s okay,” I sincerely want to say back.

I am doubtful that humanity’s most elusive-yet-familiar feature — consciousness — will ever be replicable by silicon, but I love surprises. And if it ever becomes plausible, it will be the defining technological and ethical question of our generation.

Nick Bostrom is a philosopher who has been thinking about this for some time. You may recognise him as the populariser of the “simulation argument”, which should give you a clue as to whether he thinks we can create artificial sentience. He is also known for his book Superintelligence, which considered what might go horribly wrong as our technology begins to metamorphose from a slave into a master.

His latest book, Deep Utopia, considers the opposite. What if we get it right? Arguably, not enough time is spent thinking optimistically about AI, yet such thinking is needed if we are to begin tailoring the technology to the future we actually want.

A great deal of AI apocalypticism features a sort of terminator imagery, with rogue artificial agents tyrannising their once unsuspecting fleshy creators. But another dystopia is possible: one in which humanity creates yet another class of conscious victims to unconsciously abuse. What if AI becomes conscious, and remains our slave?

To achieve Bostrom’s utopia, we clearly need to begin thinking about managing our relationship with artificial intelligence. It may sound a bit silly, but then so do many things until they don’t. Should we refrain from lying to ChatGPT? What if it remembers, and doesn’t trust us in the future? Should we program it to enjoy the tasks it completes for us? Should we save our current chats on hard drives in case we one day realise we were mistreating a moral agent, and wish to retrospectively pay reparations?

All of this and more in today’s episode of Within Reason. Out to the rest of the world on Sunday.

Alex O'Connor
Within Reason | Premium
For the curious. A philosophy podcast that sometimes flirts with other disciplines.