Center for the Study of Partisanship and Ideology
CSPI Podcast
Waiting for the Betterness Explosion | Robin Hanson & Richard Hanania

Robin Hanson joins the podcast to talk about the AI debate. He explains his reasons for being skeptical about “foom,” or the idea that there will emerge a sudden superintelligence that will be able to improve itself quickly and potentially destroy humanity in the service of its goals. Among his arguments are:

  • We should start with a very low prior on something like this happening, given the history of the world. We already have “superintelligences” in the form of firms, for example, and they improve only slowly and incrementally.

  • There are different levels of abstraction with regards to intelligence and knowledge. A machine that can reason very fast may not have the specific knowledge necessary to know how to do important things.

  • We may be erring in thinking of intelligence as a general quality, rather than as something more domain-specific.

Hanania presents various arguments made by AI doomers, and Hanson responds to each in turn, eventually giving a less than 1% chance that something like the scenario imagined by Eliezer Yudkowsky and others will come to pass.

The Hanson-Yudkowsky AI-Foom Debate by Robin Hanson and Eliezer Yudkowsky

He also explains why he thinks it is a waste of time to worry about the control problem before we know what any supposed superintelligence will even look like. The conversation also covers why so many smart people seem drawn to AI doomerism, and why you shouldn’t worry all that much about the principal-agent problem in this area.

Listen in podcast form or watch on YouTube. You can also read a transcript of the conversation here.

