Robin Hanson joins the podcast to talk about the AI debate. He explains his reasons for being skeptical about “foom,” the idea that a superintelligence will suddenly emerge, rapidly improve itself, and potentially destroy humanity in the service of its goals. Among his arguments are:
We should start with a very low prior on something like this happening, given the history of the world. We already have “superintelligences” in the form of firms, for example, and they improve only slowly and incrementally.
There are different levels of abstraction with regard to intelligence and knowledge. A machine that can reason very fast may still lack the specific knowledge needed to do important things.
We may be erring in thinking of intelligence as a general quality rather than as something more domain-specific.
Hanania presents various arguments made by AI doomers, and Hanson responds to each in turn, ultimately giving less than a 1% chance that the scenario imagined by Eliezer Yudkowsky and others will come to pass.
He also discusses why he thinks it is a waste of time to worry about the control problem before we know what any supposed superintelligence will even look like. The conversation includes a discussion about why so many smart people seem drawn to AI doomerism, and why you shouldn’t worry all that much about the principal-agent problem in this area.
Eric Drexler, Engines of Creation
Eric Drexler, Nanosystems
Robin Hanson, “Explain the Sacred”
Robin Hanson, “We See the Sacred from Afar, to See It the Same”
Articles by Robin on AI alignment: