11 Comments

This is super disappointing from Brian. (I listen to his podcast and typically consider him a careful thinker even when I disagree with him.)

I guess he’s not quite as embarrassing as Andreessen on AI, but that’s a very high bar to clear.

I’m only through the first half, and he demonstrates a severe lack of precise thinking while having the gall to accuse AI safety advocates of having to lie to make their case in simple terms. Forget being charitable or steelmanning the other side; Brian can’t even pass a simple intellectual Turing test and accurately represent the position he argues against.

There’s an inherent ambiguity in predicting the future. Most doomers will gladly share their forecasts and admit what they don’t know. Brian, in contrast, doesn’t seem willing to grant even a small chance that the doomer position could be correct, or even reasonable.

Brian conflates optimism/pessimism about the potential of AI capabilities with optimism/pessimism about the downside risks (whether they will even materialize, and whether they are solvable). He tries to portray doomers as laughably unrealistic about the potential of AI, but these same positions are held by many AI advocates who do not have doomer views about the downside risks. Predictions about the speed of progress differ, but plenty of AI optimists think it could come quickly.

The argument that AI progress will most likely (Brian seems to say almost certainly, or even definitely) taper off and hit some kind of limit could well be true. Lots of us hope it’s true. But comparing the potential for AI progress to, say, the steel industry seems pretty goddamn moronic when Moore’s Law has held in computing for decades and NVIDIA still seems to have room to make significant progress on GPUs. And that’s just the hardware side.

And that’s ignoring whether AI might end up capable of improving AI at some point. Merely reaching the equivalent of a human IQ of, say, 125, combined with the inherent advantages of being a computer unbound by the biological limits of a skull full of electrified meat, would make for a massively powerful system that fundamentally changes just about everything.

If Claude and GPT-4 are already pretty good at being idiot savants (inconsistent skills across domains), how many generations will get us to consistent high performance? Will the fundamental limits kick in right after GPT-5? 6?

AI is already being made agentic and embodied, so a limited supply of the written word will not be the binding information constraint.

He’s so careless that he lumps together AI safety (death, destruction) and AI ethics (racism, sexism) under the same doomer label, when in reality those two groups are typically adversarial, even if on paper one would think they should be aligned. (From the anti-doomer stance they may look aligned against progress, but that’s evidence of Brian misunderstanding, or at least misrepresenting, his opposition.)

The common citizen actually has a fair amount of concern about AI progress, as polls show. It’s pretty easy to explain to people that we are effectively building an alien intelligence with no good way of guaranteeing it plays nice with us, or that malicious humans can use it for great harm. Sci-fi and Hollywood have done good work here for a long time. It’s the midwit meme, and Brian is at the top of the curve.

Claims like “there’s no way AI could become so capable” or “there’s no reason to worry about AGI working out poorly” don’t enjoy automatic superiority; they aren’t somehow backed by better evidence.

Even if Brian is right—and I hope he is—he did not make a good case here.

Edit: Finished it. He did not get better.

Plenty of us doomers, including Yudkowsky, are libertarians at heart. Deregulation is typically a wonderful thing. So is competition. But AI has some characteristics not present in, say, the housing market or pharmaceutical research. This is a potential principal-agent problem where we don’t know how powerful or cooperative the agents will get.

Denying the utility of analogies and thought experiments altogether seems like an easy way to dismiss concerns instead of actually understanding the potential risks. Plenty of analogies are bad, but Brian is doing the thing where he dismisses doomers as a class as ignorant about the technical particulars, when in reality many deeply technical AI researchers are doomers. “My opponents are ignorant of this topic” is a great rhetorical move, but it’s laughably untrue here.

Thank you. I was considering writing a similar comment after listening to him throw around wild analogies that didn't make any sense to me and come across as super overconfident without displaying any nuanced understanding of the AI safety arguments.

Richard raises the concern about x-risk from AI, which Brian seems to treat as preposterous, as if it came from a naive child who hasn't learned subtraction.

Then Richard raises the objection about recursive self-improvement. Brian says "it's just not based in reality" and analogizes it to claiming the sun is blue (see 6:19).

But whether we can scale indefinitely and whether recursive self-improvement is feasible are different questions. Brian acknowledges recursive self-improvement is possible with a different ML method but says this "has very little evidence for occurring in the short to medium term and you would actually have to make the affirmative argument for that rather than just conjecturing that the existing ways will get you." But recursive self-improvement was the concern Richard raised, not scaling.

Recursive self-improvement may be a concern at some point. Yes, it appears we are not there, but when we get there, the concern would be a fast take-off that could potentially be dangerous.

Brian says that we have similar issues in chemistry but Richard responds that intelligence is qualitatively different. I find Richard's devil's advocate/skeptical position more convincing.

> Yes, it appears we are not there, but when we get there, the concern would be a fast take-off that could potentially be dangerous.

Why do you assume we will get there in a relevant timeframe?

Not an assumption but a possibility, and low-probability dangers should be taken seriously when they could be catastrophic or result in human extinction. Introducing a superhuman intelligence has never happened before.

Is your position that superintelligent, self-improving AI is likely but just very far away, so the concern should not influence policy? Or is it theoretically impossible or unlikely in your view?

> Not an assumption but a possibility, and low-probability dangers should be taken seriously when they could be catastrophic or result in human extinction. Introducing a superhuman intelligence has never happened before.

There's literally no evidence here. Pure Pascal's wager.

> Is your position that superintelligent, self-improving AI is likely but just very far away, so the concern should not influence policy? Or is it theoretically impossible or unlikely in your view?

Near guaranteed not to happen in the short/medium term, for the reasons mentioned in the podcast. Uncertain in the long term, far from guaranteed. But the latter is irrelevant to current-day policy.

The current capabilities of AI are rapidly improving across domains (text, image, video) as the models grow larger. The models show impressive capabilities in reasoning and answering questions. Simply expecting this trend to continue would result in superhuman intelligence within the next 20 years surely. If not superhuman, then the ability to recursively self-improve.

Superintelligence poses a catastrophic threat to human beings because it could be used to create weapons or technologies that are very powerful and potentially destructive. If current trends are not evidence, then what could I present to you as evidence?

Your expectation seems to be that current trends stop, or capabilities stop increasing, because of the limitations of transformers and training data size. So what if that is incorrect and we continue along the 2018-2024 baseline trend going forward? Is a ~100% prior that scaling won't keep working reasonable, even if I have "literally no evidence"? What would count as evidence otherwise? What can I present as evidence? If Sam Altman reported researchers at OpenAI using ChatGPT to produce code, would that be evidence for recursive self-improvement?

It's not Pascal's wager because there's no infinite expected payoff; the stakes are enormous but finite. Not taking small probabilities seriously would be very dangerous for humanity, and eventually we would see our demise. What level of x-risk, in your view, is high enough that it can no longer be dismissed as Pascal's wager? <1%? <0.01%?
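
(To make that distinction concrete, here is a minimal sketch with entirely made-up probabilities and a finite stake; the only point is that the expected loss stays finite and tracks whatever probability you assign, unlike Pascal's wager, where an infinite payoff swamps any nonzero probability.)

```python
# Minimal sketch with made-up numbers: finite stakes mean ordinary
# expected-loss reasoning, not Pascal's wager (where an infinite payoff
# dominates regardless of the probability).

def expected_loss(p_catastrophe: float, lives_at_stake: float) -> float:
    """Expected number of lives lost from a low-probability catastrophe."""
    return p_catastrophe * lives_at_stake

LIVES_AT_STAKE = 8e9  # roughly the world population; large but finite

for p in (0.01, 0.001, 0.0001):  # hypothetical x-risk estimates, not real figures
    print(f"p = {p:.2%} -> expected loss = {expected_loss(p, LIVES_AT_STAKE):,.0f} lives")
```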

Why would AI aid all sorts of other domains, like biotechnology, but not further the capabilities of AI development itself? That seems strange to me.

"Simply expecting this trend to continue would result in superhuman intelligence within the next 20 years surely. If not superhuman, then the ability to recursively self-improve."

Did you listen to the episode at all? Half the episode is spent debunking this with hard evidence, rather than the blind assertion and press-release extrapolation doomers like to do.

Brian, like many people who dismiss AI x-risks, seems to have a curious preoccupation with timelines. Technological progress is hard to predict, but to me it doesn't really matter whether artificial superintelligence is coming in the next five, ten, or thirty years. I don't know why that would matter if an institutional handoff to machines is something we're going to have to do anyway.

Question for Brian: how much of the AI doomerism perspective in EA do you think is instilled top-down by a few key actors? Would it be possible to direct them to texts such as Future Imperfect (so far its predictions about cybercrime have come true), or are they just closet central-planning commies? Human values alignment just intuitively feels like a bigger problem than some hypothetical AI going rogue. A rogue AI probably isn’t very realistic when the movies depicting it also involve time travel.

Keep in mind that the human alignment problem combines very nicely with highly capable AI to form a big problem, even if AI itself never becomes an independent threat.

I assure you we doomers are not all automatons taking orders from our EA chain of command. There’s plenty of disagreement among doomers over just about everything except that AI seems like it could become a significant risk, and that we should perhaps try to mitigate it.
