> Not an assumption but a possibility, and low-probability dangers should be taken seriously when they could be catastrophic or result in human extinction. Introducing a superhuman intelligence has never happened before.
There's literally no evidence here. Pure Pascal's wager.
> Is your position that superintelligent, self-improving AI is likely but just very far away, and so the concern should not influence policy? Or is it theoretically impossible/unlikely in your view?
Near guaranteed not to happen in the short/medium term, for the reasons mentioned in the podcast. Uncertain in the long term, and far from guaranteed. But the latter is irrelevant to current-day policy.
"Simply expecting this trend to continue would result in superhuman intelligence within the next 20 years surely. If not superhuman, then the ability to recursively self-improve."
Did you listen to the episode at all? Half the episode is spent debunking this with hard evidence, rather than the blind assertion and press-release extrapolation that doomers like to do.
> Yes, it appears we are not there, but when we get there, the concern would be a fast take-off that could be dangerous.
Why do you assume we will get there in a relevant timeframe?