6 Comments

I got through to the part where you ask them to explain Dall-E and gave up. The first portion of the conversation was pretty tough to get through. There are a number of bald assertions there which don't make sense. One which was repeated (I think; I'm doing my best to turn what they said into a coherent statement) was that deep neural nets don't generalize outside of their training data. This is untrue unless heavily qualified, indeed unless heavily qualified to the point of meaninglessness. The remarkable thing about deep neural nets is the extent to which they _do_ generalize beyond their training data, given that they are usually massively overparameterized.
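To give a flavor of that point, here is a toy sketch (my own illustration, not anything from the interview): a network with far more parameters than training examples can still score well on held-out data drawn from the same distribution, which is exactly the kind of qualification the "doesn't generalize" claim glosses over.

```python
# Toy illustration (assumed setup, not from the interview): an MLP with far more
# parameters than training examples can still do well on held-out data.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=200, random_state=0
)

# Roughly 20*256 + 256*256 + 256*2 weights, i.e. ~70k parameters for only
# 200 training points: heavily overparameterized.
clf = MLPClassifier(hidden_layer_sizes=(256, 256), max_iter=2000, random_state=0)
clf.fit(X_train, y_train)

print("train accuracy:   ", clf.score(X_train, y_train))
print("held-out accuracy:", clf.score(X_test, y_test))  # typically well above chance
```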

I gave up when they said that Dall-E 2 was trained in an adversarial fashion. This is just not true in any coherent sense. I suspect what they are thinking of is a generative adversarial network (GAN), but Dall-E 2 simply isn't one. The way Dall-E 2 works is that tons of captioned image data are scraped from the web, and two models are trained. One turns text into vectors in a high-dimensional space, called embeddings. The other is a decoder, which takes an embedding and turns it back into an image; the goal is for the decoder to take the embedding of a caption and produce the corresponding image. The decoder is trained using a method called diffusion, which is conceptually quite simple but is not adversarial in any way. There are additional details, but that is the basic idea, and it has nothing to do with training an adversary.
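To make the non-adversarial point concrete, here is a toy sketch of a single diffusion-style training step (my own illustration with made-up shapes and a stand-in denoiser, not the real Dall-E 2 code): corrupt an image with noise, condition on the caption embedding, and regress on the noise. The loss is plain mean squared error, and there is no discriminator anywhere in the loop.

```python
# Minimal sketch of one diffusion-style training step (illustrative shapes and a
# toy "denoiser"; not the actual Dall-E 2 code). The point: the loss is a plain
# regression against the added noise -- there is no adversary.
import torch
import torch.nn as nn

batch, channels, size, embed_dim = 8, 3, 64, 512
images = torch.randn(batch, channels, size, size)   # stand-in for training images
text_emb = torch.randn(batch, embed_dim)             # stand-in for caption embeddings

class ToyDenoiser(nn.Module):
    def __init__(self):
        super().__init__()
        self.cond = nn.Linear(embed_dim, channels * size * size)
        self.net = nn.Conv2d(channels * 2, channels, kernel_size=3, padding=1)

    def forward(self, noisy, emb):
        c = self.cond(emb).view(-1, channels, size, size)
        return self.net(torch.cat([noisy, c], dim=1))  # predict the added noise

model = ToyDenoiser()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

# One training step: corrupt the image with noise, ask the model to predict that noise.
t = torch.rand(batch, 1, 1, 1)                 # toy per-example noise level
noise = torch.randn_like(images)
noisy = (1 - t) * images + t * noise           # simplified corruption schedule
loss = nn.functional.mse_loss(model(noisy, text_emb), noise)
loss.backward()
opt.step()
```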

I'll conclude by saying this: almost everyone on any side of this debate is massively overconfident. We don't have anything like a coherent theory of intelligence, either for human beings or machines. We don't know how the brain works. We don't know what it is about the brain that makes people intelligent. We don't know what that sentence even means (i.e. what is intelligence?). We don't know why neural networks trained using stochastic gradient descent generalize so well. We don't have a coherent theory of how more capable models will impact the economy.

The only thing I feel even slightly confident saying is this: if it were really clear that some piece of code were going to displace human beings on a large scale in the short- or medium-term future, there would be observable economic impacts now. Certain firms would see their valuations skyrocket. Interest rates would be really high. People publicly writing about how the millennium is imminent would be showing how serious they are by putting their money where their mouths are, going way long on Google and OpenAI while borrowing a ton of money.

I'd like to see dialogue on the mind/body connection, especially in relation to developing healthy wellness work through movement to deal with "gender dysphoria," which is really body dissociation with a sexual identity component, usually an obsessive one. The AI train of thought is often a form of tunnel vision which does not compute the importance of connecting, weaving, threading and strengthening our human need to be confident and comfortable with our own unique human body. The body is the environment of the brain; the world is the environment of the whole of mind/body together. I know this inside and out as a retired professional dancer, retired early-grades teacher, and trans widow, that is, ex-wife of a man (tech exec, of course) who ideates that he is a female persona. Tech and trans ideology evolved along parallel chronologies, and both de-emphasize the importance of the physical sense of self. The Ute Heggen YouTube channel has much to say about all of it. The tech world, as a mostly male entity, must be seen as contributing to the mother erasure of "trans."

https://www.youtube.com/watch?v=c99jaMY8rXQ

Based on this interview, I think the most accurate way to understand the opinion presented here is that these men are constantly being asked by businessmen who have heard of AI whether it can be used for something it isn't good at, and they have to explain that no, it can't. I think this is really reasonable, because laymen, especially in the business world, often have overinflated ideas of what AI can do. And the book is about why it can't do that.

I think their opinion can also be reconciled with arguments made by those who think AGI is possible. Basically, they present an argument that current AIs could not become AGIs. I think AGI-believers also accept this, but with the crucial difference that they think it is not unlikely that the methods by which we train current AI could produce a substantially different system, an AGI. Perhaps it would even exhibit "drivenness"; presumably, to be a dangerous AGI, it would have to have something like goals. This is, as I understand it, the "sharp left turn".

These guys are very short-sighted; I wasn't impressed. They label lots of things as impossible simply because they haven't been done yet. This is foolish considering the recent pace of progress in AI, and I predict their views will age very poorly.

I wish you had asked them to define all the technical terms they were throwing around, like 'ergodic'.

I agree that it's practically impossible to model or understand the human brain. But I think it's plausible that AI could evolve general intelligence, given the ability to interact with the world in general ways and to physically reproduce, an artificial analog to biological evolution. Perhaps this is discussed later and I haven't gotten there yet.
