Jobst Landgrebe is a German scientist and entrepreneur. He began his career as a Fellow at the Max Planck Institute of Psychiatry, then became a Senior Research Fellow at the University of Göttingen, working in cell biology and biomathematics. In April 2013, he founded Cognotekt, an AI-based language technology company.
Barry Smith is Professor of Philosophy at the University at Buffalo, with joint appointments in the Departments of Biomedical Informatics, Neurology, and Computer Science and Engineering. He is also Director of the National Center for Ontological Research and Visiting Professor at the Università della Svizzera italiana (USI) in Lugano, Switzerland.
Landgrebe and Smith join the podcast to talk about their book Why Machines Will Never Rule the World: Artificial Intelligence without Fear. As the title indicates, the authors are skeptical of claims made by Nick Bostrom, Elon Musk, and others about a coming superintelligence that will be able to dominate humanity. Landgrebe and Smith think not only that such an outcome is beyond our current level of technology, but that it is for all practical purposes impossible. Among the topics discussed are:
The limits of mathematical modeling
The relevance of chaos theory
Our tendency to overestimate human intelligence and underestimate the power of evolution
Why the authors don’t believe that the achievements of DeepMind, DALL-E, and ChatGPT indicate that general intelligence is imminent
Where Landgrebe and Smith think that believers in the Singularity go wrong.
Listen in podcast form or watch on YouTube.
Links:
Rodney Brooks, “Intelligence without Representation.”
Nick Bostrom, Superintelligence.
Why the Singularity Might Never Come | Jobst Landgrebe, Barry Smith, and Richard Hanania
I got through to the part where you ask them to explain DALL-E and gave up. The first portion of the conversation was pretty tough to get through. There are a number of bald assertions there which don't make sense. One which was repeated (I think; I'm doing my best to turn what they said into a coherent statement) was that deep neural nets don't generalize outside of their training data. This is untrue unless heavily qualified, indeed qualified to the point of meaninglessness. The remarkable thing about deep neural nets is the extent to which they _do_ generalize beyond their training data, given that they are usually massively overparameterized.
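The point about overparameterization has a minimal illustration even in the linear case. The sketch below is a toy setup of my own (the dimensions and names are illustrative, not from any real system): a model with more weights than training examples can fit its training set exactly, yet the minimum-norm interpolator, which is what gradient descent started from zero converges to for linear models, still beats a null predictor on unseen data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical toy setup: a linear model with more weights (50)
# than training examples (20), i.e. heavily overparameterized.
d, n_train, n_test = 50, 20, 200
w_true = rng.normal(size=d)

X_train = rng.normal(size=(n_train, d))
y_train = X_train @ w_true
X_test = rng.normal(size=(n_test, d))
y_test = X_test @ w_true

# With more unknowns than equations there are infinitely many
# zero-training-error solutions; lstsq returns the minimum-norm one.
w_hat, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)

train_err = float(np.mean((X_train @ w_hat - y_train) ** 2))
test_err = float(np.mean((X_test @ w_hat - y_test) ** 2))
null_err = float(np.mean(y_test ** 2))  # baseline: always predict 0
```

Here `train_err` is essentially zero (the model interpolates), and `test_err` comes out well below the null baseline: memorizing the training set and generalizing beyond it are not mutually exclusive.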
I gave up when they said that DALL-E 2 was trained in an adversarial fashion. This is just not true in any coherent sense at all. I guess what they are thinking is that DALL-E 2 is a generative adversarial network (GAN), but it's just not. The way DALL-E 2 works is that tons of captioned image data are scraped from the web, and two models are trained. One is a model that turns text into vectors in a high-dimensional space, called embeddings. The other is a decoder, a model that takes an embedding and turns it back into an image. The goal is for the decoder to take the embedding of the caption and produce the image. This is done using a method called diffusion, which is pretty simple actually, but is not adversarial in any way. There are additional details, but this is the basic idea, and it has nothing to do with training an adversary.
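To make the contrast concrete, here is a minimal sketch of one diffusion training step. Everything in it is a toy stand-in (random vectors for "images", a single linear map as the "denoiser"; DALL-E 2 uses a large U-Net conditioned on an embedding), but the structure of the objective is the point: one model, one squared-error loss, no discriminator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-ins: 16 flattened 8x8 "images" and a
# per-image noise level t in (0, 1).
x0 = rng.normal(size=(16, 64))
t = rng.uniform(0.1, 0.9, size=(16, 1))
eps = rng.normal(size=x0.shape)

# Forward (noising) process: blend each clean image with Gaussian noise.
x_t = np.sqrt(1.0 - t) * x0 + np.sqrt(t) * eps

# Toy "denoiser": a single linear map for illustration only.
W = rng.normal(size=(64, 64)) * 0.01
pred_eps = x_t @ W

# Diffusion training objective: ordinary regression on the added
# noise. Nothing here is adversarial -- there is no second network
# trying to tell real images from fakes, as a GAN would have.
loss = float(np.mean((pred_eps - eps) ** 2))
```

Training minimizes this loss over many noise levels; generation then runs the process in reverse, repeatedly denoising from pure noise. An adversarial setup would instead pit a generator against a discriminator with opposing objectives, which is simply not what happens here.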
I'll conclude by saying this: almost everyone on any side of this debate is massively overconfident. We don't have anything like a coherent theory of intelligence, either for human beings or machines. We don't know how the brain works. We don't know what it is about the brain that makes people intelligent. We don't know what that sentence even means (i.e. what is intelligence?). We don't know why neural networks trained using stochastic gradient descent generalize so well. We don't have a coherent theory of how more capable models will impact the economy.
The only thing which I feel even slightly confident of saying is that, if it were really clear that some piece of code were going to displace human beings on a large scale in the short- or medium-term future, there would be observable economic impacts now. Certain firms would see their valuations skyrocket. Interest rates would be really high. People publicly writing about how the millennium is imminent would be showing how serious they are by putting their money where their mouth is, going way long on Google and OpenAI while borrowing a ton of money.
I'd like to see dialogue on the mind/body connection, especially in relation to developing healthy wellness work through movement to deal with "gender dysphoria," which is really body dissociation having a sexual identity component, usually obsessive. The AI train of thought is often a form of tunnel vision which does not compute the importance of connecting, weaving, threading and strengthening our human need to be confident and comfortable with our own unique human body. The body is the environment of the brain; the world is the environment of the whole of mind/body together. I know this inside and out as a retired professional dancer, retired early-grade teacher, and trans widow, that is, ex-wife of a man (tech exec, of course) who ideates that he is a female persona. Tech and trans ideology evolved along parallel chronologies, and both de-emphasize the importance of the physical sense of self. The Ute Heggen YouTube channel has much to say about all of it. The tech world, as a mostly male entity, must be seen as contributing to the mother erasure of "trans."
https://www.youtube.com/watch?v=c99jaMY8rXQ