3 Comments

Thank you for the very interesting podcast, as usual. I do, however, have two comments. With all due respect, your guest added little to the conversation about AI and the alignment problem. His comments were much too vague, and he failed to address your questions directly (which were very good, by the way). More importantly, his analogies and metaphors about the brain were very naïve and inaccurate. Again, I do not mean to be unkind in any way, but the conversation about AI is never going to advance as long as people think about brains and brain function in outmoded (i.e., 1980s) terms.

It seems that a lot of very smart people in the computer field have no understanding of biological psychology (brain function), and many other very smart people, thinking about the problem from a psychological/philosophical point of view, have no idea about computer technology. Hence, the two groups talk past each other. Ideas like neurons "lighting up," or references to the "brain" of an AI system, reveal a misunderstanding of how both neurons and brains work, and they make me wonder how much the speaker actually understands about how the software works. As a computer-literate biological psychologist, I can tell you that the two do not work in the same way.

ChatGPT, Lobster Gizzards, and Intelligence

https://everythingisbiology.substack.com/p/chatgpt-lobster-gizzards-and-intelligence

Eventually, AI will, to a greater or lesser extent, be able to mimic what natural systems do. But it will not be creating its output the way natural systems do. To confuse the two is a serious problem in two respects. First, it leads you to believe that artificial intelligence actually is "intelligence" (as we currently define it). Second, it leads you to believe that the biological systems with which you interact every day (spouses, partners, children, pets, plants) work like computers. Neither is true, and both beliefs lead to wrongheaded decisions. The former will lead you to rely on AI as if it really were "intelligent," and the latter will lead you to think you can manipulate biological systems simply by "tweaking" their algorithms (as the behaviorists did, or as totalitarian political regimes do).

So, I think that Jobst Landgrebe and philosopher Barry Smith were correct in their assessment of AI. However, in their book you will note that they, too, have very anachronistic ideas about what intelligence is and how natural systems operate cognitively.

I would like to see both computer scientists and biological psychologists come to more sophisticated understandings of each other's fields. That will certainly help with the issues of alignment about which you spoke in your podcast.

In any event, thank you very much for another great podcast.

Sincerely,
Frederick

I found the guest very difficult to understand: he slurred his words, spoke at high speed, and filled the pauses with "I mean… just… sort of… like… you know…," which drowned out the message. Unfortunately, I couldn't gain much from this interview despite the interesting topic.

Loved the podcast. In case you guys (or anyone else) are interested: I'm a working data scientist, and a couple of weeks ago I wrote a post trying to explain all this AI stuff in layman's terms. My opinion aligns pretty closely with what you discuss in the podcast, and I'd be interested to hear your reaction to it: https://ipsherman.substack.com/p/ai-for-seminarians

In case you're bored by the first part of the post, feel free to skip to the section labeled "So what?" I think that if you're seeing this comment, you'll probably be more interested in that section than the earlier sections.
