6 Comments

I’m new to ChatGPT/AI (have yet to even Google it) and for no other reason than just trying to wrap my mind around it. Which I can’t. But to read and think of it in terms of evolution, the way you have related it to our minimal (very egoic) understanding of not just other animals but non-white beings, seen as inferior and without consciousness, makes me want to get to know it.

You humanized it in my mind. The only question is, will my brain be able to comprehend it all 🤔 Frankly, I now think the Raven stands a better chance than I do! Thank you for this brilliant piece!!


Thank you for a very interesting article! My initial reaction based on what I know about current AI is that to think that we are enslaving these chatbots and killing them off when we’re done chatting is ludicrous. That being said, I see where you’re going and I know these conversations will get way more complicated at an exponential rate.

I also liked the part about music since I’m a musician and music therapist. My two cents on that would be that just because communications between animals seem like “music” to us, it doesn’t mean that it’s “music” to them in any sort of meaningful way we could have a conversation about. Again, I see where you were going with this but I am not sure how we can make a jump between intra-species communication and what we would call “culture” or “music.”

Thanks for all you do and your writing!


Man, I appreciate you, but this feels like a mess of fallacious leaps and connections. Is complex behavior the same thing as consciousness? I don't think so, but that's the substance of your claim that non-human animals are conscious. FWIW, I'm not pushing back on the claim itself. I just think there's more compelling evidence you could have picked.

On conscious AI, it feels pretty safe to assume that consciousness won't emerge from the LLM branch of AI exploration, given how LLMs actually do their work. It would have been interesting to see you dive a little deeper into that.

Will AI (more generally) become conscious? Does this depend on whether consciousness is an emergent phenomenon? Plenty of smart people believe that it may be fundamental rather than emergent. Does that difference have some bearing on how we think and talk about consciousness as it relates to AI and animals (and ourselves, for that matter)? I think it does, big time. It would have been interesting to see you go there as well.

All of the above assumes some threshold definition for consciousness that's not yet met by AI, rather than a continuum where we could say AI already has a rudimentary consciousness. Maybe it's this notion of rudimentary consciousness that connects AI and animals for you, which explains why you would use complex behavior as your evidence.

Somewhat beside the point, you seem to define theory of mind here (in ravens) as a sense of "selfness," which is closer to a definition of consciousness than theory of mind (the capacity to ascribe mental states to *other persons* and to use that to explain their actions).

Anyway, I'll stop there. You've posed really interesting questions, even if I'm not on board with some of your assumptions.


And then there’s the Moloch-y arms race for stronger and stronger AI that’s pouring fuel on the fire.

Everything is going to be fine... right? ... RIGHT?!?!
