Discussion about this post

Gaelan:

I find some comfrot in the fact that we're not entirely sure what AGI is or what will happen when it comes about. Also, how can we make informed decisions about what AGI will do when we're not entirely sure what human consciousness is? (It might be a field we tune into that's also the substrate of reality). If AGI is just perfect human thought, what will it know about love, compassion, or the interconnectedness of all things? Will its universe be limited to the vast trove of data and everything that can be inferred from it? Will it "think" really, really well, whatever that means? While impressive, that's hardly god-like, and it doesn't seem to be many orders of magnitude beyond where we're currently at with AI. If it will be used to make new weapons like intercontinental rail guns, bioweapons, remote mind control, etc., we're already going down that route without it. The fact that it has so many unknowns behind it gives me hope that it would also make a violent apocalypse less desirable by providing many avenues for avoiding it: carbon sequestration, nitrogen extraction for fertilizers, clean energy, etc.

And on AGI and consciousness, what is its/our purpose? Is it to procreate? Conquer time and space? Live happily in peace? Now that humans have mostly dominated the world, instead of building technologies like limitless energy, or mitigating the impact of supervolcanoes and asteroid impacts, we're still trying to dominate each other so a small subset of our population can control even more resources. For the past ~75 years the only thing preventing a species-ending apocalypse has been mutually assured destruction. Can we really rely on our current leaders to continue this, or will AGI rally humanity around a better shared purpose?

If every country is currently acting with this same imperative towards AI, then it's a crapshoot as to how this unfolds. It feels similar to the race to develop the atomic bomb, but also different in that this is a multipurpose tool rather than solely a weapon. I REALLY hope that as AI advances it makes obvious how absolutely ridiculous our current political and economic systems are.

Carlos Salazar:

Yes, centralization is a feature, not a bug, of information technologies, especially for LLM-driven AI architecture. However, the emerging "Free Energy, Active Inference" AI architecture could potentially consolidate as a decentralizing force, establishing an AI polarity where centralization and decentralization poles co-exist. Perhaps this could ignite a productive polarity dance.

Biological and Artificial Intelligence Path Dependency:

Human Nervous System | Machine Digital Architecture
"3D Embodied Network" | "2D Disembodied Network"
650 million years of evolution | 89 years of evolution

We humans master ongoing adaptation and learning, excelling at generalization, adaptability, and creativity. Digital machines excel at precision, scale, and consistency.

Why should this matter to us?

Our human neural physiology operates as an embodied 3D network that seeks correspondence with reality by uncovering causation. We cope, survive, and thrive in nature... making us inherently truth-seeking and wired to discover or innovate when pushed into unknown territory.

In contrast, LLM-driven AIs operate entirely in digital space, cut off from the physical world. They cannot verify anything against reality; they can only recombine linguistic patterns from their training data... they are neither "truth-maximizers" nor "free energy minimizers".
