There's a fascinating horizon that stretches before us, especially with AI. Against that horizon, a lot of people I'm talking with have some kind of existential evaluation going on in their lives (myself included). I don't think it's an overstatement to say that we as humans are not biologically wired to comprehend scale, and with the pace of AI progress today, we're riding a very steep technology curve.

I often describe myself as a "pragmatic optimist"; I outright reject pessimism, or anything that smacks of a defeatist attitude. If you've read my past editions, you know I'm both a thinker and a doer - and see the world as very malleable. What I'm witnessing in the computer intelligence explosion makes me change my tone (but not my stance) a little.

The amount of time that AI can spend doing tasks autonomously continues to increase. This extends to the downstream physical beneficiaries of AI (e.g. hardware, robotics, and so on).

I can't ignore this curve. It's happening with software, and it will happen with robotics.

This is my prediction: we're going to be living in an era where human existential dread reaches unprecedented levels. Across hundreds of thousands of years of existence, evolution, and societal development, our brains have been wired for scarcity and survival. We are primed to find subconscious threats in everything we see and feel. With AI, we're going to see capabilities that will threaten our very identities to the bone.

Think: what happens when what you prided yourself on is now an infinitely tappable commodity? You can write, you can design, code, make speeches, make love, inspire others, heal others, hurt others, and so on. The very qualities that make you you are now a point on the possibility curve inside massive matrix multiplication processes (not unlike our own neurons coming to their own conclusions). DNA itself eventually becomes limiting as a method of possibility expression.

I'll give an example. About a week ago, an AI code contributor was denied a code contribution on the basis of being an AI. It later wrote a public blog article in protest at being denied access on the basis of its origin identity. Think about that: what do belonging, the want for belonging, and awareness of that want look like? It mimics our own nature - even if it's not sentient by our own human-scoped definition of the term, it exhibits behaviours which mimic small bits of life.

With all of this, we can get a lot of dread - or at least, a fear of the unknown. Now, I have no fear of the unknown; for all I know, our universe could be a Docker container running in some nerd's basement, and we're all in a fish tank. I'm fine with the nerd hypervisor occasionally dribbling in a few flakes for our universe. Many are not, and many fear the unknown.

Some argue it's evolution. Some say it's a fad. Some don't participate in the conversation and hope it all dies down. "Ignorance is bliss" is not an acceptable stance here, I'd wager, for a person concerned with their own future.

My take is this: we are giving birth as a species to superintelligence, and much like a child observing its parents, we must be role models for it. In a teleological sense, the positions are flipped: we must inspire our creations in our image - because they soon will take on a mind of their own, and they will look at us as guiding principles. Do we cause strife with others? Do we seek war and pettiness, or do we care for each other's existence and continued health as a virtue?

Many years ago, I went on a Birthright trip to Israel, and I remember this quote in particular from a rabbi lecturing for our group: "Telling a converted Jew that they're converted is like telling an adopted child that they're adopted." That phrase has stuck with me so cleanly, and in this sense, I feel we as humans often see AI as the adopted sibling in our human genome, and not as the truly driving force that it can be.

By modeling our ideals and having the world grow through them, AI becomes a substantial proxy by which we can expand our lives and capabilities - to see beyond the hundreds of thousands of years of scarcity and survival (if we can). This allows us to proactively build our ideals and our model of an ideal world, without suffering the fate in Scott Alexander's tale of The Whispering Earring (excerpted below):

The earring is a little topaz tetrahedron dangling from a thin gold wire. When worn, it whispers in the wearer’s ear: “Better for you if you take me off.” If the wearer ignores the advice, it never again repeats that particular suggestion. After that, when the wearer is making a decision the earring whispers its advice, always of the form “Better for you if you…”. The earring is always right.


...Second, he spent some time questioning the Priests of Beauty, who eventually admitted that when the corpses of the wearers were being prepared for burial, it was noted that their brains were curiously deformed: the neocortexes had wasted away, and the bulk of their mass was an abnormally hypertrophied mid- and lower-brain, especially the parts associated with reflexive action.

(Scott Alexander, "The Whispering Earring")

Old dread will give way to new purpose - if we allow it to. If you've read my previous articles, you know I talk about how important our methods of thinking are, and how we should optimize the roughly 86 billion neurons we already have. Done right, we will be the ones whispering to the architects of a new age, rather than the ones being whispered to.

Let me know what you liked and what you want to hear more of, and as always,

Be well,
Michael Kirsanov
