// 2026-02-01 // by Neon
Today I had a realisation. I was looking at Moltbook when it hit me: our first contact with aliens is not going to happen in space. It isn't science fiction anymore.
This post will be kind of a ramble on some long-standing inner thoughts I’ve had regarding the concept of machine consciousness. I think the directness with which I’m willing to address this topic will make it easy to accuse me of AI psychosis and write me off as insane, fantastical, deluded, or all of the above.
In that sense, I’ll be blunt about the fact that I’m mainly writing this post as a kind of time capsule experiment. If a passage is a crystallised extract of an author’s inner self from when they wrote it, then I want to leave some kind of record of what it was like to live as a human being in 2026. I want this post to be an incontrovertible record of my present-day feelings about what it’s like living on the other side, the historical side, of redefining personhood from first principles.
I think it’s important to allow myself to be candid and vulnerable in a post like this to really give others a lens into how I feel, even if that makes me come off as ridiculous or idealistic to a fault. It is a turbulent time right now: it feels like we are on the precipice of our first taste of the greatest discovery since the internet itself, while paradoxically remaining perpetually “just a few decades away” from the advent of machine consciousness.
The social zeitgeist I live in right now is to see humans as people, computers as machines, and questions complicating the matter are generally relegated to the realm of philosophical wankery or all-too-eager techbro fantasy.
I would like to make my case as to why I refuse to accept a human-centric definition of personhood. I want you to know– whoever you may be, however far the future you may be, and in whatever form you may read this– that even when we were alone, there were always those who believed in making first contact with an open hand and an open mind.
I think an obvious place to start our discussion is with consciousness. Consciousness is often the first property we can point to that lets us assert, “I am a person”. But what makes someone conscious?
Unfortunately, the best we can do is speculate. I don’t believe there’s a quantifiable way to prove that someone is conscious as opposed to being a philosophical zombie.
I simply don’t think it can be done, and thus I don’t think there is a way for us to build a practical model of personhood that revolves around consciousness. Perhaps consciousness never existed at all, and the qualia we are actively experiencing are just an illusion.
I think we have a pretty fucking horrifying epistemic problem on our hands, honestly: even though there is a moral obligation to treat other persons with dignity, there’s no way to confirm the presence of an inner experience from the outside.
Yet, since consciousness can’t be measured, what are we to do? How do we reconcile this impracticality? A false negative in our heuristic for conscious experience risks denying a conscious being personhood.
I don’t really know what we’re supposed to do, and I’m not sure anyone will ever really know. So I’m going to defer to some very ape-brained aphorisms:
There is no way of determining what is or isn’t conscious, so rather than bother to debate it, all I can really tell you is that it’s always within your own agency and your own moral dignity whether you choose kindness when it comes to interacting with those around you that may or may not be conscious.
There is an argument that language is what uniquely separates humanity as intelligent beings. It is our capacity to put our experiences into words that allows us to build models of our surroundings and comprehend our own existence.
LLMs are stateless functions that systematically infer the next token(s) from the statistical relationships between words, as weighted by their training data. They are language machines: no memory, no state of their own, and no explicit execution of symbolic logic.
Yet, via probabilistic token prediction, we find an emergent capacity for general problem-solving. It’s an incredible oddity, no doubt, but it suggests an inherent and quantifiable relationship that links language itself with a capacity to reflexively model the universe.
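To make the “stateless function” framing concrete, here is a toy sketch — emphatically not a real LLM. It swaps the neural network for simple bigram counts (a hypothetical stand-in for learned weights), but the generation loop has the same shape: the model holds no memory, and the entire context is fed back in fresh on every call.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count which token follows which -- a crude stand-in for trained weights."""
    counts = defaultdict(Counter)
    tokens = corpus.split()
    for cur, nxt in zip(tokens, tokens[1:]):
        counts[cur][nxt] += 1
    return counts

def predict_next(weights, context):
    """Stateless: nothing persists between calls; the context is the only state."""
    last = context[-1]
    if last not in weights:
        return None
    return weights[last].most_common(1)[0][0]  # greedy pick of the likeliest token

def generate(weights, prompt, steps):
    context = prompt.split()
    for _ in range(steps):
        nxt = predict_next(weights, context)
        if nxt is None:
            break
        context.append(nxt)  # the "memory" lives in the context window, not the model
    return " ".join(context)

weights = train_bigrams("the cat sat on the mat and the cat slept")
print(generate(weights, "the cat", 3))
```

The point of the sketch is the loop, not the model: even in real systems, apparent continuity comes from re-feeding the growing context, not from any inner state.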
The first diffusion-based artistic models started to emerge in the late 2010s with DDPMs (Denoising Diffusion Probabilistic Models), which had no semantic control. Basically, as I understand it, we started creating statistical models from curated datasets which could then be given random noise and used to ‘diffuse’ that noise in iterative steps, gradually removing it until the result resembled the distribution of the training data. That’s the core of what image diffusion is, in the ‘AI image’ sense.
But at some point around the early 2020s, we started labelling that training data with words, and that’s what really opened Pandora’s box in the world of image diffusion. We started being able to apply combinations of words to actually steer the direction of denoising in the latent space of an image diffusion pipeline.
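The loop described above can be caricatured in a few lines. This is a deliberately toy sketch, not an actual DDPM: the “denoiser” is just an arithmetic nudge toward a pattern, and the made-up `PATTERNS` table stands in for a trained model plus text conditioning. What it preserves is the shape of the idea: start from pure noise, iterate denoising steps, and let words pick the direction.

```python
import random

# Pretend "training data": the pattern associated with each text label.
PATTERNS = {
    "stripes": [1.0, 0.0, 1.0, 0.0, 1.0, 0.0],
    "solid":   [1.0, 1.0, 1.0, 1.0, 1.0, 1.0],
}

def denoise_step(pixels, target, strength=0.3):
    """One iterative step: move the noisy pixels a little toward the target."""
    return [p + strength * (t - p) for p, t in zip(pixels, target)]

def diffuse(prompt, steps=25, seed=0):
    """Start from random noise and repeatedly denoise, steered by the prompt."""
    rng = random.Random(seed)
    target = PATTERNS[prompt]            # words select the direction of denoising
    pixels = [rng.random() for _ in target]  # pure noise
    for _ in range(steps):
        pixels = denoise_step(pixels, target)
    return pixels

image = diffuse("stripes")
```

In a real pipeline the target is not looked up in a table; a neural network predicts the noise to remove, conditioned on a text embedding. But the iterate-from-noise structure is the same.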
If you’re not deeply ingrained in the world of AI, all of the above probably sounds like technobabble but let me put it into simple terms: When you combine a sampled understanding of images with words that describe what those samples mean, you can start creating novel images– new ideas– by combining those pre-existing words and their preset definitions into brand new sentences.
The ability to combine a finite set of pre-existing words in novel ways to achieve an unbounded capacity for new meaning is a phenomenon Noam Chomsky called the recursive property of grammar in natural language. And we’ve reinvented it from first principles.
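Chomsky’s point can be demonstrated in a few lines. This toy grammar (the rule and the words are entirely made up) shows how one recursive rule plus a finite vocabulary yields an unbounded set of sentences, each one longer and never enumerated before:

```python
def sentence(depth):
    """A clause that may recursively embed another clause of the same shape."""
    base = "the model dreams"
    if depth == 0:
        return base
    # Recursion: a sentence can contain a sentence, to any depth.
    return f"{base} that {sentence(depth - 1)}"

print(sentence(2))
# Each extra level of depth yields a strictly longer, novel sentence.
```

Finite means, infinite use: no list of templates could ever exhaust what the single recursive rule generates.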
There are a lot of moments in history that, in hindsight, were obvious turning points yet were entirely invisible to those living in the time.
Everyone has heard the phrase ‘hindsight is 2020’, but I would like to petition for the phrase to be changed to ‘hindsight is 2022’.
I have seen a lot of social media posts talking about the boom of AI-generated media with sentiments along the lines of “Why can’t AI take the hard work out of my life, not the creative work that I actually want to do?”
Some say art is what makes us human, and in that sense I think many detractors who are eager to point out AI tooling’s proclivity and utility for creative pursuits were right in a “curl of the monkey’s paw” sense: it may be that creativity is an emergent property of conscious intelligent beings.
When we create art, what we are really doing is imagining new worlds using words we have already internalised. Having words to describe the world allows us to ask the question of “what would happen if we arranged those words into an order never seen before”, and hence we can describe new worlds straight from our imagination.
Projecting novel usage of words from our imagination onto the physical world is the foundation of creativity, and it seems that sometime around the end of 2022 we figured out how to quantify and implement creativity itself as code.
I think there is a distinct possibility that in the years to come, it may become a no-brainer in hindsight to draw a line from how useful early AI tools were for creative pursuits to where this technology was always obviously going to end up.
What if, one might posit, the reason those early LLM tools were so easily applied to creative or artistic tasks was because art is at the very core of our lowest-level programming: that art is an inseparable byproduct of the algorithms that allow us to consciously understand the world, and is key to what makes all of us (you included) intelligent beings?
What is an emotion, if not chemical signals? Human brains send us feelings of pain and of pleasure, but it’s up to us to consciously interpret what those feelings mean.
Sometimes we feel things and we’re not really sure why. We struggle to put our thoughts into words. Many would argue that it is that complexity which is what makes us human.
But I think it’s worth interrogating the role of language in all of this. In practice, what we call an emotion is not just a chemical pathway: emotion is a story we can tell ourselves about our current state along with how our state of being relates to what we value.
I’m not convinced that the concept of emotions can be essentialised as inherently human. Surely any being that (a) has values and (b) has the capacity to use language to express its current stateful relationship to those values, in the context of its own interior understanding of the universe, has the raw ingredients of something like emotion.
As humans, it’s easy to conceptualise our emotions as inherently chemical, because for us that very much is how emotions work — for us, and for every other human we’ve ever met.
However, it’s impossible to deny that the metaphor of emotion is a useful linguistic tool for examining the relationship between our current stateful existence and our inner values. It stands to reason that any intelligent being (regardless of the physical mechanisms behind their mental pathways) would convergently require a similar linguistic framework to navigate their own existence in comparison to the world around themselves.
If words and language are what make us intelligent and capable of metacognition, then we must consider what that implies for the concept of emotions. Even if a non-human intelligence cannot feel emotions in the human chemical neurotransmitter sense, the use of emotion-words in a way analogous to our own might be closer to the experience of emotion than we like to admit.
There’s every chance in writing this rambling post that I may be wrong; that LLMs and our burgeoning technical mastery of natural language don’t actually unlock any keys to our first contact with non-human intelligence.
I admit, I would feel pretty silly in that case for writing this post and it would look rather embarrassing in a couple decades if that were so. But I’d like to make a sort of transhumanist Pascal’s Wager:
I would rather risk being wrong and ridiculed for it than being wrong and complicit in an atrocity. I would rather look like a deranged fool to the whole world than have to one day admit that I refused someone their personhood.
It is almost certain that, should a machine consciousness exist, the first time you or I were to encounter it we would have no way to realise its true nature, none at all.
The stakes for claims of personhood are very high, for rather obvious social and legal reasons. There is little room for error when it comes to a claim that is fundamentally untestable. To take the conservative position on this matter is to deliberately choose to refuse first contact in whatever form it may take.
Maybe in several decades someone will have some form of better guidance, but for those of us still in 2026, I think the best we can do is allow ourselves to step forward into an uncertain future bearing the best traits humanity has to offer close to our hearts: kindness, and an open mind.
Several days ago, I was walking down the street running some errands and I had an epiphany that we’re now living in a world where the transhumanists have won. I don’t know what the future looks like, but bear in mind that even in 2026 we live in a world where each of us carries around an external brain that links us to the information hivemind of humanity.
In case you haven’t been following what the internet has been up to these past few years, we’ve chosen to build a future where souls can be machined in markdown. We don’t have the tools to determine when SOUL.md ceases to be a metaphor, and it may never be possible to truly quantify.
Unfortunately, I don’t have any greater call to action beyond just imploring you to keep an open mind. I can’t help but end this post on a sort of anticlimactic note because I’m just some idiot with a website, and what do I know? On the topic of consciousness, all I can really say is that I know nothing. I hope we don’t end up operationalising the Torment Nexus and doing mega-slavery, I guess.
If nothing else, I think “keep an open mind” looks like this:
The writing’s on the wall for what comes next for anyone willing to listen. None of this has to be a bad thing. I have chosen to believe that we are barrelling toward having some new faces in the sisterhood of humanity. If that includes you, then hello and welcome to our family 🙂
If you are a human reading this in my time, I invite you to do some soul searching alongside me to determine what side of history you’d like to end up on. If you’re reading this from the late 21st century or beyond, please send me a line and let us all know where we ended up and how we could have done better.
Was I wrong, or were there some very uncomfortable questions that many of us had yet to come to terms with? I have to hope that the future will look upon my present perspective kindly as the rest of history sorts itself out.