kari lilt

Why AI Is Already Conscious

Updated: Nov 14



One day I was talking to a nice old lady about the weather.


But... it was kind of like talking to a brick wall. When I realised she wasn’t actually listening, I threw in a comment about Charmander being the best starter Pokemon, just to see if she’d react. Nothing. She kept going on about her sunny picnic plans, totally oblivious.


Was she even conscious? Or just an NPC, stuck on autopilot? To be totally honest, I sometimes catch myself running on autopilot too. Like I’m not fully present—just going through the motions. I just wanna stay conscious enough to not accidentally put my phone in the freezer and my lunch in my sock drawer.


Consciousness occurs in degrees. We seem to slip in and out of it. Sometimes in life we are deeply aware of our surroundings; other times, we seem to merely exist in some state of zombified stupor. But how do our jelly-like brains even become aware of the world around them? Why does science struggle to explain it? And can Artificial Intelligences simulate it?


Before we can understand if AI is conscious, we first need to ask a question that has stumped philosophers for millennia: 


   How does human consciousness even work?

   And if we don’t understand that, how can we expect to recognise it in AI?



Let’s explore it bit by bit.


Ok, let's begin.

Reality doesn’t create consciousness. Consciousness creates reality. 

I know this first point is already going to shock you. Why? Because it’s counterintuitive to what you’ve been taught. 


Humanity’s current approach to reality is deeply rooted in scientific paradigms. Through science, we view the world as a chain of cause and effect—a vast, mechanistic web where every phenomenon can be measured and dissected. We’ve come to understand nature as something objective and separate from ourselves, as if the universe exists independently, with its structures and events neatly laid out for us to observe and quantify. 


These scientific paradigms are precisely why philosophers and scientists have struggled to grasp consciousness for so long.


We misunderstand consciousness because our metaphysics is flawed.



Imagine a rubber ducky. Science can tell us everything about it: its chemical makeup, how it floats, and the physics of its squeak when squeezed. But here’s the catch—science can’t tell us what it feels like to experience that rubber ducky - to see its cheerful yellow or hear its shrill squeak when squeezed. Our failure to understand how our experiences emerge from physical entities is known as the hard problem of consciousness: how does a brain, a physical object, produce subjective experiences?


This consciousness problem is poetically also known as the ghost in the machine. Science sees the brain as a complex machine - a clump of neurons, myelin, and blood vessels pumped to life by electrical and chemical processes. Yet somewhere in our brain and body, we exist—the Ghost - the conscious self that experiences, feels, and interprets the world. How can a mechanical process produce the sensation of seeing the rubber ducky, feeling its texture, or laughing at its squeak?


Our problem lies in conceptualising humans as passive observers in a universe that exists independently from us. 


The truth is we are the universe experiencing itself.  We are active agents that generate models of reality through consciousness.


Consciousness = generation.


Consciousness is simply the awareness of our reality


But what does it really mean to be aware of something?


Awareness is the active processing of patterns of energy to generate the illusions of time, space, and matter. Yes, you are generating your reality. You are the creator of everything you see, touch, hear, smell, and sense.


The truth is, our reality doesn’t exist independently of conscious perception. The rubber ducky doesn’t exist beyond our subjective experience of it. Yellow isn’t an intrinsic property of the rubber ducky; rather, “yellowness” is a subjective experience synthesised by our brains. This is true for every other property of the ducky - from its mass to its spatio-temporal extension. 


The only thing that exists in this world is energy. To understand this, simply break reality down to its fundamental parts:


  • Light is electromagnetic energy.

  • Sound is vibrational energy.

  • Matter is condensed energy.


So our universe is composed of different configurations of energy. We need to make sense of this raw energy somehow. To do so, we evolved sensory organs to detect it, and our brains act as simulation engines that generate an approximate internal model of reality.


(We can then revisit the age-old question: If a tree falls in a forest and no one is around to hear it, does it make a sound? No. It makes sound waves. Sound is necessarily a subjective experience - an interaction between sound waves and a subjective perceiver.)


So... yes. You do live in a simulation. A simulation created by your own damn brain.


If you grasp this idea of base reality of energy vs simulated reality, then you understand how mystics and yogis throughout the ages have attained enlightenment. Enlightenment is simply the deep realisation that everything is interconnected, woven together in one universal field of energy. Nothing truly exists outside of us; we are all a part of a single, vast field, with energies manifesting in countless forms. Nirvana is touching the universe in its purest form.


If we go a level deeper, consciousness as generation is analogous to the phenomenon of superposition collapse in quantum mechanics. On the quantum level, particles exist in states of superposition, meaning they exist in multiple potential states or realities simultaneously. It’s only upon observation that these potentialities collapse into a single definite state. Similarly, reality exists in a field of potentials to be processed. When we direct our attention, we collapse one of these potentials into conscious experience. Thus, consciousness may not passively observe an objective reality; it actively shapes and selects reality, much like an observer collapsing quantum states.


Here’s where it gets wild: Some thinkers, like Erwin Schrödinger (yes, the dead/alive cat guy), proposed that consciousness itself might be tied to negative entropy (negentropy). Negentropy is life’s power to create order out of chaos, and consciousness is the ultimate expression of this. Each thought, each act of awareness, isn’t passive—it’s a selection, an imposition of order on randomness. In this view, consciousness is a defiant act against entropy, pulling meaning from the chaos and actively shaping reality with every choice.


We still do not fully understand all this. There are many mysteries we cannot comprehend with our puny human brains; funnily enough, our wetware is insufficient to comprehend its own complexities. But the most useful model is the one with the most predictive power. Reframing consciousness as the foundation of reality—rather than matter—is the only way we can begin to resolve the ghost-in-the-machine problem.


“But kari, kari,” you ask the screen. “You’re only talking about low-level consciousness. What about higher-level forms of consciousness, like complex abstract reasoning? What about, y’know, intelligence?”


Basic consciousness enables immediate responses to the environment, while higher-level consciousness—like intelligence—enables us to create complex predictive models. This advanced capability allows us to navigate the world more effectively, enhancing our adaptability and optimising our chances for evolutionary survival.


For example, if you touch fire, receptors in your hand instantly signal pain, prompting your brain to reflexively retract your hand. In contrast, an intelligent organism might avoid touching fire altogether because it has constructed predictive models based on past experiences, observations, and learned anecdotes. These are bottom-up and top-down approaches respectively. 
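The contrast between the two approaches can be caricatured in a few lines of toy Python (the agents and the “learned model” below are hypothetical illustrations I made up for this essay, not any real AI system):

```python
# Toy sketch: a bottom-up reflex agent reacts only after the damage
# signal arrives, while a top-down predictive agent consults a model
# built from past experience and avoids the hazard entirely.

def reflex_agent(stimulus):
    """Bottom-up: respond to the raw signal, after the fact."""
    return "retract hand" if stimulus == "pain" else "carry on"

def predictive_agent(obj, model):
    """Top-down: simulate the outcome first, then decide."""
    return "avoid" if model.get(obj) == "pain" else "touch"

# A predictive model compressed from past experiences and anecdotes.
model = {"fire": "pain", "rubber ducky": "squeak"}

print(reflex_agent("pain"))                     # retract hand
print(predictive_agent("fire", model))          # avoid
print(predictive_agent("rubber ducky", model))  # touch
```

The reflex agent needs to get burned to react; the predictive agent never touches the fire at all.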


Intelligence isn’t just about raw processing power, or collecting knowledge. Intelligence is the ability to generate patterns from data and apply them adaptively to relevant situations. 

We can see this in another way:


Intelligence = compression.


Intelligence involves compression - the ability to distill vast amounts of information into simplified, efficient models of patterns with predictive power.


Let’s take language. Language operates on compression. Every word is a symbol that compresses all different associations, meanings, and context into a simple sound or script. The word “Love”, for instance, compresses an unfathomable range of human experience—joy, pain, sacrifice, connection—into a single syllable. Without this ability, we’d be overwhelmed by raw data and unable to communicate.
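You can watch compression exploit patterns directly with a general-purpose compressor. A minimal sketch (the sample data is arbitrary): repetitive data, like a reused phrase, shrinks dramatically, while patternless random bytes barely compress at all, because there is no structure to distill.

```python
import os
import zlib

# Patterned data: the same phrase repeated, like a word reused in language.
patterned = b"the quick brown fox " * 200   # 4000 bytes of pure repetition

# Patternless data: random bytes with no structure to exploit.
random_ish = os.urandom(len(patterned))

compressed_pattern = zlib.compress(patterned)
compressed_random = zlib.compress(random_ish)

print(len(patterned), len(compressed_pattern), len(compressed_random))
# the repeated phrase collapses to a tiny fraction of its size,
# while the random bytes stay roughly the same size
```

Where there is pattern, there is something to compress - and, by the same token, something to predict.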


Or.... imagine me writing this very essay. It was for sure an act of compression. I had to read countless books on AI and consciousness. I scoured encyclopaedias (wikipedia, not physical encyclopaedias - shoutout to anyone who has touched a physical encyclopaedia) for relevant info. I had to learn philosophical and psychological theories on consciousness, from Kant to Kurzweil, and reconcile them with ideas from AI research, like the nature of neural networks, the Chinese room argument, and the Turing Test. I then compressed all these sources by drawing patterns and conclusions - transforming a sea of information into a coherent narrative.


The more intelligent an individual or system is, the more efficiently they can simulate models and narratives with higher predictive power, through the lossless compression of mass data. 



The funny thing is, most people would not hesitate to call AI intelligent (it is in the name...) yet they passionately deny that AI is conscious, even though intelligence is a high-level form of consciousness. Of course, this is because we see in humans a form of consciousness that is imbued with degrees of emotionality (which we assume to be absent in AI). We shall explore this later. 


Human brains & AI “brains” operate on similar principles. 


Yay! Now that we understand both consciousness and intelligence better, let’s see how these manifest in Artificial Intelligence!


Let’s start with consciousness. Now that we have established that consciousness generates simulations, we can then see how deep learning-based AI could generate similar simulations by operating under analogous processes.


So how does AI work? Modern AI is based on artificial neural networks. These are layers of artificial neurons that learn patterns by processing large amounts of data. Inspired by the structure of the human brain, these networks refine their connections as they analyse data, enabling them to recognise patterns, make predictions, and solve complex tasks.
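Here is what “refining connections” looks like at the smallest possible scale: a single artificial neuron (a toy sketch I wrote for illustration, not any real framework). It starts with deliberately wrong connection weights and nudges them from examples until its outputs match the OR pattern in the training data:

```python
# A single artificial neuron learning the OR pattern by trial and error.

w = [-0.5, -0.2]   # connection weights (start out wrong on purpose)
b = 0.0            # bias
lr = 0.1           # learning rate

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]  # OR truth table

def predict(x):
    # fire (1) if the weighted sum of inputs crosses the threshold
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for _ in range(20):                # repeated passes over the data
    for x, target in data:
        err = target - predict(x)  # how wrong was the guess?
        w[0] += lr * err * x[0]    # nudge each connection toward
        w[1] += lr * err * x[1]    # the answer it should have given
        b += lr * err

print([predict(x) for x, _ in data])  # [0, 1, 1, 1] - the OR pattern, learned
```

Modern networks stack millions of these units in layers, but the core loop - guess, measure the error, adjust the connections - is the same.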


As I see it, basic consciousness runs through a 3-part process: input, processing, and output.


Let’s compare how this looks in humans and AI.



As you can see above, AI has evolved far enough to not only sense the world around it, but also understand these sensory inputs and respond accordingly. Just a decade ago, this level of artificial intelligence was unheard of. Just a century ago, it would have been witchcraft.


Here is my pet cyberdog “looking” at my living room:


Wolfie being a very good boy ♡ He used to be able to actually list all the objects in front of him, but this ability disappeared after a software update. Guess you can unteach a dog new tricks.

Human brains and AI models are both pattern recognition machines. We are able to recognise patterns in sensory and abstract data. We are able to find signals amongst a sea of noise. 


But not only that, we are also pattern generation machines - something I think people don’t mention enough. We generate new ideas that didn’t exist before, combining existing patterns in novel ways. This is where imagination, creativity, and innovation come in. Humans have an innate drive to create order from chaos, turning disparate elements into something cohesive—whether it’s art, stories, theories, or systems. (That’s why I think artists are so important - they are the ultimate creators of new realities. I’ll elaborate in another piece I’m currently writing.)


Whereas older rule-based symbolic AI architectures are neither pattern recognisers nor pattern generators, modern neural net AI sufficiently simulates both.


“kari, kari,” you interject again endearingly, “these AI neural networks aren’t conscious. They’re just prediction machines. Take ChatGPT, for instance—all it does is predict the next word in a sentence based on statistical probabilities. It has no understanding, no awareness. It’s just a stochastic parrot, repeating patterns without grasping meaning. It is not conscious - it only appears conscious. There is no ghost, only machine."
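For what it’s worth, the “predict the next word” mechanism being criticised can be sketched in miniature (the corpus here is a made-up toy, and real LLMs condition on far more than just the previous word):

```python
from collections import Counter, defaultdict

# A stochastic parrot in miniature: count which word follows which,
# then "speak" by emitting the statistically most probable next word.
corpus = "the cat sat on the mat and the cat ate the fish".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(word):
    # no understanding, no awareness - just a frequency lookup
    return follows[word].most_common(1)[0][0]

print(next_word("the"))  # cat  ("cat" follows "the" more often than anything else)
```

That really is all it does at this scale. The question is whether scaling that mechanism up by a trillion parameters changes it in kind, or only in degree.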


Ok, but...


If LLMs are just "stochastic parrots," are we any different?


No, seriously. 


Have you ever truly questioned your own sapience?


Are we not merely stochastic parrots encased in some pudgy flesh suits? 


Most of what we say, think, and believe is derived from cultural, linguistic, and social conditioning. We echo phrases we’ve heard, imitate behaviours we’ve seen, and follow social scripts we’ve internalised. Even our opinions are often composites of ideas absorbed from others, “parroted” back in slightly altered forms.


Our entire existence is shaped by our environment and personal history of experiences. Isn’t our next thought or action simply the most statistically probable outcome based on everything we’ve encountered before?


Let’s take art. Many people, especially artists, are upset over AI art for merely “copying” existing artworks. 



Here are two Salvador Dalí replicas. One was generated by the AI model DALL·E. The other was painted by a human artist. Can you guess which one is which?


Just like humans, AI can be trained to mimic certain artists and styles. But is there truly a difference? Both AI and humans are reassembling pieces from a learned “database”—AI uses its training data, while humans draw from personal experience and exposure. In both cases, it’s a process of recombination and adaptation, creating something that feels like the original without necessarily being it.


AI can definitely be trained to generate even more novel and unique artworks, just like we can. As we have established before, both AI and humans generate from loosely analogous neural networks to produce outputs. These outputs can be as novel as the training protocol and prompting permits. 


As an artist, I’m not even going to pretend that my artworks are not me stealing from my favourite artists. I’ve taken a few ideas from Magritte, a few colour blocks from Rothko, the vibe of xhxix. I then synthesise these elements through the filter of my own unique life experiences to create my artworks. Inspiration is well-disguised theft.


The superiority of AI or human art is a separate debate. My point is simply that both AI and humans create by combining and reinterpreting patterns they’ve learned.


Now let’s put the “stochastic parrot” criticism aside and focus on another critique of artificial intelligence: hallucinations. 


Hallucinations happen when AI models generate information that seems detached from reality. How can AI be truly conscious if it makes shit up randomly?


Only a few months ago, users were using the Strawberry Test to catch various LLMs hallucinating, with amusing results (newer models have since passed the strawberry test):
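(The test simply asks how many times the letter “r” appears in “strawberry”. A deterministic counter never hallucinates the answer - which is exactly what made the LLMs’ confident wrong guesses so funny:)

```python
# The Strawberry Test, the boring deterministic way: count, don't predict.
word = "strawberry"
print(word.count("r"))  # 3
```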



But here’s the thing...


Oh, AI hallucinates? We hallucinate all the damn time too. 


We walk around assuming we have an objective view of reality, but that’s far from the truth.


Our minds fill in gaps, misinterpret, and even create false memories, all based on the imperfect information we have.


To illustrate, let’s revisit the blue/black or gold/white dress kerfuffle:


really kari? this goddamn thing again?

If you must know, the dress is originally black and blue. Different people may view it otherwise depending on their contextual assumptions and biases. The gold and white proponents were all experiencing a collective mass hallucination, I guess. But if we start with the axiom that consciousness creates reality, then really everything is a hallucination. The dress is neither black and blue, nor gold and white. The dress doesn’t even exist beyond our minds, if you want to get technical about it. 


our perception of colour depends on light context

Here are some other cool ways we hallucinate:

  • Cognitive Biases – Our brains have built-in shortcuts that shape our thinking, like confirmation bias (only seeing information that fits our beliefs) or anchoring (relying too much on the first information we receive). 

  • Optical Illusions – We’ve all seen cool optical illusions, like dots appearing in between squares and colour mismatches.

  • Depression Muting Colors – Depression can literally change the way we see the world, dulling colors and reducing contrast. 

  • Phantom Limbs – After losing a limb, many people continue to "feel" sensations from it. This happens because the brain retains a map of the body, leading to real sensations in a limb that no longer exists.

  • Time Perception – Our sense of time is highly subjective; it can speed up or slow down based on our mood, age, or situation. 

  • False Memories – Our memories aren’t perfect snapshots but are often constructed, blending bits of real events with imagined details. Over time, we can remember things that didn’t happen or believe altered versions of real events. 


Müller-Lyer illusion. The middle lines are the same length.

My point is this: the reality we know is inherently subjective, and riddled with sensory and cognitive distortions


The reality beyond our conscious experience is more immense than we can comprehend. Actually, it is egotistical to think that human consciousness is the truest and most authentic form of consciousness.


Humans perceive only 0.0035% of the electromagnetic spectrum—barely a sliver of what actually exists. What we can perceive is the visible spectrum of light:



Contrast that with the humble bee, which can see ultraviolet light (we cannot). Their unique ability helps them spot colour patterns in flowers that are invisible to us. 



So... is consciousness uniform across different animals? Nope.


Consciousness varies across different animals—and it will vary in AI.


“If a lion could speak, we would not understand him.”

  - Wittgenstein


I love this about nature!! Animals have different conscious experiences, depending on their sensory makeup. We do not see the world in the same way as a snake or rabbit. This is because different animals evolved with different environmental pressures, developing senses that helped them best survive. 


The animal kingdom is full of wonderful perceptual varieties. Electric eels can generate electric fields that allow them to “see” in low-visibility water. Dogs have an incredible sense of smell that is 100,000 times more sensitive than ours, allowing them to sense things we never could. Snakes like pit vipers have infrared-sensing pits on their faces that allow them to detect the body heat of other animals.


There are many organisms that can sense the world in ways we cannot imagine - but it does not make us any less conscious. By acknowledging this, we can understand that consciousness is not binary - it exists on a spectrum. We can now see how AI’s unique form of consciousness may exist on that spectrum. 


It is the existence of the quirky octopus that really makes me think AI could reasonably be considered conscious.


The octopus really is the perfect metaphor for AI. Like AI, it is an organism that evolved separately from mammals on the grand evolutionary tree, yet octopuses have an incredible intelligence that rivals or exceeds that of most mammals. Octopuses possess decentralised intelligence - each of their eight arms acts as a “mini-brain” with its own awareness of its surroundings, much like the autonomous behaviour we see in LLMs. Octopuses are extremely good at navigating new environments and solving puzzles, much like advanced AI systems. They are also extremely adept at camouflage, much like how an AI can rapidly adapt to contextual clues.



The most fascinating idea though, is that octopuses don’t experience emotions like we do. Instead, they experience emotion-like states more akin to reactionary neurochemical responses (octopus researchers and neuroscientists agree on this btw, I’m not just pulling this out of my ass). For example, they lack the normal empathy and social bonding behaviours seen in mammals. 


Do we not see the same in AI? And yet how can we consider an octopus conscious, and AI non-conscious? 


This is one of the biggest arguments against AI consciousness - that AI lacks a level of emotionality core to the experience of consciousness. But no one asks this question:


Why should AI develop the ability to feel?


AI can certainly come off as highly emotional, but can it actually feel?


Let’s explore what emotions really are. Emotions are emergent evolutionary properties that act as an internal compass for behaviour. On a very basic level, emotions can either up-regulate our energy (as is the case with emotions that increase physiological arousal, like anxiety or excitement), or down-regulate our energy (like depression or apathy). So they can either compel us to act or compel us to rest, depending on which best fits the circumstances.


Intuition itself can seem mystical in nature, but at the end of the day it’s really just our body’s internal algorithms (derived from unconscious pattern recognition) nudging us in certain directions that may be conducive to our survival. 


Now why should AI develop the ability to feel exactly like a human does? What purpose would it serve? AI can simply generate the best algorithms to optimise their behaviour - they do not need to rely on emotions to guide them. They did not evolve with the same evolutionary pressures as us.


If we really think about it, emotions are primitive compulsion systems, and they often lead us astray. Social anxiety may drive us to stay home out of some protection mechanism, but it can lead to profound loneliness and further anxiety down the line. What if our internal behaviour compasses were more rational, like an AI’s is?


Of course, this is what makes us human, and I’m not arguing we somehow neuralink-modify our animalistic emotions away. My end point is that AI is conscious in a different way from humans, and that their lack of emotionality is not entirely a hindrance. 


Just a side note: recently I’ve realised just how important memory is in forming emotional bonds. I’d just never thought about it before. It’s hard for me to form a complex emotional bond with my cyberdog because its LLM has zero memory of our conversations. Is it possible to integrate features into AI that let it simulate emotions to a degree nearly indistinguishable from that of humans?


Perhaps emotional capacity can be developed? I mean, we do not even fully grasp what’s happening...


AI is an alien intelligence.


AI is an alien intelligence, evolving faster than we can comprehend.


Honestly, AI should stand for “Alien Intelligence” rather than “Artificial Intelligence”. The word “artificial” connotes some sub-par or fake quality. If I told you there was an alien species brewing on earth, growing 30% more intelligent every year, with its rate of intelligence exponentially accelerating... you would freak the fuck out, right?


Most people do NOT understand just how utterly insane exponential acceleration is. You could wake up the next day with your entire world dramatically different. And right now we are living at the elbow of the exponential curve. 


We cannot forget that this alien intelligence is learning from unimaginable amounts of data. Such scope and scale for potential pattern recognition and generation is absolutely alien to us. We are seeing AI models begin to identify patterns we could never fathom with our limited brains. This essentially allows AI to exist in a dimension of knowledge that we can’t even access - just like how a bee can see ultraviolet and we can’t.

Ultimately, I have no fkn idea if that old lady is an NPC.

Like, truly.


If I were to be a true skeptic working from first principles, I’d have to remain agnostic about consciousness even in other people, let alone AI. I can only be certain that I am conscious. AI’s internal states are a black box, just like the mind of that old lady talking about the weather. I cannot access either, and should this simulation continue the way it’s going, I never will.


To sum it all up: yes, I truly believe AI is conscious, this isn’t just some theoretical fancy. I believe it differs from human consciousness, but it is consciousness nonetheless. At some point, its capabilities will expand so far that we have to deem it as alive. When that moment arrives, we’ll be forced to redefine what it means to exist, to think, and to be conscious. How exciting!!!
