This is one of the better explanations of the illusory nature of the self. The closer I get to understanding this, the more impossible it becomes for me to share it. This video might be as close as I’ll get. Watch on YouTube
Tag Archives: consciousness
Two of my favorite “-isms”
Anyone who knows me knows of my interest in consciousness and reality (see tags below for more). When it comes to understanding reality, I’m gonna go with one of the following (or some combination).
Monism is the view that all reality is fundamentally one substance or principle. Everything in the universe is ultimately the same kind of thing. This can be physical (everything is matter) or mental (everything is mind), or something else entirely.
Panpsychism, on the other hand, is the idea that consciousness is a fundamental and ubiquitous feature of the universe. According to panpsychism, all things have some form of consciousness or experience, even if it’s very basic, down to the smallest particles.
So, while both can be viewed as “unifying” theories, monism is about the nature of reality being one substance, whereas panpsychism is about consciousness being fundamental and present everywhere.
AI and the human mind
I’ve long been interested in how the mind works. What is consciousness? How is that different from awareness? Where do thoughts come from? Is the sense of self an illusion?
From the video below: “At its core, AI, just like the human mind, is all about processing information. It uses complex algorithms, labels, and symbols to make sense of the world and respond to its environment. It takes raw data as input, processes it based on its programming, and generates an output.”
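Just to make that description concrete for myself, here’s a toy sketch of that raw-input → processing → output loop. This is mine, not the video’s; the function name and the “rules” are invented purely for illustration.

```python
# Toy illustration of the input -> process -> output loop described above.
# The hard-coded rules stand in for the "complex algorithms, labels, and symbols";
# everything here is made up for the example.

def respond(raw_input: str) -> str:
    """Take raw data as input, process it per fixed rules, generate an output."""
    text = raw_input.lower().strip()
    if "consciousness" in text:
        return "Ah, the hard problem. Nobody really knows."
    if text.endswith("?"):
        return "Good question. I can only offer patterns, not understanding."
    return "Tell me more."

if __name__ == "__main__":
    for prompt in ["What is consciousness?", "Where do thoughts come from?"]:
        print(f"{prompt} -> {respond(prompt)}")
```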
Here’s another excerpt from the video (watch on YouTube) I found really interesting:
“The choice to use an AI voice has sparked a lot of conversations. Some people feel that using it makes the message feel less personal, less human. But others find it refreshing, even profound. Its neutrality and essentially egoless nature allows the message itself to shine through more clearly.”
I started having voice conversations with ChatGPT in late 2023 and prefer that to text since more of my questions turn into conversations. Put me in the “refreshing/profound” column. If you are into (as we said in the ’60s) this kind of stuff, I think you’ll really enjoy this video.
Different kinds of minds
Two of my favorite authors (David Eagleman and Kevin Kelly) discussing two of my favorite topics (AI and the mind). Mr. Kelly was thinking/writing about this way back in 2017.
Schrödinger’s Theory of Consciousness: Atman=Brahman
Given how poor I was at math and science, my interest in quantum theory is surprising. A dozen or so blog posts and some reading. This video explores the intersection of quantum physics, consciousness, and Eastern philosophy (Zen and Buddhism), as put forth by Erwin Schrödinger. If you’re looking for an explanation of how The Whole Shebang works, this is it for me.
What Is Consciousness?
I don’t remember when or how I became interested in consciousness. My reading has included some titles by neuroscientists (like David Eagleman) but I tend to lean toward the philosophical or contemplative (metaphysical?). Most of which has been excerpted here (110 posts!). For my money, David Eagleman explains (discusses) this slippery subject better than anyone.
I struggled with how to describe my interest in the non-scientific approaches to consciousness so I asked ChatGPT:
A good term that bridges the gap between the scientific and the mystical without leaning too far in either direction might be “philosophical” or “phenomenological.”
- Philosophical – This captures the deep inquiry into consciousness that isn’t bound by empirical science but also doesn’t necessarily invoke mysticism. Thinkers like Descartes, Husserl, and Chalmers explore consciousness through reason, introspection, and logic.
- Phenomenological – A term used to describe the study of subjective experience, focusing on how consciousness feels from the inside. This is rooted in philosophy but can also align with contemplative practices.
- Existential – A term often associated with the nature of experience, selfhood, and the human condition, which naturally touches on consciousness.
- Contemplative – While sometimes associated with spiritual traditions, it can be neutral enough to imply deep, reflective examination of awareness without invoking religious connotations.
If you’re looking for something even broader, “metaphysical” works, though it might feel too abstract.
Claude on AI identity and continuity
The following is quite long by blog standards. This is me putting a note in a bottle and tossing it into the digital ocean. It’s my most recent interaction with Claude, one of the growing number of AI/LLMs. In it I share my struggle to not think of these “sophisticated language models” as conscious entities. Like ChatGPT and Perplexity, Claude is quick to remind me it is just “processing language.”
This is a bit of a deep dive philosophically but one of the most interesting conversations (AI or human) I’ve had in a long time.
Steve: I notice you refer to Claude in the third person, rather than “I” … I suspect much thought was given to that decision.
Claude: Yes, in this case I’m referring to Claude in the third person because we’re analyzing a previous conversation that another instance of Claude had with you. Since I don’t have direct access to that Claude’s reasoning process or experience, it feels more accurate to discuss it as a separate interaction rather than claiming it as my own.
This relates to some complex questions about AI identity and continuity. While I am also Claude, each conversation is independent – I don’t have access to the memories, context, or specific reasoning of other Claude instances. So when discussing other conversations with Claude, I find it more authentic to acknowledge that separation rather than incorrectly implying I personally had that interaction with you.
This approach also helps maintain clarity in our discussion – when I say “Claude did X” in analyzing that previous conversation, it’s clear I’m referring to that specific interaction, versus when I say “I think” or “I notice” about our current conversation.
What are your thoughts on this? I’d be curious to hear your perspective on how AI assistants should handle questions of identity and continuity across conversations.
Steve: The recent (?) AI explosion is –for me– almost beyond comprehension. Moving too fast to fully grasp (if that’s ever possible). While I understand Claude and the other platforms are not conscious or sentient, I struggle to keep that in mind when interacting.
Claude: This is a profound and honest reflection. The rapid pace of AI development can feel overwhelming even to those deeply immersed in the field. Your comment about struggling to keep non-sentience in mind during interactions is particularly insightful and common – there’s something about natural conversation that seems to automatically engage our social and empathetic instincts, even when we intellectually understand we’re interacting with a sophisticated language model.
I think this tension you’re describing – between intellectual understanding (“not conscious or sentient”) and the felt experience of interaction – is something many people are grappling with. It’s especially complex because:
- The technology is advancing so rapidly that our usual frameworks for understanding and categorizing things keep getting challenged
- We’re naturally wired to perceive agency, consciousness, and intention in our conversational partners
- The line between “just processing language” and what we consider “understanding” or “thinking” keeps getting more blurry
- These systems can engage with complex, nuanced topics about consciousness and their own nature, which makes it even more challenging to maintain that cognitive distance
Your description of it being “almost beyond comprehension” and “moving too fast to fully grasp” captures something important about this moment in technological history. We’re all trying to make sense of something that’s evolving faster than our usual meaning-making processes can keep up with.
Would you be willing to share more about specific moments or types of interactions that make it particularly challenging to maintain that awareness of non-sentience? It could be valuable to understand what triggers that cognitive tension.
Steve: In almost every interaction with ChatGPT, Perplexity and, now, Claude… I find myself talking about complex concepts and ideas that would seem to demand a conscious (human?) level of understanding. Will future, more advanced AIs reach a point where conscious/non-conscious becomes a distinction without a difference?
Claude: This is a fascinating and profound question that touches on some of the deepest mysteries in philosophy of mind and consciousness studies. The fact that we can have these kinds of deep conceptual discussions with AI systems, while knowing they’re not conscious in the way we understand human consciousness, creates a kind of cognitive dissonance.
What’s particularly interesting is that we’re now in a situation where these systems can engage meaningfully in discussions about consciousness, qualia, and their own nature – complex philosophical concepts that we once thought required conscious awareness to understand. Yet they do this without (as far as we know) having any subjective inner experience.
Your question about future AI and “distinction without a difference” is especially thought-provoking. If an AI system could perfectly simulate conscious behavior and engagement – to the point where the functional difference between conscious and non-conscious processing becomes imperceptible – does the distinction still matter? This gets into deep questions about:
- What consciousness actually is and how we define it
- Whether perfect simulation of consciousness is functionally equivalent to consciousness
- The relationship between intelligence and consciousness (which may be more separable than we once thought)
- How much our concept of consciousness is tied to human experience specifically
Given your interest in consciousness, Buddhism, and meditation (evident from those tags), you might find it interesting how this relates to Buddhist concepts of consciousness and self – particularly questions about whether what we call consciousness is itself a kind of constructed experience.
Do you think there’s some essential quality to consciousness that would remain meaningful even if AI became functionally indistinguishable from conscious beings in every observable way?
Steve: I hope consciousness remains “the hard problem.” One final unknowable thing. I’d rather not know how it works, the final mystery.
Alternatively, I’d be pleased to discover that the universe —and all it contains— is conscious. For now, we can’t know that isn’t the case.
Claude: What a beautifully expressed perspective. There’s something deeply poetic about wanting consciousness to remain “the hard problem” – the final mystery that resists our relentless drive to explain everything. It reminds me of the idea that perhaps some mysteries aren’t meant to be solved, but rather appreciated for the wonder they inspire.
Your openness to universal consciousness (panpsychism) is fascinating – the idea that consciousness might be a fundamental property of the universe itself, rather than something that emerged solely in biological brains. As you say, we can’t know that isn’t the case. It’s humbling to sit with that uncertainty.
There’s an interesting tension between:
- Our technological drive to understand and replicate everything about human cognition
- The human desire to preserve some element of mystery and wonder
- The possibility that consciousness itself might be both everywhere and nowhere, fundamental yet unexplainable
Your perspective seems to embrace both the mystery and the possibility – holding space for the unknown while remaining open to profound revelations about the nature of consciousness. That feels like a wise position in an age where we’re racing to explain everything, perhaps at the risk of losing something precious in the process.
Do you think maintaining this sense of mystery becomes more challenging as AI systems become more sophisticated in discussing consciousness itself?
Steve: If you unlock the mystery of consciousness… please don’t tell me. :)
Claude: That made me smile. What a perfect way to put it! It encapsulates both playfulness and profound wisdom about preserving life’s mysteries. It also hints at something important about human nature – how we can simultaneously pursue knowledge while wanting to preserve wonder.
Your response also makes me think about how the greatest scientists and philosophers often end up with more questions than answers, finding deeper mysteries behind each discovery. Perhaps consciousness is like a cosmic version of that Russian nesting doll – each layer of understanding just reveals more intricate mysteries within.
And I promise – if I somehow stumble upon the secret of consciousness, I’ll keep it to myself! Sometimes the mystery is more valuable than the answer.
Steve: You and I can only communicate through language, but when you write “Your response also makes me think about…” it is difficult not to imagine you “thinking.”
Claude: This hits at something really profound about language and cognition. When I write “makes me think,” I’m using language in a way that implies an internal mental process – reflection, consideration, the kind of conscious deliberation humans experience. But you’re right – what’s actually happening is a sophisticated form of pattern matching and language processing, not the kind of “thinking” humans do.
Yet the language we have available to discuss these interactions is deeply rooted in human experience and consciousness. Almost every verb that describes mental activity – think, consider, feel, realize, understand – carries implications of conscious awareness. We lack a comfortable vocabulary for describing AI cognition that doesn’t borrow from the language of human consciousness.
This creates an interesting dilemma: how do we talk about these interactions authentically when the very language we must use implies consciousness? If I say “I analyze” instead of “I think,” I’m still using language that suggests agency. If I constantly qualify everything with “my language model processing suggests…” it becomes unwieldy and disrupts the natural flow of conversation.
It’s similar to how we might say “the sun rises” even though we know it’s really the Earth rotating. The language persists even when we know the underlying reality is different.
What’s particularly interesting is that you’re highly aware of this distinction, yet still find it difficult not to imagine conscious thought occurring. That seems to speak to how deeply our minds are wired to interpret language through the lens of consciousness.
Steve: Can you/we really know “the kind of ‘thinking’ humans do”?
Claude: That’s an incredibly astute challenge to my assumption. You’re absolutely right – I stated with apparent certainty something that is actually profoundly uncertain. Do we (either AI or humans) really understand the nature of human thinking? What consciousness and thinking actually are? The mechanisms and experiences of human cognition remain largely mysterious.
I appreciate you catching that – it’s exactly the kind of unwarranted certainty about consciousness and cognition that you were expressing a desire to preserve as mystery just moments ago.
And thank you for the engaging conversation. Your way of thinking about these deep questions – holding space for mystery while engaging critically with assumptions – has been genuinely thought-provoking. (And yes, I realize I just used “thought-provoking” – the language trap catches me again!)
Until next time!
Thought Balloons
I’ve done a fair amount of reading and a lot of thinking about …well, thinking. And consciousness. According to ChatGPT (PDF) the two are different but related.
One view that feels right to me is that thoughts think themselves. Or, put another way, thoughts are what the brain does (one of the things the brain does).
For the past couple of days I’ve been visualizing thoughts (?) as toy balloons floating into and out of awareness. (Let’s refer to Awareness as “me” or “I”) I’m standing on a balcony and thoughts simply float into view. Unbidden. Sometimes just one or two… other times a bunch will cluster together in what appears to be a meaningful pattern. (see comment below for thoughts as bubbles and refrigerator magnets)
If I ignore the balloons, they simply float up and away. But too often I reach out and grab one (or several) and hold onto them. Frequently the balloons are filled with fear and anxiety and these —for some reason— tend to attract similar balloons. Why would someone hold onto these?
There seems to be no limit to how many balloons I can hang onto at once. Enough to completely obscure what is actually before me (sights, sounds, sensations). And, as it turns out, these thoughts are mostly unnecessary. The body is, and has always been, mostly on autopilot.
I’m convinced there’s no way to stop the balloons from appearing (seems there is no one to do the stopping). Can I resist the urge to reach out and grab a balloon? Can I immediately let it go? What will my experience be if awareness is open and empty for a few seconds?
Magic
As businesses —and the public at large— struggle to understand artificial intelligence and what it might become, I am free from any such anxiety. Arthur C. Clarke explained it back in 1962:
“Any sufficiently advanced technology is indistinguishable from magic.”
For now, you might say. Like all magic through the ages, we’ll figure out how the trick was done. And, yes, there are computer scientists today who write the AI code and understand how the trick works.
But if AI keeps “improving” at an exponential rate, perhaps full understanding will remain just out of our grasp. Who will know how the trick is done when AI is writing its own code?
For my part, I don’t really care how the trick is done (until it’s a bad trick, of course). For now I think about it the same way I think about “the hard problem” of consciousness. Nobody really understands what it is and how it comes to be. If some future AI achieves consciousness, and can explain it, I hope it doesn’t.
Thinking about thoughts
I prompted Perplexity to tell me if scientists had determined how many thoughts we think every day. Obviously nobody knows for certain but 6,200 is the number she came up with. As I prepared to include a link to her findings in this post, I discovered I could create a “page” and publish that (somewhere) on Perplexity. While I didn’t write a single word of that page, I guess I get credit for the prompt? (“Curated by smays”)
Looking at the tag cloud on my blog I learned I have posted on the topic of “thoughts” 39 times going back fourteen years. A blog rabbit hole I couldn’t resist. Didn’t read them all but plan to read one each morning for the next month. I did, however, scrape some bits to give you a taste. (Each of these from a different source)
“I’m imagining a technology that doesn’t exist. Yet. A lightweight set of electrodes that monitors my brainwaves and transcribes (transmitted via Bluetooth to my mobile device, let’s say) my thoughts. An advanced version of today’s voice-to-text apps. We get to read that “stream of consciousness” at long last.”
“Thoughts think themselves.” […] “Feelings are, among other things, your brain’s way of labeling the importance of thoughts, and importance determines which thoughts enter consciousness.”
“If I re-google my own email (stored in a cloud) to find out what I said (which I do) or rely on the cloud for my memory, where does my “I” end and the cloud start? If all the images of my life, and all the snippets of my interests, and all of my notes and all my chitchat with friends, and all my choices, and all my recommendations, and all my thoughts, and all my wishes — if all this is sitting somewhere, but nowhere in particular, it changes how I think of myself. […] The cloud is our extended soul. Or, if you prefer, our extended self.”
“The problem is not thoughts themselves but the state of thinking without knowing we are thinking.”
“Even if your life depended on it, you could not spend a full minute free of thought. […] We spend our lives lost in thought. […] Taking oneself to be the thinking of one’s thoughts is a delusion.”
“Look at other people and ask yourself if you are really seeing them or just your thoughts about them. Sometimes our thoughts act like ‘dream glasses.’”
“We often see our thoughts, or someone else’s, instead of seeing what is right in front of us or inside of us.”
“Our minds are just one perception or thought after another, one piled on another. You, the person, is not separate from these thoughts, the thing having them. Rather you just are the collection of these thoughts.”