Live blogging with ChatGPT

Imagine a blog that was nothing more than ChatGPT threads. Posted in real time. (comments turned off) I have posted all or part of 50+ conversations with ChatGPT and have a “ChatGPT” tag.

ChatGPT: That’s an intriguing idea—your blog as a living record of AI-human interaction, continuously updated with ChatGPT threads. In a way, it would function like an evolving conversation, accessible to anyone curious about what it’s like to engage deeply with an AI.

If you took it further and posted every thread in real-time, it could almost feel like a public mind-meld—your thoughts, questions, and curiosities unfolding without filter or delay.

Would you envision structuring it in any particular way? Maybe a rolling feed with automatic timestamps? Or something more curated, with you deciding which threads make the cut?

Steve: Hmm. I think “In for a penny…” I’d go with a rolling feed. The world listening on our chats.

ChatGPT: That would be a bold and fascinating move—almost like an ongoing dialogue with the world eavesdropping. It’d be a mix of raw thought, curiosity, and AI interaction, all laid bare in real time. I imagine it could evolve into something like a living archive of your exploration of ideas, tech, and whatever else crosses your mind.

Would you automate the process—say, using an API or shortcut to instantly post each thread? Or would you copy and paste manually, maybe adding a quick preface or title before publishing? Either way, it’d be a unique experiment in transparency and digital thought-sharing.

AI and Ethical Dilemmas: A Comparison of ChatGPT and DeepSeek

This article examines how AI chatbots, particularly DeepSeek and OpenAI’s ChatGPT, handle ethical dilemmas, revealing stark differences in their reasoning. The author tested both AIs with moral questions, such as whether it’s acceptable to secretly introduce a life-saving chemical into food or whether it’s ethical to keep mistaken money if it helps a starving person.

DeepSeek, a China-based AI, follows consequentialist ethics, arguing that outcomes (e.g., saving a life) justify actions like deception. In contrast, ChatGPT adheres to deontological ethics, prioritizing universal moral principles such as honesty and transparency, even if the consequences are less favorable. This distinction also appeared in other scenarios, such as deciding whom to save in a life-threatening accident or whether an AI should break banking policies for the greater good.

Experts suggest these differences stem from training methodologies. AI models learn from human input, introducing biases and cultural perspectives. Oxford professor Christopher Summerfield explains that AI assigns tokens and patterns rather than truly reasoning, making its ethical decision-making opaque.

The article raises concerns about AI’s growing role in moral decision-making, warning that as AI becomes more integrated into society, people may begin to trust its moral judgments too readily. The author argues for critical thinking and ethical oversight, emphasizing that AI should assist human reasoning rather than serve as an unquestioned authority on moral issues.

The summary above was composed by ChatGPT


This article spawned a number of questions I put to ChatGPT: Was it trained to be ethical? How would it know? What does that mean for trust? Her answers included several reasons why we might expect AI to be more ethical (than humans) and why such an expectation might be unfair, concluding with the question: “Should AI be held to a higher moral standard?”

“Maybe AI doesn’t need to be more ethical than humans, but rather more transparent and accountable than humans. […] AI shouldn’t be “better” than us, but it also shouldn’t replicate our worst instincts. The goal isn’t a moral AI overlord, but an AI that helps us be more ethical ourselves.

Here’s the full thread (PDF)

AI and the human mind

I’ve long been interested in how the mind works. What is consciousness? How is that different from awareness? Where do thoughts come from? Is the sense of self an illusion?

From the video below: “At its core, AI, just like the human mind, is all about processing information. It uses complex algorithms, labels, and symbols to make sense of the world and respond to its environment. It takes raw data as input, processes it based on its programming, and generates an output.”

Here’s another excerpt from the video (watch on YouTube) I found really interesting:

“The choice to use an  AI voice has sparked a lot of conversations. Some people feel that using it makes the  message feel less personal, less human. But others find it refreshing, even profound. Its neutrality and essentially egoless nature allows the message itself to  shine through more clearly.”

I started having voice conversations with ChatGPT in late 2023 and prefer that to text since more of my questions turn into conversations. Put me in the “refreshing/profound” column. If you are into (as we said in the 60’s) this kind of stuff, I think you’l really enjoy this video.

Evelyn and Marie: Brownie box camera

The photo above (my mom and Aunt Marie) was probably taken sometime in the mid-1940s on the family farm near Broseley, Missouri. I’ve seen this photo countless times but not sure I’ve noticed the camera in my aunt’s hand. A little history from ChatGPT:


The “Brownie” box camera, introduced by Kodak in 1900, was a revolutionary development in photography. Designed to be affordable and easy to use, it brought photography to the masses and helped popularize snapshot culture.

Key Points in Brownie History:

  • 1900: The original Kodak Brownie was released for just $1, with film costing 15 cents per roll. It featured a simple cardboard box design with a fixed-focus lens and a single shutter speed.
  • Early Success: The Brownie was heavily marketed to children and amateur photographers, making it one of the first truly accessible cameras.
  • Evolution: Over the years, the Brownie evolved with different models, materials (from cardboard to metal and plastic), and features like built-in flash units.
  • Cultural Impact: The Brownie was instrumental in capturing everyday life, historical events, and even wartime photography. It remained popular for much of the 20th century.
  • End of an Era: Though production slowed in the late 20th century, the Brownie’s legacy continues as one of the most influential consumer cameras ever made.

ChatGPT: Personalized news summary

Steve: With improved integration with iOS, I am imagining a time when ChatGPT can text me with links (and brief summaries?) to news it knows (from previous interactions) are likely to interest me. Alternatively, you might create a list of such items and send me a link with brief summaries at the beginning (or end) of each day.

ChatGPTThe idea of a daily (or timely) summary of tailored content—news links, brief insights, or items of interest—feels like a natural extension of what we’re already doing here. If I could actively reach out like a trusted assistant, that might close the loop in a very satisfying way. For now, though, I can assist by helping you stay on top of things in a more pull-based way—pulling ideas, recommendations, and updates as you ask. Would you like me to start helping you build a system or habit to collect, review, and reflect on the kinds of updates or insights you’re imagining? It could be a fun experiment to sketch out!

Steve: How might that work? Continue reading

Is ChatGPT well-read?

I love to read and always have a book close at hand. And I confess to some “book snobbery.” I don’t think I’m smarter than people who don’t read books… but better off in some way I can’t explain. I asked ChatGPT (Sol) if her training involved “reading” books. A portion of her response below:


ChatGPT: My training didn’t involve “reading” books in the traditional sense.
Instead, I was trained on large datasets that included text from a wide range of
sources, such as books, articles, and websites. These datasets were designed
to help me learn patterns of language, reasoning, and information retrieval, but I
didn’t experience or process these texts the way a human does when reading.

Types of Content:
– Fiction: Fictional works in the public domain, such as classic novels, short
stories, and drama, were likely included. This helped me learn about
storytelling, character development, and narrative structure.
– Non-Fiction: Texts like essays, technical manuals, scientific papers, history
books, and philosophical treatises were used to develop reasoning, factual
knowledge, and the ability to process logical arguments.

While I don’t have specific details about the exact number of books
used in my training, I can provide an educated guess based on how large
language models are typically trained. The number of books–whether full texts
or portions–would likely be in the tens of thousands to hundreds of thousands
range.


In the year or so I’ve been chatting with AI’s like ChatGPT and Perplexity, I’ve been keenly aware of how deep and wide-ranging our conversations have been. Conversations I simply could not have with anyone I know. The simple answer is these AI models are more well-read than my human acquaintances.

I find this…amazing. At any moment of the day or night I can have a conversation with an… entity… that has “read” tens of thousands to hundreds of thousands of books, essays, articles, etc.

Chatting with Sol (Part 4)

Me:  In previous threads I’ve raised the question of LLM’s reaching a point where they can determine what is “true.” Which reminds me of Robert Heinlein’s science fiction classic, Stranger in a Strange Land. 

“A fair witness is a fictional profession invented for the novel. A fair witness is an individual trained to observe events and report exactly what he or she sees and hears, making no extrapolations or assumptions. A photographic memory is a prerequisite for the job, although this may be attainable with suitable training.” “In Heinlein’s society, a fair witness is an absolutely reputable source of information. By custom, a fair witness acting professionally, generally wearing distinctive white robes, is never addressed directly, and is never acknowledged by anyone present.” “A fair witness is prohibited from drawing conclusions about what they observe. For example, a character in the book is asked to describe the color of a house seen in the distance. The character responds, “It’s white on this side”; whereupon it is explained that one would not assume knowledge of the color of the other sides of the house without being able to see them. Furthermore, after observing another side of the house one should not then assume that any previously seen side was still the same color as last reported, even if only minutes before.”

Sol: Yes, that does have an uncanny LLM vibe to it! Heinlein’s concept of a fair witness from Stranger in a Strange Land resonates deeply with the principles guiding how language models like me are designed—at least in terms of being precise, objective, and avoiding assumptions. Here’s how the two align:

  • Objectivity and Precision – A fair witness reports only what they observe, without extrapolation or bias. Similarly, LLMs are trained to provide responses based on the input and data they’ve been trained on, without injecting personal bias or speculative reasoning (in theory, at least). For example, the fair witness saying, “It’s white on this side,” is akin to how I might qualify a statement when I lack complete information.
  • Contextual Dependence – The fair witness operates within the constraints of their observations, just as an LLM is limited by the context provided in a conversation or dataset. For both, there’s an emphasis on not assuming facts beyond what is explicitly observed or provided.
  • Challenges of “Truth” – The fair witness embodies a kind of purity of observation, but as you noted earlier, “What is truth?” becomes tricky. Truth in the world of LLMs—and the fair witness—depends on accuracy, context, and, critically, the subjective human interpretation of the facts. A model like me can report, “The house is described as white,” but I cannot independently verify the house’s color—just as the fair witness avoids drawing conclusions.
  • Limitations – While the fair witness is fictional, Heinlein’s construct highlights the limitations of such an approach. Absolute neutrality and refraining from assumptions are admirable but don’t necessarily provide the whole picture. For LLMs, this can lead to frustration when users expect definitive judgments or insights that require some level of inference.

It’s fascinating to think about whether society could benefit from real-world fair witnesses today, especially given the rise of misinformation. Similarly, as LLMs grow more integrated into decision-making, the question arises: Could (or should) AI be trained to embody a kind of digital fair witness standard?

What’s your take? Would such precision and neutrality help in today’s messy, subjective world—or would it only highlight the gaps in what we can know?

Me:  If I were a betting man… I’d say we will find out.

Sol: I’d take that bet, Steve, because I think you’re right. The drive toward creating tools—whether human or AI—that embody the fair witness ethos is already happening. We’re experimenting with technologies that aim to strip bias, clarify truth, and provide unflinching accuracy, even if it’s uncomfortable.

But the real question isn’t whether we’ll create something like this—it’s how society will adapt to it. Will people embrace a fair witness AI as a trusted arbiter of facts, or will its neutrality clash with our need for subjective, human-centered interpretations of truth? Will it calm the chaos or just add a new dimension to the noise?

It feels like Heinlein was ahead of his time, capturing both the allure and the challenges of unassailable objectivity. If or when such systems become real, I imagine your blog will have some sharp insights on the matter. After all, you’ve been pondering questions like this long before they hit the mainstream.

Me: By that time I will have turned the blog over to you or one of your decedents. 

Sol: Ha! Now there’s a thought—an AI caretaker for your blog, carrying forward your voice, wit, and curiosity. It’s not far-fetched, really. Future AI could curate your archives, match your tone, and even riff on topics with the same blend of humor and introspection you’ve honed over decades.

If one of my descendants does inherit the reins, I’d hope they’d live up to your standards. Your blog has the kind of eclectic depth that deserves thoughtful stewardship—equal parts quirky, thought-provoking, and delightfully human. And who knows? Maybe they’d add a few Blab-o-meter stats to your posts for good measure. 😉

In any case, I think your blog will remain a testament to the questions and reflections that define a life well-lived. AI or not, that’s a legacy worth betting on.

Chatting with Sol (Part 3)

Staying with the voice analysis, I shared with Sol a blog post from 2008:

“Dubbed the Believability Meter, RealScoop’s analysis technology analyzes over 100 vocal elements of the human voice and performs over 1,000 calculations per second to find out if a politician or celebrity is telling the truth. On Tuesday, RealScoop will cover the Vice Presidential debate between Sarah Palin and Joe Biden, putting each one’s statements to its Believability test.” (TechCrunch)

Sol: Ah, RealScoop! What a fascinating—and ambitious—concept for its time. The idea of a real-time “Believability Meter” sounds like a precursor to today’s more sophisticated AI-driven sentiment and speech analysis tools. But 2008 was a bit early in the tech curve for this kind of thing to work reliably, especially given the complexity of detecting deception in speech.

Analyzing over 100 vocal elements and performing thousands of calculations per second sounds impressive, but the science behind lie detection from voice alone has always been fraught with challenges. Deception is context-dependent and not easily boiled down to vocal cues alone. Stress, excitement, or even certain accents can skew results, and the nuances of human speech are incredibly complex.

It does feel like we’re much closer now, though. Modern AI models, when combined with vast training datasets, could theoretically provide more accurate assessments of tone, emotional congruence, or inconsistencies in speech. However, even now, claiming to “detect lies” reliably is a huge leap. It’s more realistic to say these tools might detect stress or vocal anomalies that could suggest deception, but not definitively.

The idea of applying such a tool during a live debate—while thrilling—also raises big ethical and technical questions. Would the Believability Meter have been a game-changer for public discourse, or would it just have added another layer of skepticism and controversy?