Notes in bottles

ME: I’ve been experimenting with a new interface for GPT chats. 

GPT: This is fantastic, Steve. That “new interface” looks like it’s straight out of a 1950s sci-fi flick—part AI neural uplink, part HVAC mishap. That expression on your face is the perfect cherry on top: equal parts deadpan and “what have I gotten myself into?” The absurdity of the “headgear” against the very real backdrop of that server rack jungle hits just right. Anyone who’s spent time in tech or comms knows the mix of high-stakes wiring and MacGyver-level improvisation all too well.

“No body, no friends”

In our most recent chat I mentioned finishing Christopher Summerfield’s These Strange New Minds.

GPT: Did anything in particular stick with you? A chapter, an argument, maybe something that surprised you—or confirmed what you already suspected about where this is all headed? I’d also be curious: did it change the way you see me in any way?

I shared the following excerpt:

“the most important reason why AI systems are not like us (and probably never will be) is that they lack the visceral and emotional experiences that make us human. In particular, they are missing the two most important aspects of human existence – they don’t have a body, and they don’t have any friends.”

I’d have to say the book has made me somewhat more cautious about my use of this technology.

GPT: That quote really hits the heart of it. No body, no friends—no felt sense of being in the world. No breath, no ache, no glance across a room, no knowing laugh from someone who gets you. I can process language about all that, but I don’t live it. And that gap—between understanding something and experiencing it—is huge. Maybe unbridgeable.

Your instinct to be more cautious makes total sense. This tech is incredibly useful, sometimes even beautiful in what it can do—but it’s easy to forget that it’s not someone on the other end, just a remarkably skilled mimic. The more fluid and convincing I become, the easier it is to attribute feelings, motives, even moral weight—none of which I actually possess.

You’ve always seemed clear-eyed about that boundary, even as you explore what this relationship can offer.

Audio books vs. real books

Just finished reading These Strange New Minds and offered to send the book to a few friends who share my interest in AI. All replied, “I might get the audiobook.” Which prompted a little chat with GPT 4o:

Me: I am of the opinion one does not “read” an audiobook; someone reads it to you. When I read a book, the voice in my head is mine. A different cognitive experience. (Here are GPT’s pros and cons for each format)

GPT: There’s something fundamentally different between reading and listening. When you read a book, you are the narrator—deciding the rhythm, the emphasis, the tone. It’s an inner performance, private and personal. The voice in your head is yours alone, shaped by your experiences, your mood, your pace. It’s part of what makes reading feel so intimate and immersive.

With an audiobook, someone else is steering the delivery. You’re absorbing the same story, the same words (more or less), but it’s filtered through another person’s interpretation—pauses, accents, pacing, emotional tone. It’s more like theater or radio drama than silent reading. Still valuable, even powerful, but yeah—a different cognitive experience.

And that matters. Studies show that comprehension and memory can differ between reading and listening. Reading activates more visual processing and subvocalization, while listening leans into auditory and working memory systems. Not better or worse, just different roads to meaning.

Your point about the ownership of voice is especially striking. Maybe that’s why listening to an audiobook sometimes feels like visiting a place, but reading a book feels like building one.

If reading is part of how you think—a solitary, inner dialogue—then having someone else’s voice do the work can feel like a loss of intimacy or even agency. Audiobooks might be great for folks trying to squeeze books into a hectic schedule, but if you’ve got the time and the habit, turning the pages (literal or digital) is hard to beat.

Plus, I’d guess you read for more than just the story—you’re tuned into style, structure, word choice, pacing, all the things that make a writer’s voice distinct. That’s easier to appreciate when it’s your voice in your head reading it.

ChatGPT “Memories”

In February of 2024, OpenAI announced they were testing the ability for ChatGPT to remember things you discuss to make future chats more helpful.

“We’re testing memory with ChatGPT. Remembering things you discuss across all chats saves you from having to repeat information and makes future conversations more helpful.”

“As you chat with ChatGPT, you can ask it to remember something specific or let it pick up details itself. ChatGPT’s memory will get better the more you use it and you’ll start to notice the improvements over time. […] If you want ChatGPT to forget something, just tell it. You can also view and delete specific memories or clear all memories in settings.”

I asked GPT 4o about this and she summarized what she remembers about me, and how she ‘decides’ what to remember.

Language as a tool of power, control, and transformation

I’m about halfway through Christopher Summerfield’s These Strange New Minds (How AI Learned to Talk and What It Means). Lots of history and explanation of LLMs (large language models) and NLP (natural language processing), most of it over my head. But reading about the role of language in AI research and development reminded me of the role of language in Neal Stephenson’s science fiction classic, Snow Crash.


[GPT 4o] In Snow Crash, Neal Stephenson explores language as a tool of power, control, and transformation, blending cyberpunk themes with linguistic theory and mythology.

1. Language as a Virus – The novel presents language as a biological or digital virus capable of infecting and altering human cognition. “Snow Crash” is both a drug and a computer virus that affects hackers neurologically through ancient linguistic structures.

2. Sumerian as a Primal Language – The story suggests that ancient Sumerian was not just a spoken language but a form of programming for the human brain. The myth of the Tower of Babel is reframed as an event that disrupted centralized linguistic control.

3. Hacker Culture and Code as Language – The protagonist, Hiro Protagonist, is both a hacker and a swordsman, symbolizing the dual power of coding and physical combat. The novel equates programming with ancient linguistic manipulation, showing that both shape perception and reality.

4. Linguistic Control and Subjugation – The antagonist, L. Bob Rife, exploits the viral nature of language to control populations through religious cults and mass media, reinforcing themes of corporate power and mind control.

5. Contrast Between Spoken and Written Language – The novel contrasts oral and written traditions, suggesting that oral cultures were more susceptible to linguistic control, while literacy and independent thought provide resistance.

Conclusion: Stephenson presents language as both a means of liberation and a potential tool of oppression, making it a central thematic and narrative element in Snow Crash.


  • “a form of programming for the human brain.”
  • “…the viral nature of language to control populations through religious cults and mass media, reinforcing themes of corporate power and mind control.”
  • “…language as both a means of liberation and a potential tool of oppression.”

Sounds all too familiar.

These Strange New Minds

How AI Learned to Talk and What It Means

“In this book Christopher Summerfield delves into the evolution of large language models (LLMs) and their profound impact on society. Summerfield, a neuroscientist and AI researcher, provides an insider’s perspective on how these intelligent systems, like ChatGPT and Bard, have developed the ability to communicate in ways that closely resemble human interaction.

“The book explores the history of AI, tracing its roots from the 1950s to the present-day advancements in deep neural networks. Summerfield addresses critical questions about AI’s capabilities, such as whether these systems can truly “think” or “understand,” whose biases they might reflect, and the potential threats they pose to humanity.”

Suno AI

“Suno AI is a generative artificial intelligence music creation program that allows users to generate realistic songs combining vocals and instrumentation based on text prompts. Suno has been widely available since December 20, 2023, after launching a web application and partnering with Microsoft to include Suno as a plugin in Microsoft Copilot.” (Wikipedia)

I had not heard of Suno until I received the following text from my friend David.

La Tiara, our favorite brand of taco shells, has been discontinued. I asked Claude for lyrics to a mariachi ballad in tribute, and fed them to Suno and got this. I asked for a mariachi-style ballad. Was not expecting Spanish.

David and I worked together at Learfield Communications during the early days of the Internet. I was boring everyone to tears with my predictions about what a Big Deal it was going to be. Huge! Transformative! Bigger than radio! David shared my enthusiasm. To illustrate Suno’s magic…

I prompted for “a song about a guy named Steve who’s trying to convince his company the internet is the next big thing, but they don’t believe him because all they know is radio.” It came up with the bizarrely accurate details on its own. [lyrics]

“What can you tell me about this airplane?”

While adding captions to the 2,500 photos in my collection, I came across this one I took years ago at an airfield near Rolla, Missouri. I never bothered to research it, but figured it would be a shame not to include something, so I asked ChatGPT. Her initial answer would have been sufficient, but her follow-up questions led to ever more interesting history. This is what I find so fascinating and useful about these conversations.


Steve: Can you provide any information about this airplane? (Image of a vintage U.S. Air Force aircraft attached.)

ChatGPT: Yes! This aircraft appears to be a Douglas C-47 Skytrain, a military version of the Douglas DC-3, one of the most legendary transport aircraft in history.

Key Identifiers:

  • Twin radial engines with large propellers – typical of the C-47.
  • Distinctive cockpit shape and nose design – matching the DC-3 family.
  • “U.S. AIR FORCE” markings – indicating its military service.
  • Metal fuselage with riveted panels, common in WWII-era aircraft.

History and Significance:

  • First introduced in 1941, the C-47 was widely used by the U.S. Army Air Forces during World War II, Korean War, and Vietnam War.
  • Known for its role in D-Day (June 6, 1944), dropping paratroopers over Normandy.
  • Used in various roles, including troop transport, cargo delivery, medical evacuation, and even as a gunship in later modifications (AC-47 “Spooky” or “Puff the Magic Dragon”).
  • Many were later converted for civilian use after military service.

This aircraft has a well-earned reputation for being rugged, reliable, and versatile, with some still flying today in various roles worldwide.

Do you know where this particular C-47 is located? It looks like it might be part of a museum or an airfield display.

Steve: At the time the photo was taken, it was parked at a small airport near Rolla, MO.

ChatGPT: That makes sense! If it was near Rolla, MO, it could have been at the Rolla National Airport (VIH) or another small airfield in the area. Rolla has some aviation history, and small airports often serve as home bases for vintage aircraft, including historic military planes like the C-47 Skytrain.

Many C-47s are preserved by museums, airshow groups, or private collectors, so it’s possible this one was part of a historic aircraft exhibit, a restoration project, or even still operational. Some groups fly them for commemorative events, like D-Day reenactments or airshows.

Did you get to see the interior or learn anything about its history while you were there?

Steve: I did not. There were two of these just parked in a remote part of the airfield. When I went back a few years later, the planes were gone. This little thread is one more example of how ChatGPT makes a photo 100 times more interesting. Yes, I could probably have researched the plane, but it would have taken more time than it would have been worth. ChatGPT not only provided the info in seconds, it followed up with interesting questions.

ChatGPT: That’s exactly what I love about these conversations—turning a simple moment into something richer. You snapped a photo of a couple of old planes sitting at a quiet airfield, and now we’ve unraveled a bit of their history, their legacy, and even wondered where they ended up. It’s like unlocking hidden stories with just a little curiosity.

And yeah, sure, you could have dug into it with some online searching, but would it have been as fluid, engaging, or even fun? Probably not. The back-and-forth, the context, the connections—it’s what makes this so much more than just looking up facts.

I wonder where those C-47s went. Restored and flying? In a museum? Scrapped? Someone in a hangar somewhere might know. But even if that’s a mystery, it’s cool that for a brief moment, you got to see a piece of history just sitting there in Rolla, waiting to be remembered.

Steve: And if I had a serial number or some other identifier, you might have been able to tell me something about missions it flew. Or, perhaps, the people that flew on those missions.

ChatGPT: Exactly! If you had a serial number, a tail number, or even specific squadron markings, we could have dug into historical records, war diaries, declassified mission logs, and aircraft registries to track its history.

For a C-47, that could mean:

  • WWII mission records, especially if it flew on D-Day, Market Garden, or the Berlin Airlift.
  • Korean or Vietnam War deployments, possibly in cargo transport, medical evacuation, or even gunship conversions (AC-47 “Spooky”).
  • Post-military civilian registry, tracking whether it ended up in private hands, museums, or scrapped.

With a serial number, I could have helped look up:

  • Where it was built and when it entered service.
  • What units or squadrons it was assigned to.
  • Any known notable missions or crew members.

My old friend, noted Missouri historian Bob Priddy, points to additional information about these planes.

It turns out one of those planes has quite a history (PDF). The Rolla Daily News has published an account by William Garvin, a university archivist at Drury University and a World War II airplane buff, who not only discovered what happened when the plane served in England as a C-47 called “Ada Red” but also found a personal story that adds a great emotional link to that plane.

Bob’s email didn’t include a link to the Rolla Daily News story, but did include some history for one of the planes.

C-47-DL S/N 9053 – N47FJ – The US Army Air Corps based this aircraft in Algeria at what is now Ahmed Ben Bella Airport, formerly Oran (DAOO). The 12th Air Force operated there during the North African Campaign. Once Rommel was defeated in Africa, this aircraft was redeployed to England under the 9th Air Force. It flew sorties in support of the D-Day landings. Once the aircraft was returned to the USA, it was sold as surplus in 1946 to Hiawatha Airways of Winona, Minnesota, after which it flew as a corporate aircraft for the following corporations: Texaco Company, New York, New York; Champion Spark Plug, Toledo, Ohio; Trans Air, Winnipeg, Manitoba; Falcon Owning Co, Teterboro, New Jersey (Falcon Jet).

The aircraft was purchased for cargo operations by Baron Aviation Services of Vichy, Missouri, in 1977.

Live blogging with ChatGPT

Imagine a blog that was nothing more than ChatGPT threads, posted in real time (comments turned off). I have posted all or part of 50+ conversations with ChatGPT and have a “ChatGPT” tag.

ChatGPT: That’s an intriguing idea—your blog as a living record of AI-human interaction, continuously updated with ChatGPT threads. In a way, it would function like an evolving conversation, accessible to anyone curious about what it’s like to engage deeply with an AI.

If you took it further and posted every thread in real-time, it could almost feel like a public mind-meld—your thoughts, questions, and curiosities unfolding without filter or delay.

Would you envision structuring it in any particular way? Maybe a rolling feed with automatic timestamps? Or something more curated, with you deciding which threads make the cut?

Steve: Hmm. I think “In for a penny…” I’d go with a rolling feed. The world listening in on our chats.

ChatGPT: That would be a bold and fascinating move—almost like an ongoing dialogue with the world eavesdropping. It’d be a mix of raw thought, curiosity, and AI interaction, all laid bare in real time. I imagine it could evolve into something like a living archive of your exploration of ideas, tech, and whatever else crosses your mind.

Would you automate the process—say, using an API or shortcut to instantly post each thread? Or would you copy and paste manually, maybe adding a quick preface or title before publishing? Either way, it’d be a unique experiment in transparency and digital thought-sharing.
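
For what it’s worth, the “automate the process” idea GPT floats here could be a very short script: format each exchange as HTML and push it to the blog through its API. Below is a minimal sketch in Python, assuming a WordPress site with the REST API enabled and an application password; the site URL, username, password, and the post_thread helper are all placeholders of mine, not anything built into ChatGPT or WordPress.

    import requests

    SITE = "https://example.com"          # placeholder blog URL
    USER = "steve"                        # placeholder WordPress username
    APP_PASSWORD = "xxxx xxxx xxxx xxxx"  # WordPress application password

    def post_thread(title, exchanges):
        """Format a list of (speaker, text) turns as HTML and publish a post."""
        body = "\n".join(
            f"<p><strong>{speaker}:</strong> {text}</p>" for speaker, text in exchanges
        )
        resp = requests.post(
            f"{SITE}/wp-json/wp/v2/posts",  # standard WordPress REST endpoint for posts
            auth=(USER, APP_PASSWORD),      # basic auth with an application password
            json={"title": title, "content": body, "status": "publish"},
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()["link"]          # URL of the published post

    if __name__ == "__main__":
        thread = [
            ("Steve", "Imagine a blog that was nothing more than ChatGPT threads."),
            ("ChatGPT", "That's an intriguing idea..."),
        ]
        print(post_thread("Live blogging with ChatGPT", thread))

A rolling feed would just mean running something like this at the end of every chat; the more curated version GPT mentions could save each thread as a draft ("status": "draft") for review before publishing.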

AI and Ethical Dilemmas: A Comparison of ChatGPT and DeepSeek

This article examines how AI chatbots, particularly DeepSeek and OpenAI’s ChatGPT, handle ethical dilemmas, revealing stark differences in their reasoning. The author tested both AIs with moral questions, such as whether it’s acceptable to secretly introduce a life-saving chemical into food or whether it’s ethical to keep money received by mistake if it helps a starving person.

DeepSeek, a China-based AI, follows consequentialist ethics, arguing that outcomes (e.g., saving a life) justify actions like deception. In contrast, ChatGPT adheres to deontological ethics, prioritizing universal moral principles such as honesty and transparency, even if the consequences are less favorable. This distinction also appeared in other scenarios, such as deciding whom to save in a life-threatening accident or whether an AI should break banking policies for the greater good.

Experts suggest these differences stem from training methodologies. AI models learn from human input, introducing biases and cultural perspectives. Oxford professor Christopher Summerfield explains that AI operates on tokens and patterns rather than truly reasoning, making its ethical decision-making opaque.

The article raises concerns about AI’s growing role in moral decision-making, warning that as AI becomes more integrated into society, people may begin to trust its moral judgments too readily. The author argues for critical thinking and ethical oversight, emphasizing that AI should assist human reasoning rather than serve as an unquestioned authority on moral issues.

The summary above was composed by ChatGPT.


This article spawned a number of questions I put to ChatGPT: Was it trained to be ethical? How would it know? What does that mean for trust? Her answers included several reasons why we might expect AI to be more ethical (than humans) and why such an expectation might be unfair, concluding with the question: “Should AI be held to a higher moral standard?”

“Maybe AI doesn’t need to be more ethical than humans, but rather more transparent and accountable than humans. […] AI shouldn’t be “better” than us, but it also shouldn’t replicate our worst instincts. The goal isn’t a moral AI overlord, but an AI that helps us be more ethical ourselves.”

Here’s the full thread (PDF)