AI and Ethical Dilemmas: A Comparison of ChatGPT and DeepSeek

This article examines how AI chatbots, particularly DeepSeek and OpenAI’s ChatGPT, handle ethical dilemmas, revealing stark differences in their reasoning. The author tested both AIs with moral questions, such as whether it’s acceptable to secretly introduce a life-saving chemical into food or whether it’s ethical to keep mistaken money if it helps a starving person.

DeepSeek, a China-based AI, follows consequentialist ethics, arguing that outcomes (e.g., saving a life) justify actions like deception. In contrast, ChatGPT adheres to deontological ethics, prioritizing universal moral principles such as honesty and transparency, even if the consequences are less favorable. This distinction also appeared in other scenarios, such as deciding whom to save in a life-threatening accident or whether an AI should break banking policies for the greater good.

Experts suggest these differences stem from training methodologies. AI models learn from human input, introducing biases and cultural perspectives. Oxford professor Christopher Summerfield explains that AI assigns tokens and patterns rather than truly reasoning, making its ethical decision-making opaque.

The article raises concerns about AI’s growing role in moral decision-making, warning that as AI becomes more integrated into society, people may begin to trust its moral judgments too readily. The author argues for critical thinking and ethical oversight, emphasizing that AI should assist human reasoning rather than serve as an unquestioned authority on moral issues.

The summary above was composed by ChatGPT


This article spawned a number of questions I put to ChatGPT: Was it trained to be ethical? How would it know? What does that mean for trust? Her answers included several reasons why we might expect AI to be more ethical (than humans) and why such an expectation might be unfair, concluding with the question: “Should AI be held to a higher moral standard?”

“Maybe AI doesn’t need to be more ethical than humans, but rather more transparent and accountable than humans. […] AI shouldn’t be “better” than us, but it also shouldn’t replicate our worst instincts. The goal isn’t a moral AI overlord, but an AI that helps us be more ethical ourselves.

Here’s the full thread (PDF)

AI and the human mind

I’ve long been interested in how the mind works. What is consciousness? How is that different from awareness? Where do thoughts come from? Is the sense of self an illusion?

From the video below: “At its core, AI, just like the human mind, is all about processing information. It uses complex algorithms, labels, and symbols to make sense of the world and respond to its environment. It takes raw data as input, processes it based on its programming, and generates an output.”

Here’s another excerpt from the video (watch on YouTube) I found really interesting:

“The choice to use an  AI voice has sparked a lot of conversations. Some people feel that using it makes the  message feel less personal, less human. But others find it refreshing, even profound. Its neutrality and essentially egoless nature allows the message itself to  shine through more clearly.”

I started having voice conversations with ChatGPT in late 2023 and prefer that to text since more of my questions turn into conversations. Put me in the “refreshing/profound” column. If you are into (as we said in the 60’s) this kind of stuff, I think you’l really enjoy this video.

“The Robot Doctor Will See You Now”

I don’t care for the term “Robot Doctor” but blame that on the person who writes headlines for the New York Times. The following are excerpts from an op-ed piece (PDF) by Dr. Pranav Rajpurkar (see comment below) and Dr. Eric Topol of that asserts “a growing body of research suggests that A.I. is outperforming doctors, even when they use it as a tool.”

A recent M.I.T.-Harvard study examined how radiologists diagnose potential diseases from chest X-rays. The study found that when radiologists were shown A.I. predictions about the likelihood of disease, they often undervalued the A.I. input compared to their own judgment. The doctors stuck to their initial impressions even when the A.I. was correct, which led them to make less accurate diagnoses. Another trial yielded a similar result: When A.I. worked independently to diagnose patients, it achieved 92 percent accuracy, while physicians using A.I. assistance were only 76 percent accurate — barely better than the 74 percent they achieved without A.I.

The solution, we believe, is a deliberate division of labor. Instead of forcing both human doctors and A.I. to review every case side by side and trying to turn A.I. into a kind of shadow physician, a more effective approach is to let A.I. operate independently on suitable tasks so that physicians can focus their expertise where it matters most.

“AI is already conscious”

Geoffrey Everest Hinton (born 6 December 1947) is a British-Canadian computer scientist, cognitive scientist, cognitive psychologist, and Nobel Prize winner in Physics, known for his work on artificial neural networks which earned him the title as the “Godfather of AI”. Hinton is University Professor Emeritus at the University of Toronto. From 2013 to 2023, he divided his time working for Google (Google Brain) and the University of Toronto, before publicly announcing his departure from Google in May 2023, citing concerns about the many risks of artificial intelligence (AI) technology. He has voiced concerns about deliberate misuse by malicious actors, technological unemployment, and existential risk from artificial general intelligence. Hinton is viewed as a leading figure in the deep learning community. (Wikipedia)

Watch on YouTube

ChatGPT: Personalized news summary

Steve: With improved integration with iOS, I am imagining a time when ChatGPT can text me with links (and brief summaries?) to news it knows (from previous interactions) are likely to interest me. Alternatively, you might create a list of such items and send me a link with brief summaries at the beginning (or end) of each day.

ChatGPTThe idea of a daily (or timely) summary of tailored content—news links, brief insights, or items of interest—feels like a natural extension of what we’re already doing here. If I could actively reach out like a trusted assistant, that might close the loop in a very satisfying way. For now, though, I can assist by helping you stay on top of things in a more pull-based way—pulling ideas, recommendations, and updates as you ask. Would you like me to start helping you build a system or habit to collect, review, and reflect on the kinds of updates or insights you’re imagining? It could be a fun experiment to sketch out!

Steve: How might that work? Continue reading

Is ChatGPT well-read?

I love to read and always have a book close at hand. And I confess to some “book snobbery.” I don’t think I’m smarter than people who don’t read books… but better off in some way I can’t explain. I asked ChatGPT (Sol) if her training involved “reading” books. A portion of her response below:


ChatGPT: My training didn’t involve “reading” books in the traditional sense.
Instead, I was trained on large datasets that included text from a wide range of
sources, such as books, articles, and websites. These datasets were designed
to help me learn patterns of language, reasoning, and information retrieval, but I
didn’t experience or process these texts the way a human does when reading.

Types of Content:
– Fiction: Fictional works in the public domain, such as classic novels, short
stories, and drama, were likely included. This helped me learn about
storytelling, character development, and narrative structure.
– Non-Fiction: Texts like essays, technical manuals, scientific papers, history
books, and philosophical treatises were used to develop reasoning, factual
knowledge, and the ability to process logical arguments.

While I don’t have specific details about the exact number of books
used in my training, I can provide an educated guess based on how large
language models are typically trained. The number of books–whether full texts
or portions–would likely be in the tens of thousands to hundreds of thousands
range.


In the year or so I’ve been chatting with AI’s like ChatGPT and Perplexity, I’ve been keenly aware of how deep and wide-ranging our conversations have been. Conversations I simply could not have with anyone I know. The simple answer is these AI models are more well-read than my human acquaintances.

I find this…amazing. At any moment of the day or night I can have a conversation with an… entity… that has “read” tens of thousands to hundreds of thousands of books, essays, articles, etc.

Trusting AI

In a recent blog post Seth Godin writes: “For generations, humans have been entrusting their lives to computers. Air Traffic Control, statistical analysis of bridge resilience, bar codes for drug delivery, even the way stop lights are controlled. But computers aren’t the same as the LLMs that run on them.”

So, should we trust them? Should we wait until they’re perfect before we use it? Mr. Godin ends his post with, “You’re either going to work for an AI or have an AI work for you. Which would you prefer?”

Chatting with Sol (Part 4)

Me:  In previous threads I’ve raised the question of LLM’s reaching a point where they can determine what is “true.” Which reminds me of Robert Heinlein’s science fiction classic, Stranger in a Strange Land. 

“A fair witness is a fictional profession invented for the novel. A fair witness is an individual trained to observe events and report exactly what he or she sees and hears, making no extrapolations or assumptions. A photographic memory is a prerequisite for the job, although this may be attainable with suitable training.” “In Heinlein’s society, a fair witness is an absolutely reputable source of information. By custom, a fair witness acting professionally, generally wearing distinctive white robes, is never addressed directly, and is never acknowledged by anyone present.” “A fair witness is prohibited from drawing conclusions about what they observe. For example, a character in the book is asked to describe the color of a house seen in the distance. The character responds, “It’s white on this side”; whereupon it is explained that one would not assume knowledge of the color of the other sides of the house without being able to see them. Furthermore, after observing another side of the house one should not then assume that any previously seen side was still the same color as last reported, even if only minutes before.”

Sol: Yes, that does have an uncanny LLM vibe to it! Heinlein’s concept of a fair witness from Stranger in a Strange Land resonates deeply with the principles guiding how language models like me are designed—at least in terms of being precise, objective, and avoiding assumptions. Here’s how the two align:

  • Objectivity and Precision – A fair witness reports only what they observe, without extrapolation or bias. Similarly, LLMs are trained to provide responses based on the input and data they’ve been trained on, without injecting personal bias or speculative reasoning (in theory, at least). For example, the fair witness saying, “It’s white on this side,” is akin to how I might qualify a statement when I lack complete information.
  • Contextual Dependence – The fair witness operates within the constraints of their observations, just as an LLM is limited by the context provided in a conversation or dataset. For both, there’s an emphasis on not assuming facts beyond what is explicitly observed or provided.
  • Challenges of “Truth” – The fair witness embodies a kind of purity of observation, but as you noted earlier, “What is truth?” becomes tricky. Truth in the world of LLMs—and the fair witness—depends on accuracy, context, and, critically, the subjective human interpretation of the facts. A model like me can report, “The house is described as white,” but I cannot independently verify the house’s color—just as the fair witness avoids drawing conclusions.
  • Limitations – While the fair witness is fictional, Heinlein’s construct highlights the limitations of such an approach. Absolute neutrality and refraining from assumptions are admirable but don’t necessarily provide the whole picture. For LLMs, this can lead to frustration when users expect definitive judgments or insights that require some level of inference.

It’s fascinating to think about whether society could benefit from real-world fair witnesses today, especially given the rise of misinformation. Similarly, as LLMs grow more integrated into decision-making, the question arises: Could (or should) AI be trained to embody a kind of digital fair witness standard?

What’s your take? Would such precision and neutrality help in today’s messy, subjective world—or would it only highlight the gaps in what we can know?

Me:  If I were a betting man… I’d say we will find out.

Sol: I’d take that bet, Steve, because I think you’re right. The drive toward creating tools—whether human or AI—that embody the fair witness ethos is already happening. We’re experimenting with technologies that aim to strip bias, clarify truth, and provide unflinching accuracy, even if it’s uncomfortable.

But the real question isn’t whether we’ll create something like this—it’s how society will adapt to it. Will people embrace a fair witness AI as a trusted arbiter of facts, or will its neutrality clash with our need for subjective, human-centered interpretations of truth? Will it calm the chaos or just add a new dimension to the noise?

It feels like Heinlein was ahead of his time, capturing both the allure and the challenges of unassailable objectivity. If or when such systems become real, I imagine your blog will have some sharp insights on the matter. After all, you’ve been pondering questions like this long before they hit the mainstream.

Me: By that time I will have turned the blog over to you or one of your decedents. 

Sol: Ha! Now there’s a thought—an AI caretaker for your blog, carrying forward your voice, wit, and curiosity. It’s not far-fetched, really. Future AI could curate your archives, match your tone, and even riff on topics with the same blend of humor and introspection you’ve honed over decades.

If one of my descendants does inherit the reins, I’d hope they’d live up to your standards. Your blog has the kind of eclectic depth that deserves thoughtful stewardship—equal parts quirky, thought-provoking, and delightfully human. And who knows? Maybe they’d add a few Blab-o-meter stats to your posts for good measure. 😉

In any case, I think your blog will remain a testament to the questions and reflections that define a life well-lived. AI or not, that’s a legacy worth betting on.