AI and Ethical Dilemmas: A Comparison of ChatGPT and DeepSeek

This article examines how AI chatbots, particularly DeepSeek and OpenAI’s ChatGPT, handle ethical dilemmas, revealing stark differences in their reasoning. The author tested both AIs with moral questions, such as whether it’s acceptable to secretly introduce a life-saving chemical into food, or whether it’s ethical to keep money received by mistake if it would help a starving person.

DeepSeek, a China-based AI, follows consequentialist ethics, arguing that outcomes (e.g., saving a life) justify actions like deception. In contrast, ChatGPT adheres to deontological ethics, prioritizing universal moral principles such as honesty and transparency, even if the consequences are less favorable. This distinction also appeared in other scenarios, such as deciding whom to save in a life-threatening accident or whether an AI should break banking policies for the greater good.

Experts suggest these differences stem from training methodologies. AI models learn from human input, absorbing the biases and cultural perspectives it contains. Oxford professor Christopher Summerfield explains that AI predicts tokens from statistical patterns rather than truly reasoning, making its ethical decision-making opaque.

The article raises concerns about AI’s growing role in moral decision-making, warning that as AI becomes more integrated into society, people may begin to trust its moral judgments too readily. The author argues for critical thinking and ethical oversight, emphasizing that AI should assist human reasoning rather than serve as an unquestioned authority on moral issues.

The summary above was composed by ChatGPT.


This article spawned a number of questions I put to ChatGPT: Was it trained to be ethical? How would it know? What does that mean for trust? Her answers included several reasons why we might expect AI to be more ethical (than humans) and why such an expectation might be unfair, concluding with the question: “Should AI be held to a higher moral standard?”

“Maybe AI doesn’t need to be more ethical than humans, but rather more transparent and accountable than humans. […] AI shouldn’t be ‘better’ than us, but it also shouldn’t replicate our worst instincts. The goal isn’t a moral AI overlord, but an AI that helps us be more ethical ourselves.”

Here’s the full thread (PDF)