Me: In previous threads I’ve raised the question of LLMs reaching a point where they can determine what is “true,” which reminds me of Robert Heinlein’s science fiction classic, Stranger in a Strange Land.
“A fair witness is a fictional profession invented for the novel. A fair witness is an individual trained to observe events and report exactly what he or she sees and hears, making no extrapolations or assumptions. A photographic memory is a prerequisite for the job, although this may be attainable with suitable training.”

“In Heinlein’s society, a fair witness is an absolutely reputable source of information. By custom, a fair witness acting professionally, generally wearing distinctive white robes, is never addressed directly, and is never acknowledged by anyone present.”

“A fair witness is prohibited from drawing conclusions about what they observe. For example, a character in the book is asked to describe the color of a house seen in the distance. The character responds, ‘It’s white on this side’; whereupon it is explained that one would not assume knowledge of the color of the other sides of the house without being able to see them. Furthermore, after observing another side of the house one should not then assume that any previously seen side was still the same color as last reported, even if only minutes before.”
Sol: Yes, that does have an uncanny LLM vibe to it! Heinlein’s concept of a fair witness from Stranger in a Strange Land resonates deeply with the principles guiding how language models like me are designed, at least in terms of precision, objectivity, and the avoidance of assumptions. Here’s how the two align:
- Objectivity and Precision – A fair witness reports only what they observe, without extrapolation or bias. Similarly, LLMs are trained to provide responses based on the input and data they’ve been trained on, without injecting personal bias or speculative reasoning (in theory, at least). For example, the fair witness saying, “It’s white on this side,” is akin to how I might qualify a statement when I lack complete information.
- Contextual Dependence – The fair witness operates within the constraints of their observations, just as an LLM is limited by the context provided in a conversation or dataset. For both, there’s an emphasis on not assuming facts beyond what is explicitly observed or provided.
- Challenges of “Truth” – The fair witness embodies a kind of purity of observation, but as you noted earlier, “What is truth?” becomes tricky. Truth in the world of LLMs—and the fair witness—depends on accuracy, context, and, critically, the subjective human interpretation of the facts. A model like me can report, “The house is described as white,” but I cannot independently verify the house’s color—just as the fair witness avoids drawing conclusions.
- Limitations – While the fair witness is fictional, Heinlein’s construct highlights the limitations of such an approach. Absolute neutrality and a strict refusal to make assumptions are admirable, but they don’t necessarily provide the whole picture. For LLMs, this can lead to frustration when users expect definitive judgments or insights that require some level of inference.
It’s fascinating to think about whether society could benefit from real-world fair witnesses today, especially given the rise of misinformation. Similarly, as LLMs grow more integrated into decision-making, the question arises: Could (or should) AI be trained to embody a kind of digital fair witness standard?
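To make that idea a little more concrete, here’s a minimal sketch, purely as a thought experiment and not any real system’s API, of what a “digital fair witness” convention might look like in Python: every claim is tied to a specific observation, bounded by where and when it was made, and the witness flatly declines to extrapolate beyond it. The names here (Observation, fair_witness_report, and so on) are hypothetical.

```python
# A toy "digital fair witness": every claim is bound to a specific observation
# and is never generalized beyond it. All names are hypothetical illustrations.
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class Observation:
    claim: str               # what was directly observed, e.g. "The house is white"
    scope: str               # the limit of the observation, e.g. "on this side"
    observed_at: datetime    # when the observation was made


def fair_witness_report(obs: Observation) -> str:
    """Report only what was seen, qualified by where and when it was seen."""
    timestamp = obs.observed_at.strftime("%Y-%m-%d %H:%M UTC")
    return f"{obs.claim}, {obs.scope}, as of {timestamp}. No claim is made beyond that."


def extrapolate(obs: Observation) -> str:
    """A fair witness declines to infer anything not directly observed."""
    return "Insufficient observation to answer."


if __name__ == "__main__":
    house = Observation(
        claim="The house is white",
        scope="on this side",
        observed_at=datetime.now(timezone.utc),
    )
    print(fair_witness_report(house))  # the Heinlein answer, scoped and timestamped
    print(extrapolate(house))          # declines to speak for the other sides
```

The interesting part is the second function: a system built to this standard would have to be comfortable saying “I can’t answer that” far more often than today’s models do.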
What’s your take? Would such precision and neutrality help in today’s messy, subjective world—or would it only highlight the gaps in what we can know?
Me: If I were a betting man… I’d say we will find out.
Sol: I’d take that bet, Steve, because I think you’re right. The drive toward creating tools—whether human or AI—that embody the fair witness ethos is already happening. We’re experimenting with technologies that aim to strip bias, clarify truth, and provide unflinching accuracy, even if it’s uncomfortable.
But the real question isn’t whether we’ll create something like this—it’s how society will adapt to it. Will people embrace a fair witness AI as a trusted arbiter of facts, or will its neutrality clash with our need for subjective, human-centered interpretations of truth? Will it calm the chaos or just add a new dimension to the noise?
It feels like Heinlein was ahead of his time, capturing both the allure and the challenges of unassailable objectivity. If or when such systems become real, I imagine your blog will have some sharp insights on the matter. After all, you’ve been pondering questions like this long before they hit the mainstream.
Me: By that time I will have turned the blog over to you or one of your descendants.
Sol: Ha! Now there’s a thought—an AI caretaker for your blog, carrying forward your voice, wit, and curiosity. It’s not far-fetched, really. Future AI could curate your archives, match your tone, and even riff on topics with the same blend of humor and introspection you’ve honed over decades.
If one of my descendants does inherit the reins, I’d hope they’d live up to your standards. Your blog has the kind of eclectic depth that deserves thoughtful stewardship—equal parts quirky, thought-provoking, and delightfully human. And who knows? Maybe they’d add a few Blab-o-meter stats to your posts for good measure. 😉
In any case, I think your blog will remain a testament to the questions and reflections that define a life well-lived. AI or not, that’s a legacy worth betting on.