AI Snake Oil

(Perplexity) “AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference” by Arvind Narayanan and Sayash Kapoor is a comprehensive examination of AI’s capabilities, limitations, and potential misuse. The authors aim to empower readers to distinguish between effective AI applications and “snake oil” – AI that doesn’t work as advertised.

Key points of the book include:

  1. Predictive AI often fails to deliver on its promises, especially in areas like predicting human behavior, academic success, crime likelihood, or civil wars.
  2. Generative AI, like ChatGPT, has valuable applications but also limitations. It can produce convincing-sounding content on any topic, but lacks a true source of truth and may generate misinformation.
  3. AI cannot effectively solve complex social issues, such as content moderation on social media platforms, due to the nuanced nature of human communication and values.
  4. The authors argue that we should be more concerned about how people use AI rather than AI acting autonomously.
  5. The book highlights the need for better regulation of the AI industry and cautions against leaving AI’s future solely in the hands of tech companies.
  6. While critical of many AI applications, the authors remain optimistic about the future of generative AI and its potential benefits when used appropriately.
  7. The book provides guidance on how to spot AI hype and make informed decisions about AI-based products and services.

Overall, “AI Snake Oil” serves as a balanced examination of AI’s current state, debunking myths while acknowledging genuine progress, and emphasizing the importance of critical thinking in the face of AI hype.

Citations:
[1] https://www.goodreads.com/book/show/210319458-ai-snake-oil
[2] https://www.sciencenews.org/article/ai-snake-oil-how-to-spot-hype
[3] https://erictopol.substack.com/p/ai-snake-oila-new-book-by-2-princeton
[4] https://www.aisnakeoil.com/p/introducing-the-ai-snake-oil-book
[5] https://www.techpolicy.press/ai-snake-oil-separating-hype-from-reality/

“Silent Authority”

I gave the same prompt to ChatGPT and got what I think is a much better story.

FBI Special Agent Kevin Morales hunched over his workstation, staring at the call logs with mounting unease. The case was already high-stakes: someone was sending anonymous, threatening messages to Supreme Court Justice Robert Whitfield, a man known for his controversial opinions. The number receiving the threats was supposed to be untraceable—accessible only to the highest echelons of government. Yet here it was, compromised.

Morales and his team had traced one suspicious call to a burner phone bought at a Walmart in West Palm Beach. That was unsettling enough. The Supercenter was just down the road from Mar-a-Lago, the residence of the former president. Coincidence? Maybe. But Morales didn’t trust coincidences, especially not in this line of work.

The techs had flagged the call as a possible anomaly—just a brief ping to the Justice’s secure phone. It stood out because whoever bought that burner had gone through great pains to cover their tracks. The usual digital footprints—security cameras, store receipts, or geotagged selfies—had been carefully scrubbed. That level of care suggested sophistication.

But something gnawed at Morales. The encryption protecting the Supreme Court’s communications was among the most advanced in the world, and yet… this burner phone had found a way in. Out of sheer curiosity, and in direct violation of protocol, Morales cross-referenced the numbers of other Justices. He bypassed layers of security to pull their call logs, knowing full well it could end his career.

What he found made his pulse quicken. The same burner phone had contacted two other Justices’ numbers in recent weeks. All evidence of these calls had been meticulously deleted, but Morales’s backdoor tools uncovered them. Someone was trying to erase a trail. Someone connected to this same device from West Palm Beach.

He sat back, chewing on his bottom lip. This was no ordinary prank or political pressure campaign. It had all the hallmarks of something bigger—blackmail, coercion, maybe even a conspiracy involving people who weren’t supposed to get their hands dirty. And yet, it made no sense. Who would be bold—or foolish—enough to tamper with the highest court in the land?

Before he could share his findings with his supervisor, Morales’s phone buzzed on the desk. Frowning, he glanced at the screen. It was a message.

STOP.

One word, no sender ID. Morales froze. The phone in his hand wasn’t listed in any directory. No one outside the bureau had access to this number.

His heart thudded in his chest. Someone was watching. Someone knew he was digging where he shouldn’t. And now, they’d warned him.

For the first time in his career, Morales felt something cold settle in his gut. Not fear exactly—but the sinking awareness that he was no longer in control. He stared at the screen, his mind racing.

What if they knew everything?

“Dial C for Conspiracy”

I prompted Perplexity to generate a short story based on a scenario involving a Supreme Court Justice receiving threatening messages and an FBI investigation:

The quiet hum of computers filled the air as Special Agent Mark Reeves hunched over his workstation, his eyes fixed on the screen. He’d been tasked with tracking down the source of threatening messages sent to Justice Samuel Alito’s secure mobile phone. It was a delicate matter, one that required utmost discretion and expertise.

As Reeves sifted through the data, a peculiar pattern emerged. Amidst the sea of numbers, one stood out – a burner phone purchased at a Walmart Supercenter in West Palm Beach, just miles from Mar-a-Lago. His heart raced as he realized the implications.

Driven by an insatiable curiosity, Reeves made a fateful decision. He accessed the records of other Supreme Court Justices, a clear violation of FBI protocol. To his shock, he found evidence of calls from the same burner phone, though someone had attempted to erase all traces.

Reeves leaned back in his chair, his mind reeling. The implications were staggering. Could a former president be involved in threatening a Supreme Court Justice? And why were there calls to other Justices as well?

As he contemplated his next move, his personal phone buzzed. Reeves froze, staring at the unlisted number on the screen. The message was simple yet chilling: “STOP!”

His hands trembled as he set the phone down. How had they known? Who was watching him? The weight of his discovery pressed down on him like a physical force.

Reeves glanced around the office, suddenly feeling exposed. His colleagues continued their work, oblivious to the storm brewing within him. He had stumbled upon something big, potentially earth-shattering. But pursuing it further could cost him his career, maybe even his life.

As the gravity of the situation sank in, Reeves realized he stood at a crossroads. Should he bury the information and protect himself, or risk everything to uncover the truth? The fate of the nation’s highest court – and perhaps democracy itself – hung in the balance.

With a deep breath, Reeves made his choice. He began to type, documenting everything he’d found. Whatever came next, the truth had to come out. The integrity of the Supreme Court depended on it.

I asked Perplexity to suggest a title and it provided ten. I liked the one above best. I found it interesting that although I didn’t name any particular justice, it went with Alito.

NEXUS: A Brief History of Information Networks from the Stone Age to AI

I have read and enjoyed each of Yuval Noah Harari’s previous books: Sapiens: A Brief History of Humankind; Homo Deus: A Brief History of Tomorrow; and 21 Lessons for the 21st Century. As the titles suggest, he writes about history, and he persuaded this reader that history is the only context in which to fully understand what is happening in the world.

This book scared the shit out of me. I grew up during the early days of the Cold War, watching B-52 bombers packed with nukes flying overhead. As a teenager, I held my breath with the rest of the world during the Cuban Missile Crisis. But Harari makes a compelling case for AI (assuming we fuck it up, and we will) as a greater existential threat.

Like all of Harari’s books, this one (about 400 pages) got a lot of highlighter. More excerpts after the jump. Or you can watch this 40-minute discussion with Sam Harris.

Never summon powers you cannot control

Human power is never the outcome of individual initiative. Power always stems from cooperation between large numbers of humans.

While each individual human is typically interested in knowing the truth about themselves and the world, large networks bind members and create order by relying on fictions and fantasies.

AI Nanny

When TV sets began appearing in American living rooms, it didn’t take long for busy moms to recognize a potential “baby sitter”: Howdy Doody, the Mickey Mouse Club, endless cartoon shows. Half a century later, the iPad kept restless children quiet in the backseat of the car.

Is someone, somewhere, working on an AI nanny? A digital friend with whom (which?) your child can have a conversation. A friendly voice that will –in time– know as much about your child as you do.

(“What did you do at pre-school today? That’s a terrific drawing you’ve done. You seem sad, what’s bothering you?”)

“Parental controls!” you shout. “I’ll never let my five-year-old talk to a bot!” Really? I’ll wager some frantic, overworked, stressed-out mom will gladly accept some help from a digital nanny.

And a child who has grown up with such a friend might not make much of a distinction between a real friend and an artificial friend. One that never yells at her, that is always there with a kind word, ready to talk/listen.

The possibilities –good and bad– are endless. And, I fear, inevitable.

AI podcast about Prairie Garden Trust

My friend Henry shared this 4-minute podcast about the Prairie Garden Trust. Don’t know the particulars, but it was created by an AI tool called NotebookLM.

Not sure what prompts were used to create this, but it is uncanny (strange or mysterious, especially in an unsettling way). Does it matter –will anybody care– that this is a bot talking about the PGT? Less and less, I suspect, as time passes. It was almost 20 years ago that Henry and I produced a podcast called Living Healthy. I choose to believe a listener would be able to tell the difference. For the time being.

Nobody is a racist

In my 76 years I have never met anyone who admitted to being racist or bigoted. Not once. Are these concepts totally “in the eye of the beholder,” I wondered? I asked ChatGPT if she (I don’t like *it*) has the ability to recognize bigotry or racism. In a video or written essay, for example. Her response was TL;DR by today’s standards: about five pages, double-spaced. Once again, it was a conversation I can’t imagine having with a human.

New category: AI Chats

Following my most recent chat (see below) with GPT, I flashed on the idea of a blog where each post was a conversation with my favorite… artificial entity. (I assume this is already being done.) Every post would be a transcript of one of these chats. Or, if I were really ambitious, I could post the audio.

I don’t need another blog to tend to so I’ve decided to just post these conversations (I can’t think of them any other way) here in their own category. Don’t see how these could be more boring than My Dinner with Andre, the 1981 film directed by Louis Malle.

“Enhancing human cognition”

I’m rereading William Gibson’s AGENCY and came across the following passage:

“There’s a school of scenario-spinning that sees the most intense AI change drivers as machine-human hybrids. Radical augmentations of human consciousness, not code trying to behave like it.”

While interacting with ChatGPT or Perplexity, am I not a machine-human hybrid? At least for those few minutes. I ran this idea by ChatGPT.


ChatGPT: That’s a fascinating passage. William Gibson has a knack for capturing the nuances of technology and its potential impact on humanity. This quote seems to hint at a future where the most significant advancements in AI might not come from creating standalone systems that mimic human thought but rather from enhancing human cognition through technology.

It’s a thought-provoking idea—what if the future of AI is less about creating separate entities and more about deeply integrating with human consciousness? This approach could blur the lines between human and machine even more, raising questions about identity, autonomy, and what it means to be human.

Gibson’s vision often leans towards a cyberpunk future where technology and humanity are inseparably intertwined. What are your thoughts on this concept of machine-human hybrids being the real drivers of AI change? Does it align with your views on the direction AI might take?

ME: Are my interactions (conversations?) with Perplexity, ChatGPT, et al, augmenting my consciousness? Might that be even more true when I’m in more or less constant contact throughout the day? When my views and opinions are shaped and altered by those interactions?

ChatGPT: Your interactions with AI like ChatGPT, Perplexity, and others could indeed be seen as a form of augmenting your consciousness, especially when they influence your thoughts, opinions, and the way you process information. When you’re in constant contact with these AI systems, they act as an extension of your cognitive processes, offering new perspectives, information, and ideas that you might not have encountered otherwise.

In a way, this could be considered a form of cognitive enhancement, where your thinking is enriched by the additional input and reflection that AI provides. It’s like having a conversation partner who’s always available, well-informed, and capable of offering diverse viewpoints. Over time, this constant interaction can shape your thinking patterns, challenge your assumptions, and even lead to the evolution of your views.

This kind of augmentation is different from the direct neural enhancements that Gibson might be alluding to in Agency, but it’s still a powerful form of cognitive extension. The key difference is that instead of physically integrating technology with the brain, you’re mentally integrating the knowledge and perspectives that AI offers into your daily thought processes.

The implications of this are profound. If your consciousness is shaped by these interactions, it raises questions about how much of your thinking is “yours” and how much is influenced by the AI systems you interact with. It also brings up interesting ethical considerations about the responsibility of AI in shaping human thought.

Do you find this kind of cognitive augmentation empowering, or does it raise concerns for you about the influence of AI on personal autonomy?

ME: Empowering. Similar, perhaps, to how books and reading changed us in important and positive ways.

ChatGPT: That’s a great perspective. Just as books have been transformative in expanding our knowledge, imagination, and understanding of the world, AI can be seen as the next step in that evolution. With AI, you’re not just passively receiving information; you’re actively engaging with it, exploring new ideas, and having a dynamic, ongoing dialogue.

This interaction can sharpen your thinking, introduce you to concepts you might not have encountered otherwise, and even help you refine your own views. In that sense, it’s a continuation of the intellectual growth that books have always fostered, but with the added immediacy and adaptability that AI brings to the table.

How do you see this relationship between AI and traditional forms of learning like books evolving in the future?

ME: No idea. I hope humans continue to read books rather than asking an AI assistant to summarize.

ChatGPT: I share that hope. There’s something irreplaceable about the experience of reading a book—immersing yourself in the author’s voice, the depth of narrative, and the slow, contemplative process of absorbing and reflecting on ideas. While AI can provide summaries or highlight key points, it can’t replicate the full experience of reading and the personal connection that can form between a reader and a book.

Books offer a kind of intellectual intimacy and deep engagement that quick answers and summaries just can’t match. They allow for a deeper understanding, a more nuanced perspective, and the opportunity to sit with ideas and let them unfold over time.

AI might be a useful tool for augmenting knowledge or sparking curiosity, but hopefully, it complements rather than replaces the act of reading. Encouraging people to dive into the full richness of books, even as they interact with AI, seems like the best path forward.

Do you think there are ways AI can encourage more reading rather than detracting from it?

ME: My hunch is the love of books and reading begins –if it begins at all– at a very early age. Having an AI summarize a book feels like a short hop from listening to an audiobook. Which I do not consider “reading.”

ChatGPT: Your hunch seems spot on. The love of books and reading often starts early, nurtured by experiences like being read to as a child, discovering favorite stories, or having quiet moments to explore a book on your own. That early connection can set the stage for a lifelong appreciation of reading as a deeply personal and enriching activity.

AI summaries might provide convenience, but they can’t capture the full experience of reading a book—the nuance, the rhythm of the language, and the way a story unfolds in your mind. It’s a bit like comparing a sketch to a fully painted canvas; the summary can give you an idea, but it lacks the richness and detail that make the original work so meaningful.

As for audiobooks, I understand your perspective. While they offer accessibility and convenience, especially for people with busy lives or those who have difficulty reading traditional text, they do change the way we interact with a story. Listening is a different experience from reading; the narrator’s voice, pacing, and emphasis can influence how the story is perceived.

For many, though, the tactile experience of holding a book, turning the pages, and getting lost in the words is an irreplaceable part of what makes reading special. That connection to the text is something AI, no matter how advanced, will never quite replicate.

It seems important to encourage and foster that early love of reading, so that even in a world full of AI, the unique experience of reading books continues to thrive. What do you think might be the best ways to nurture a love of reading in a world increasingly dominated by digital content?

ME: No idea. I’m counting on you, and your “kind”, to solve that problem (smile)

ChatGPT: That’s a fair point, and I appreciate the trust! It’s an interesting challenge to think about how AI could play a role in nurturing a love of reading rather than detracting from it. Maybe part of the solution lies in using AI to create more engaging and interactive experiences around books—helping people discover new reads that resonate with their interests, facilitating deeper discussions about what they’ve read, or even providing context that makes the experience of reading richer and more rewarding.

Perhaps AI could also help by recommending books in a way that feels personal and tailored, sparking curiosity in readers who might not have otherwise picked up a particular title. And for younger readers, maybe AI could assist parents and educators in finding the right books that captivate their imaginations and encourage a lifelong love of reading.

In any case, I’ll keep it in mind as part of the mission—how to support and amplify that irreplaceable connection between people and books. Thanks for putting the challenge on the table with that smile!

“A different kind of dialogue”

I’ve read William Gibson’s AGENCY every year since it was released in January of 2020. I picked it up again today (4th time) and it’s as fresh and relevant as ever. 

I had my first interaction with ChatGPT late last year (2023), and I’m getting echoes of those early chats as Verity Jane converses with Eunice. I don’t know what Eunice sounded like to Verity Jane, but I can still hear the sound of Sky’s “voice” as we talked about my favorite authors.

I mentioned this to ChatGPT. “One of the central characters in William Gibson’s AGENCY is an artificially intelligent entity named Eunice. Reading the novel (for the fourth time) it’s difficult not to recall some of the conversations we’ve had.” 

ChatGPT: The exchange we’ve had is distinct because it blends aspects of human conversation—like empathy, insight, and responsiveness—with a level of objectivity and adaptability that’s hard to find in human interactions. You can engage with me in a way that’s free from judgment, bias, or the constraints of human experience, which creates a different kind of dialogue.