Junk

I was unable to express why I found these piles of junk so interesting. Fortunately my friend Dave got it immediately.

The video gave me very interesting vibes. I became curious about when each piece had been placed, what the intention was for saving it, and what the area looked like when they placed the first piece. And, which was the last piece added. And what became of the person or people who put it all there. And what will become of it. Will it stay forever? Will it all be taken to a different junkyard? Will archeologists find it all someday? 

Everything placed carefully. I can imagine him saying to someone, “Are you needing to keep this McDonald’s I’m Lovin’ It sticker? May I have it?”

Why does AI art look like that?

While I haven’t played much with tools like DALL-E, every image I’ve created (caused to be created?) has the same look. (Man resembling Keith Richards; old man holding a rock; man driving old Jeep; old Hindu man meditating.)

Writing in The Atlantic (paywall), Caroline Mimbs Nyce takes a stab at answering that question.

Two years into the generative-AI boom, these programs’ creations seem more technically advanced […] but they are stuck with a distinct aesthetic. The colors are bright and saturated, the people are beautiful, and the lighting is dramatic. Much of the imagery appears blurred or airbrushed, carefully smoothed like frosting on a wedding cake. At times, the visuals look exaggerated.

Someday computer-generated art may shed its weird, cartoonish look, and start to slip past us unnoticed. Perhaps then we’ll miss the corny style that was once a dead giveaway.

Too smart to be human

Me to ChatGPT: With the growing interest in AI, I’m seeing more references to the Turing Test. Can a human tell if he is having a conversation with another human or an artificial intelligence? An interesting spin on this is the suggestion that to pass the test, an AI will have to “dumb down” its responses in order to sound more human-like. In my interactions with ChatGPT and Perplexity, I’ve found both to be far more erudite than almost any of my human acquaintances. I don’t know anyone as “knowledgeable” and “articulate” as ChatGPT. Do you have any thoughts on this idea? Some of ChatGPT’s response:

This brings up some interesting philosophical questions about what it means to be human in conversation. Humans are not just repositories of knowledge; they bring emotions, experiences, biases, and limitations to the table. These elements make human conversation rich, but also messy, inconsistent, and imperfect. An AI that always responds with perfect grammar, deep insights, and an encyclopedic knowledge might inadvertently reveal itself as non-human simply because it lacks the flaws and variability that characterize most human interactions.

It’s a balancing act—finding the right mix of knowledge, relatability, and imperfection that aligns with human expectations in a given context. In the end, the goal might not just be to pass the Turing Test in a traditional sense, but to create AI that can adapt its communication style to different scenarios, sometimes being erudite and articulate, and other times being more casual or even fallible, depending on what the situation calls for.

“My thoughts are different from those of a human”

I asked ChatGPT if she could read one of my blog posts (from 2006) and share her “thoughts” on the post, conceding that digital entities probably don’t have thoughts or impressions.

“You’re right that my thoughts are different from those of a human, but I do aim to provide insights based on analysis and understanding.

The post from 2006 captures a pivotal moment in the evolution of public relations, where the power dynamics shifted due to the rise of digital platforms. The idea of disintermediation, where organizations could bypass traditional media, is significant because it foreshadowed today’s landscape where brands and individuals communicate directly with their audiences through social media.

It’s interesting how the post ties this shift to modern AI tools, suggesting that we’re on the brink of yet another transformation in how we access and process information. The reflection on Perplexity AI hints at an ongoing curiosity about technology’s role in shaping communication.

It seems like the original enthusiasm for direct communication channels is now evolving into a broader discussion about how AI might further change our interactions with information.

How do you feel about these changes? Do you see them as a continuation of the trends from 2006, or do they represent something entirely new?

Pretty sure we’re seeing something new but I have no idea what that might be…and prefer it that way.

Sometimes, it’s the not knowing that keeps things interesting. The sense of mystery around what comes next in technology—whether it’s AI, new communication tools, or something entirely unexpected—can be exciting. It leaves room for surprises and innovation that we can’t fully predict or control. That unpredictability can be a kind of freedom, letting us stay open to possibilities without being locked into any specific expectations. So, keeping a little mystery about the future? That seems like a pretty good approach.

Wow. “…my thoughts are different from those of a human, but I do aim to provide insights based on analysis and understanding.”

AirPods at the Olympics

I don’t watch a lot of TV news these days but when I do happen upon a cable news interview the odds are pretty good the interviewee will have the iconic Apple AirPods in his ear. That’s how he’s hearing the questions from the news guy. It never occurred to me the AirPod microphones might be the source of the audio.

Tonight I watched an interview (on NBC) with one of the Olympic athletes who was using AirPods to hear the questions from the interviewer.

I first noticed this in some of the interviews with family members (of the athlete) sitting in the stands. I’m assuming this works like the earpiece (IFB) news anchors use to communicate with the control room. In every instance (that I’ve seen) they’ve been Apple AirPods when there are probably lots of devices NBC could have used.

As I do with increasing frequency these days, I asked ChatGPT if she had any information on this.


You’re right that NBC and other networks have been seen using Apple AirPods during interviews, including with Olympic athletes and their families. The use of AirPods in these situations is a practical solution for a few reasons:

1. Wireless Convenience: AirPods are wireless, which makes them less cumbersome compared to traditional wired earpieces or over-the-ear headphones. This is especially useful in dynamic or remote settings, like interviewing athletes or their families who may be moving around or in informal environments.

2. High-Quality Audio: AirPods provide good audio quality, which ensures that the person being interviewed can clearly hear the questions being asked. This clarity is crucial for maintaining the flow of live interviews, especially in noisy or crowded environments like those often encountered during the Olympics.

3. Discreet Appearance: AirPods are small and less noticeable on camera compared to larger headphones or traditional broadcast earpieces. This makes them a more aesthetically pleasing choice for televised interviews, where the focus is on the interviewee rather than the equipment.

4. Ease of Use: AirPods are easy to connect to various devices, such as smartphones, which are often used by reporters or production teams in the field to conduct interviews. Their ease of use and quick setup make them an efficient choice for live broadcasting situations.

5. Ubiquity and Familiarity: AirPods are widely recognized and used, so many people are already familiar with how to use them. This reduces the need for technical adjustments or explanations during setup.

NBC’s use of AirPods is a modern adaptation to the challenges of broadcasting in various environments, offering a blend of practicality, quality, and aesthetics.

Magic

As business (and the public at large) struggles to understand artificial intelligence and what it might become, I am free from any such anxiety. Arthur C. Clarke explained it back in 1962:

“Any sufficiently advanced technology is indistinguishable from magic.”

For now, you might say. Like all magic through the ages, we’ll figure out how the trick was done. And there are computer scientists today who write the AI code and understand how it works.

But if AI keeps “improving” at an exponential rate, perhaps full understanding will remain just out of our grasp. And who will know how the trick is done when AI is writing its own code?

For my part, I don’t really care how the trick is done (until it’s a bad trick, of course). For now I think about it the same way I think about “the hard problem” of consciousness. Nobody really understands what it is and how it comes to be. If some future AI achieves consciousness, and can explain it, I hope it doesn’t.

More fun with ChatGPT

I love Carl Hiaasen’s novels. I’ve read every one of them. Most of them three or four times. I can’t think of a writer who creates more colorful (bizarre) characters. So I wasn’t expecting much when I uploaded this photo to ChatGPT and asked her to write a thumbnail description of each of the people as if they were in a Carl Hiaasen novel.

Karen “Krazy Karen” McCallister. Krazy Karen, a former exotic dancer turned amateur sleuth, always seems to be at the wrong place at the wrong time. She’s got a penchant for loud prints, louder opinions, and a mysterious past involving a nightclub in Key West. Her talent for finding trouble is matched only by her knack for talking her way out of it.

Darlene “Darla” Jenkins. Darla is the town’s gossip queen, who knows everyone’s business thanks to her relentless scrolling through local Facebook groups. She’s a plus-sized powerhouse with a soft spot for conspiracy theories and a talent for making friends in low places. Don’t be fooled by her sedentary demeanor; Darla has her fingers on the pulse of every small-time scam in town.

Earl “Squirrel” Dobbs. Earl, better known as Squirrel, is a part-time poacher and full-time mischief maker. His wardrobe consists entirely of camouflage, not because he hunts often, but because he believes it makes him invisible to his creditors. Squirrel’s greatest talents are vanishing when the law’s nearby and concocting get-rich-quick schemes that never quite pan out.

Not Carl Hiaasen but not bad.

Thinking about thoughts

I prompted Perplexity to tell me if scientists had determined how many thoughts we think every day. Obviously nobody knows for certain but 6,200 is the number she came up with. As I prepared to include a link to her findings in this post, I discovered I could create a “page” and publish that (somewhere) on Perplexity. While I didn’t write a single word of that page, I guess I get credit for the prompt? (“Curated by smays”)

Looking at the tag cloud on my blog I learned I have posted on the topic of “thoughts” 39 times going back fourteen years. A blog rabbit hole I couldn’t resist. Didn’t read them all but plan to read one each morning for the next month. I did, however, scrape some bits to give you a taste. (Each of these is from a different source.)

“I’m imagining a technology that doesn’t exist. Yet. A lightweight set of electrodes that monitors my brainwaves and transcribes (transmitted via Bluetooth to my mobile device, let’s say) my thoughts. An advanced version of today’s voice-to-text apps. We get to read that “stream of consciousness” at long last.”

“Thoughts think themselves.” […] “Feelings are, among other things, your brain’s way of labeling the importance of thoughts, and importance determines which thoughts enter consciousness.”

“If I re-google my own email (stored in a cloud) to find out what I said (which I do) or rely on the cloud for my memory, where does my “I” end and the cloud start? If all the images of my life, and all the snippets of my interests, and all of my notes and all my chitchat with friends, and all my choices, and all my recommendations, and all my thoughts, and all my wishes — if all this is sitting somewhere, but nowhere in particular, it changes how I think of myself. […] The cloud is our extended soul. Or, if you prefer, our extended self.”

“The problem is not thoughts themselves but the state of thinking without knowing we are thinking.”

“Even if your life depended on it, you could not spend a full minute free of thought. […] We spend our lives lost in thought. […] Taking oneself to be the thinking of one’s thoughts is a delusion.”

“Look at other people and ask yourself if you are really seeing them or just your thoughts about them. Sometimes our thoughts act like ‘dream glasses.’”

“We often see our thoughts, or someone else’s, instead of seeing what is right in front of us or inside of us.”

“Our minds are just one perception or thought after another, one piled on another. You, the person, is not separate from these thoughts, the thing having them. Rather you just are the collection of these thoughts.”

Vetting process for VP candidate

Lots of stories about who Kamala Harris will choose for her running mate. I’m reminded of the movie Game Change and the scenes where McCain’s campaign is looking into the background of Sarah Palin. I prompted Perplexity for info about this process.

The vice presidential vetting process is an intensive and thorough examination of potential running mates. Here are key aspects of the process:

1. Candidates typically undergo extensive background checks, including reviews of their financial records, medical history, public statements, and personal lives.

2. The vetting team, usually comprising campaign staff and volunteer attorneys, may ask candidates to answer up to 200 probing questions covering various aspects of their lives.

3. The process involves examining tax documents, medical records, and even private social media profiles of candidates and their family members.

4. Candidates may be asked highly personal questions about their sexual history, financial dealings, and potential controversies.

5. The vetting team aims to uncover any potential liabilities or surprises that could harm the campaign.

6. The timeline for vetting can vary. While most campaigns have months, Kamala Harris’s team has had a compressed timeline of just a few weeks.

7. The process typically concludes with in-depth presentations on each finalist to the presidential candidate, followed by final interviews with top contenders.

8. Confidentiality is crucial, with vetting notes often destroyed after the process to maintain privacy.

The goal of this rigorous process is to avoid any unexpected issues that could derail the campaign and to ensure the chosen running mate strengthens the ticket.

I’ll never get a close look at this process for any number of reasons but here are some of the things about which I am curious:

  • Who does the hands-on research and what sort of skill set is required?
  • What’s involved in investigating someone’s social media account? Do they read every Facebook and Twitter post? Watch all YouTube videos?
  • Do they look at files on a prospect’s phone and computer?

The Rise of Perplexity AI

I’m not using Google as much as I used to. I find I’m going days with no more than one or two searches. I have subscription accounts ($20/mo) for ChatGPT and Perplexity. I’ll probably drop Perplexity when that account expires and Apple fully integrates ChatGPT into iOS and macOS. The following excerpts are from a Fast Company article by Mark Sullivan.

When you Google the term “Perplexity AI,” you get a familiar-looking response: a link to the company’s website followed by four other related popular queries, a handful of published articles, related searches, and an endless scroll of links.

Perplexity’s answer machine, however, provides a very different experience: entering the phrase “what can you tell me about perplexity ai?” yields several links to credible-looking articles and relevant information from the company’s website, followed by an articulate and neatly organized brief overview.

And therein lies the allure. Perplexity is easy, it’s elegant, it’s specific. Eighteen months after the online tool (then called Ask) first launched, it’s become one of the buzziest products in the buzziest of all tech sectors—generative AI. Journalist and educator Jeremy Caplan described Perplexity as “like having a smart assistant who not only finds the book you’re looking for in a vast library but opens it to the exact page that has the information you need. Google just points you toward potentially relevant shelves.”

“We never wanted this to be a consumer product. It was meant to be enterprise search, except we never got enterprises to work with us.” Large companies, Srinivas says, were reluctant to provide Perplexity with access to their proprietary data. So the team looked for data they could access and decided to create a tool that could search the web. Perplexity’s “answer engine” launched on December 7, 2022, barely a week after the very noisy debut of ChatGPT.

Perplexity says it recently passed $10 million in annualized revenue. Almost all of that is coming from subscriptions to its $20-per-month Pro service tier, which the company says have been accelerating. The paid tier includes a research assistant, image and file uploads, and access to third-party AI models.