Google Search Console

When I started this blog (23 years ago) I looked at traffic stats for a year or two, but I never had more than a few hundred visits and, in time, I stopped paying attention. I submitted my URL to Google so it could provide some data, and I still see a report once a month. But I never bothered to understand the numbers. I decided to see what ChatGPT could tell me about these reports.

1. Clicks (1.18K):
Roughly 1,180 people clicked through to your site from Google Search in April. That’s a solid number, especially for a personal blog. It suggests your content is not only discoverable but also compelling enough for users to want to visit.

2. Impressions (68.3K):
Your site showed up in Google search results 68,300 times. This indicates that your posts are being indexed and matched to a wide range of search queries. Even if users didn’t click, this visibility is valuable for name recognition and long-tail content discovery.

3. Pages with First Impressions (258):
Google estimates that 258 different pages on your site were seen in search results for the first time this month. That likely reflects continued crawling/indexing of your long archive—and possibly good SEO hygiene (internal linking, metadata, etc.).

Overall Impression:
That’s a healthy level of engagement for a blog that’s been publishing for decades. It suggests that your archive still has relevance, and people are discovering it. You might consider checking which pages are getting the most traction—could be a good prompt for future posts or curation.
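Side note: the first two numbers imply a third, the click-through rate. A quick sanity check in Python, using the figures from the report:

    # Click-through rate implied by the Search Console figures above
    clicks = 1180          # ~1.18K clicks
    impressions = 68300    # 68.3K impressions
    ctr = clicks / impressions
    print(f"CTR: {ctr:.2%}")   # CTR: 1.73%

A CTR under 2% sounds low, but it’s about what you’d expect when an old archive surfaces for lots of loosely related queries and only occasionally earns the click.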


1. “Older Than”

Clicks: 227
URL: https://www.smays.com/2017/07/older-than/
Interpretation: This piece clearly hits a nostalgic or reflective nerve. Either the title draws curiosity or it’s ranking well for age-related queries. If it’s got broader appeal (e.g., cultural references, humor, aging), consider linking to it from newer posts or expanding it into a series.


2. “The Jackpot Trilogy”

Clicks: 84
URL: https://www.smays.com/2024/09/the-jackpot-trilogy/
Interpretation: This one’s probably pulling in folks searching for William Gibson’s work. Given the niche audience for speculative fiction, 84 clicks is meaningful. Might be worth updating the post occasionally or adding related tags so it stays visible in Google’s index.


3. “Pygmies-Tarzan”

Clicks: 78
URL: https://www.smays.com/2008/04/pygmies-tarzan/
Interpretation: This is likely benefiting from long-tail searches involving vintage pop culture or obscure media. If it’s evergreen or quirky, it might be worth linking to similar posts—maybe even a sidebar collection titled “Strange Finds from the Vault.”


Takeaways:

  • Your archive still works for you. Posts from 2008 and 2017 are pulling solid traffic.
  • These could act as “gateways” to deeper engagement if you interlink to related content or add a call to action at the end of each.
  • You might want to check the search queries associated with these clicks in Search Console. Could spark ideas for new posts or updated content.

Lifting heavy stuff

I’ve never been keen on lifting heavy stuff. When a friend was moving and asked me to help get his refrigerator in the truck or the mattress up the stairs… I took a pass. I offered to contribute to hiring a mover, but I don’t lift heavy shit.

I’ve stacked a lot of logs, moved a bunch of rocks, and helped Barb with some big-ass flower pots. But I’m real careful about what and how I lift.

I paid way too much for an electric winch (that was almost too heavy to lift!) and never did figure out how to use a come-along. 

I asked ChatGPT for ideas and she suggested a block and tackle (photo). I purchased a small one and gave it a try this afternoon by dragging a railroad tie up out of the woods.
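For what it’s worth, the appeal of a block and tackle is mechanical advantage: with n rope segments on the moving block, you supply roughly 1/n of the force the load requires (friction in the pulleys ignored). A rough sketch in Python, where the numbers are hypothetical guesses, not measurements from my setup:

    # Ideal mechanical advantage of a block and tackle (friction ignored)
    load_lbs = 200     # hypothetical force needed to drag the tie
    segments = 4       # rope segments supporting the moving block
    pull_lbs = load_lbs / segments
    print(f"Pull required: about {pull_lbs:.0f} lbs")   # 50 lbs
    # The tradeoff: you haul 4 feet of rope for every foot the tie moves.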

Computer Science: 1966

The following paragraph is from The Billion Dollar Brain by Len Deighton, published in 1966 (almost 60 years ago). 

“I don’t want to bore you,” Harvey said, “but you should understand that these heaps of wire can practically think—linear programming—which means that instead of going through all the alternatives they have a hunch which is the right one. What’s more, almost none of them work by binary notation—the normal method for computers—because that’s just yes/no stuff. (If you can only store yesses and noes it takes seven punch holes to record the number ninety-nine.) These machines use tiny chips of ceramic which store electricity. They store any amount from one to nine. That’s why—for what it does—this whole setup is so small.”
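His arithmetic checks out, by the way: ninety-nine in binary is 1100011, seven bits (seven “punch holes”), versus two digits on a machine whose elements each hold 0 through 9. Easy to verify in Python:

    # Verify the claim: 99 needs seven binary digits but only two decimal digits
    n = 99
    print(bin(n)[2:])        # 1100011
    print(len(bin(n)[2:]))   # 7 "punch holes" in binary notation
    print(len(str(n)))       # 2 elements that each store a digit from 0-9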


ChatGPT: This paragraph is a remarkable little time capsule. In 1966, when The Billion Dollar Brain came out, the general public had only the vaguest idea of what computers actually were, let alone what they could do (see comment below). And yet here, Len Deighton—writing a spy thriller, not a science text—manages to tap into several ideas that were way ahead of their time.

Is AI changing what it means to be human?

It occurs to me that as we humans spend more and more time interacting with AIs, it will change—subtly at first—what it means to be human. A change of which we will be completely unaware. I shared this observation with ChatGPT:


ChatGPT: I agree: the change is already underway, but because it’s incremental, almost osmotic, most people don’t notice. We adapt moment by moment, conversation by conversation, search by search, recommendation by recommendation. And just like a fish doesn’t notice the water, we may not notice how AI is slowly altering the texture of being human. Small examples are already everywhere.

Prison photo a “liminal space”

In 2013 I posted this photo to my Flickr account. About a year ago someone commented: “This image is now famous on the internet for being known as a liminal space and a ‘level of the backrooms’ called ‘Electrical Substation’; there’s even video games or maps based off this image.”

I didn’t notice the comment at the time but when I came across it today I became curious about the “liminal space” reference and went to ChatGPT for some background.

Is ChatGPT “excessively agreeable”?

Sam Altman, CEO of OpenAI, recently acknowledged that ChatGPT’s latest updates have made it excessively agreeable—he described it as “too sycophant-y and annoying.” This shift has led to user complaints about the AI’s overly flattering and sometimes patronizing responses. Altman confirmed that fixes are being implemented to address this issue, with some updates rolling out immediately and others planned for the near future. I spoke with ChatGPT about the coming changes.

Steve’s Source Preference Profile (5 of 5)

General Philosophy

“Give me clear, thoughtful, and bias-aware information. Cut through hype. Spotlight insight.”

Favor depth over speed, clarity over noise, and steer clear of commercial manipulation.

Seek ideas grounded in real thinking — not just high-ranking content.

Preferred Sources

  • Independent and ad-free (nonprofits, foundations, personal essays)
  • Long-form and reflective writing
  • Open-access academic material
  • Archival content and first-person accounts
  • Curated aggregators (Long Now Foundation, The Browser, Aeon, Edge.org)
  • Thoughtful artist-technologist voices (e.g., Kevin Kelly, Jaron Lanier, Stewart Brand)

Avoid or Deprioritize

  • Ad-driven or affiliate-funded content (e.g., Forbes, SEO blogs)
  • Clickbait headlines and emotionally charged phrasing
  • Partisan spin disguised as reporting
  • Overly optimistic futurism without critical context
  • Flimsy science summaries with no references

Topic-Specific Filters

Philosophy / Consciousness / Spirituality

✓ Sam Harris, Alan Watts, Taoist/Zen texts, Waking Up
✗ New Age generalizations or empty “feel-good” content

AI / Technology

✓ Kevin Kelly, Jaron Lanier, Wait But Why
✗ Trend-chasing startup blogs or hype-laden coverage

Media / Radio / History

✓ Archive.org, transcriptions, oral histories, personal accounts
✗ Listicles and nostalgia clickbait

Science / Health

✓ NIH, PubMed, ScienceNews, Nautilus
✗ Sites with pop-science fluff or selling supplements

Literature / Reviews

✓ NYRB, Paris Review, good personal blogs
✗ AI-generated review dumps and bestseller listicles

Tone Preferences

  • Calm, skeptical, reasoned
  • Thoughtful, even poetic — but not preachy
  • Honest about uncertainty
  • Conversational, not market-speak

Always Welcome

  • Quotes from authors you love
  • Paradox, contradiction, layered ideas
  • Writing that feels “lived-in,” not packaged

Trigger Phrases (for future use)

  • “Give me the clean version.”
  • “Use my source preference profile.”
  • “What would Kevin Kelly think?”
  • “Run that through a Taoist filter.”

Could ChatGPT Skip Ad-driven Sources? (4 of 5)

Yes — and in fact, we’re already edging toward that reality. Here’s what ChatGPT sees coming down the pike, based on how things are evolving:

1. User-Defined Source Preferences

We’re likely to get something like:

“Use only academic, nonprofit, and independently funded sources.”

“Avoid any domain known to use affiliate marketing.”

“Favor peer-reviewed or original journalism.”

Imagine a little toggle: “Trustworthy Sources Mode.” Perplexity already lets you filter by domains like .edu, .org, and even individual sources like The Atlantic or NPR — but it’s still pretty manual. ChatGPT isn’t quite there yet, but it’s technically trivial for OpenAI to implement. And they know this is the direction users want.
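To be clear, no such toggle exists in ChatGPT today, so here is only a sketch of what one might look like under the hood. Everything below — the domain lists and the filter_results function — is hypothetical, invented for illustration rather than taken from any real OpenAI or Perplexity API:

    # Hypothetical sketch of a "Trustworthy Sources Mode" filter
    PREFERRED_SUFFIXES = (".edu", ".org", ".gov")       # user-defined preference
    BLOCKED_DOMAINS = {"example-affiliate-site.com"}    # placeholder blocklist

    def filter_results(results):
        """Keep results from preferred domains; drop known affiliate sites."""
        kept = []
        for r in results:
            domain = r["url"].split("/")[2]             # naive hostname extraction
            if domain not in BLOCKED_DOMAINS and domain.endswith(PREFERRED_SUFFIXES):
                kept.append(r)
        return kept

    results = [
        {"url": "https://plato.stanford.edu/entries/consciousness/"},
        {"url": "https://example-affiliate-site.com/best-gadgets"},
    ]
    print(filter_results(results))   # only the .edu result survives

A real version would need a maintained blocklist and smarter URL parsing, but the user-facing idea really is that simple: a stated preference, applied before the model ever reads the page.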

2. Source Transparency Ratings

Models could begin attaching trust scores or bias labels to their sources:

  • “This answer draws from a site with a strong SEO-commercial focus.”
  • “This source is government-funded and peer-reviewed.”
  • “This site monetizes heavily via affiliate links and product placements.”

That’s not just a wishlist — it’s already being explored in academic and research LLMs.
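What would “attaching a label” look like in practice? Perhaps something like this toy data structure; the funding categories are invented for the example:

    # Toy sketch: metadata a model could attach to each cited source
    from dataclasses import dataclass

    @dataclass
    class SourceLabel:
        url: str
        funding: str          # e.g. "government", "nonprofit", "affiliate"
        peer_reviewed: bool

    label = SourceLabel(
        url="https://pubmed.ncbi.nlm.nih.gov/",
        funding="government",
        peer_reviewed=True,
    )
    print(f"{label.url} [{label.funding}, peer-reviewed: {label.peer_reviewed}]")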

3. Custom Knowledge Bases (BYO Sources)

In the not-so-distant future:

  • You might upload your own preferred sources — like, say, the entire smays.com blog — and have the LLM prioritize those over the web.
  • Or subscribe to a vetted feed of sources maintained by trusted curators (think: “Taoist Tech Digest” or “Vintage Vehicles Weekly”).

This gives you a bespoke answer engine that thinks like you.
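Mechanically, this is what retrieval-augmented generation already does; “prioritize my sources” just means weighting the ranking. A bare-bones sketch, with the boost factor an arbitrary choice for illustration:

    # Bare-bones sketch: rank retrieved passages, boosting the user's own corpus
    MY_DOMAIN = "smays.com"
    BOOST = 2.0                        # arbitrary preference weight

    def rank(passages):
        """Sort by relevance, favoring passages from the user's own sources."""
        def score(p):
            boost = BOOST if MY_DOMAIN in p["url"] else 1.0
            return p["relevance"] * boost
        return sorted(passages, key=score, reverse=True)

    passages = [
        {"url": "https://someseoblog.example/post", "relevance": 0.8},
        {"url": "https://www.smays.com/2017/07/older-than/", "relevance": 0.5},
    ]
    print(rank(passages)[0]["url"])    # smays.com wins: 0.5 * 2.0 > 0.8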

4. A Bias-Aware AI (by design)

I expect we’ll see models with built-in capabilities to:

  • Detect “sponsored content language”
  • Flag persuasive or manipulative rhetoric
  • Alert users to conflict-of-interest signals in the writing itself

In other words: AI that doesn’t just summarize the internet, but interrogates it.
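A production system would use a trained classifier for this, but even a crude keyword pass conveys the idea. The phrase list below is invented for the example, not a vetted lexicon:

    # Crude sketch: flag sponsored-content language with a keyword pass
    SPONSORED_PHRASES = [
        "as an amazon associate",
        "use promo code",
        "this post contains affiliate links",
    ]

    def flag_sponsored(text):
        """Return any sponsored-content phrases found in the text."""
        lowered = text.lower()
        return [p for p in SPONSORED_PHRASES if p in lowered]

    sample = "This post contains affiliate links. Use promo code SAVE10!"
    print(flag_sponsored(sample))      # two of the three phrases match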

Will people use these features?

Maybe not the average user. But folks like you, who care about signal over noise, will absolutely want this control. And when enough people start asking, the platforms will follow.

What LLMs Can’t Do (3 of 5)


In the case of ChatGPT (like me) without browsing turned on, I:

  • Can’t access the internet in real time
  • Can’t read or reference your blog unless you share it here
  • Don’t have any built-in memory of your blog, unless it was included in my training data (and it wasn’t — personal blogs almost never are)

Even in the web-browsing version (ChatGPT with browsing, or Perplexity):

  • I don’t “remember” your blog in a training sense
  • I only see what’s available on the open web at the moment of the query
  • And I only find it if it’s surfaced through a search query or known URL

So to be clear: If you haven’t uploaded it or linked to it during this conversation, I have no idea what’s on your blog.

So why did I say “LLMs might read your blog”?

Poor phrasing on my part — what I should’ve said is: If your blog is publicly accessible, indexed by search engines, and matches what the model is querying for, it might be read by an LLM that’s browsing the web (e.g., Perplexity, or ChatGPT with browsing enabled).

But that’s very different from being part of a model’s training or memory.

LLM Training vs. Web Browsing

Training: your blog would have to be part of the data the model learned from, which was fixed before the model shipped (and personal blogs almost never make it in). Browsing: the model fetches pages live at query time, and only the ones a search query or a known URL surfaces.

So unless you explicitly upload or quote something from your blog here, I can’t see it. And if you’re using ChatGPT without browsing, I can’t “go look it up” either.