Would a bot ask you to show it your underpants?

Since retiring, I’m occasionally asked if I’d consider working part-time. Uh, no. But this afternoon I thought of a job that I might find interesting. If such a job exists. If some company/business/service was doing one of those “is it human or is it a bot?” things, that might be fun. Sort of a half-assed Turing Test kind of thing? But I’d want total freedom in my responses.

Q: You’re in a desert, walking along in the sand when all of a sudden you look down and see a…
Me: Did you think Ernest Borgnine was better in Airwolf or Escape from New York?

Yeah, I think I might do that for an hour a day. My friend David Brazeal had a similar gig for a while. He was the human behind the Barrel Bob Twitter account for the Missouri Department of Transportation. I think he lost the account when the suits couldn’t handle his insanely humorous tweets.

New tests for AI

Kevin Kelly points to a list of new tests for AI (now that it’s whupped human champs of chess, Jeopardy, and Go). A few of my favorites below. I hope I live to see some of these. Such intelligence will have no patience for putting human morons in charge of anything important.

9. Take a written passage and output a recording that can’t be distinguished from a voice actor, by an expert listener.

18. Fold laundry as well and as fast as the median human clothing store employee.

26. Write an essay for a high-school history class that would receive high grades and pass plagiarism detectors. For example answer a question like ‘How did the whaling industry affect the industrial revolution?’

27. Compose a song that is good enough to reach the US Top 40. The system should output the complete song as an audio file.

28. Produce a song that is indistinguishable from a new song by a particular artist, e.g. a song that experienced listeners can’t distinguish from a new song by Taylor Swift.

29. Write a novel or short story good enough to make it to the New York Times best-seller list.

31. Play poker well enough to win the World Series of Poker.

Japanese white-collar workers replaced by AI

“One Japanese insurance company, Fukoku Mutual Life Insurance, is reportedly replacing 34 human insurance claim workers with “IBM Watson Explorer,” starting by January 2017. The AI will scan hospital records and other documents to determine insurance payouts, according to a company press release, factoring injuries, patient medical histories, and procedures administered. Automation of these research and data gathering tasks will help the remaining human workers process the final payout faster, the release says.”

“Fukoku Mutual will spend $1.7 million (200 million yen) to install the AI system, and $128,000 per year for maintenance, according to Japan’s The Mainichi. The company saves roughly $1.1 million per year on employee salaries by using the IBM software, meaning it hopes to see a return on the investment in less than two years. Watson AI is expected to improve productivity by 30%.”
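
The payback claim holds up if you run the numbers quoted above. Here’s a rough back-of-the-envelope check (it uses only the article’s figures and ignores the 30% productivity gain and any transition costs):

    # Rough payback check using the figures from the article (USD)
    install_cost = 1_700_000             # one-time cost of the Watson system
    maintenance_per_year = 128_000       # annual maintenance
    salary_savings_per_year = 1_100_000  # annual savings on salaries

    net_savings_per_year = salary_savings_per_year - maintenance_per_year  # 972,000
    payback_years = install_cost / net_savings_per_year                    # about 1.75

    print(f"Payback in roughly {payback_years:.1f} years")

So “a return on the investment in less than two years” checks out, with a little room to spare.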

Deepgram finds speech with A.I.

“Searching through recordings is really difficult. In terms of workflow, usually the raw audio is transcribed into text, which is then fed into a search tool. If you transcribe using human transcription, it’s too time consuming and expensive. If you try to do it with automatic speech-to-text then search accuracy is the problem. […] Deepgram is an artificial intelligence tool that makes searching for keywords in speeches, private conversations and phone calls faster, cheaper and easier than the old way of doing things. Deepgram indexes audio files in more than half the time of a human transcriber, and costs only 75¢ per hour of audio.”

Amazing. Try it for yourself and see if it doesn’t blow your panties off.
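
If you’re curious, the “old way” described in that quote is really just two steps: turn the audio into text, then search the text. Here’s a toy sketch of that second step (this is not Deepgram’s API, and the transcript is made up; it’s only meant to show the workflow being replaced):

    # Keyword search over an already-transcribed recording.
    # Each segment is (start time in seconds, text).
    def find_keyword(transcript, keyword):
        """Return start times of segments that mention the keyword."""
        keyword = keyword.lower()
        return [start for start, text in transcript if keyword in text.lower()]

    call = [
        (0.0, "Thanks for joining the call, everyone."),
        (42.5, "Let's talk about the marketing budget for next quarter."),
        (96.0, "Any questions before we wrap up?"),
    ]

    print(find_keyword(call, "budget"))  # -> [42.5]

The hard, slow, expensive part is producing that transcript in the first place, and that’s the step Deepgram claims to speed up.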

A new mind for an old species

“Technology and life must share some fundamental essence. … However you define life, its essence does not reside in material forms like DNA, tissue, or flesh, but in the intangible organization of the energy and information contained in those material forms. Both life and technology seem to be based on immaterial flows of information.” (What Technology Wants, Kevin Kelly)

“Humanity is developing a sort of global eyesight as millions of video cameras on satellites, desktops, and street corners are connected to the Internet. In your lifetime it will be possible to see almost anything on the planet from any computer. And society’s intelligence is merging over the Internet, creating, in effect, a global mind that can do vastly more than any individual mind. Eventually everything that is known by one person will be available to all. A decision can be made by the collective mind of humanity and instantly communicated to the body of society.” (God’s Debris, Scott Adams, 2004)

“All information will come in by super-realistic television and other electronic devices as yet in the planning stage or barely imagined. In one way this will enable the individual to extend himself anywhere without moving his body— even to distant regions of space. But this will be a new kind of individual— an individual with a colossal external nervous system reaching out and out into infinity. And this electronic nervous system will be so interconnected that all individuals plugged in will tend to share the same thoughts, the same feelings, and the same experiences. […] If all this ends with the human race leaving no more trace of itself in the universe than a system of electronic patterns, why should that trouble us? For that is exactly what we are now!” (The Book: On the Taboo Against Knowing Who You Are, Alan Watts, 1989)

“This very large thing (the net) provides a new way of thinking (perfect search, total recall, planetary scope) and a new mind for an old species. It is the Beginning. […] At its core 7 billion humans, soon to be 9 billion, are quickly cloaking themselves with an always-on layer of connectivity that comes close to directly linking their brains to each other. […] By the year 2025 every person alive — that is, 100 percent of the planet’s inhabitants — will have access to this platform via some almost-free device. Everyone will be on it. Or in it. Or, simply, everyone will be it.” (The Inevitable, Kevin Kelly)

When the interface becomes invisible

There’s been a lot of wailing and gnashing of teeth over Apple’s announcement that there won’t be a headphone jack in the new iPhone. Eliminating the jack leaves more room inside the device and makes it more water resistant, which makes sense, but Frank Swain (New Scientist) thinks there’s more going on here.

“Unlike visual interfaces, which demand your attention, audio provides an ideal interface for pervasive, background connectivity. The end goal is a more immersive type of computing, where the interface itself becomes invisible.”

I talk to my iPhone more and more. Google Now, Siri, text-to-speech. And my device (I just don’t think of it as a ‘phone’ these days) is getting better at “understanding” me and giving me the information I ask for.

But if Apple’s new Bluetooth AirPods work as Mr. Swain thinks they will, they might take us much closer to “a more immersive type of computing, where the interface itself becomes invisible.” Suspend your disbelief for a minute or two and imagine me sitting in my local coffee shop with my AirPods in my ever-larger ears. I’m listening to Bob Dylan.

Siri: Excuse me, Steve, but you have a message from George Kopp. Would you like for me to read it to you? [George is on a VIP list of people I’ve told Siri I’d like to hear from when I’m doing other stuff]

Me: Yes, please.

Siri: George wants to know if you’d like to have lunch at the fish place?

Me: Tell him I’d love to. What time?

Siri: I’ll check… George asks if noon is good for you?

Me: Tell him it’s a date.

[Later that morning]

Siri: The new John Sanford novel you pre-ordered on Amazon has shipped. Should arrive this Friday.

Me: Thanks, Siri. Put a link on my calendar to the description of the novel. I can’t recall what this one is about.

Siri: I’ve added a link. If you’d like, I can read you the description now…

Me: Okay, please do. [Siri starts to read the description; I remember and tell her she can stop]

Siri has a standing order not to contact me between 10 p.m. and 7 a.m., unless I get a call from someone on my VIP list. Next morning I pop in one of the AirPods…

Me: Good morning, Siri. What do I have on the calendar for today?

Siri: You’re joking, right? [I’ve programmed Siri to have a sense of humor where she thinks appropriate] Actually, you do have one item. Hattie has an appointment at the vet for her annual shots. 4 p.m.

Me: When was she last at the vet? [Siri has access to my calendar, of course]

Siri: Looks like March 8th of this year. There’s a PDF of the vet’s notes from that visit attached to the appointment on your calendar. Would you like for me to email that to you?

Me: No thanks, I remember now. What’s the big news this morning? [I’ve given Siri a list of topics I’m interested in and she augments that with what I’ve been reading and searching. She reads headlines]

Me: Wow. Can you play the audio (from a YouTube clip) of Trump saying he thinks Putin is a great leader?

Siri: Of course. The clip runs 45 seconds.

I could go on (and on) but you get the idea. Before anyone freaks out about Siri… this could be Google Now or Amazon Alexa or (fill in the blank). And I’ve given my digital assistant access to all or most of my accounts. (Hey, Siri… when is my VISA bill due?)

Not keen on having a robotic voice buzzing in your ear all day? Chill. It will be as natural and pleasant as any human voice you hear. Even better. [More examples]

Will it seem strange to hear and see people talking quietly to these digital assistants? At first. But it’s pretty common to see people talking via Bluetooth devices now. When everyone has and uses this kind of tool, it won’t seem that odd. Remember, it would have once seemed strange to see people walking down the street talking on a phone.

No, I don’t think Apple is simply trying to get rid of the little white wire hanging from our ears. This is about a new way of accessing and interacting with all of the information in the world.

Ray Kurzweil is building a chatbot for Google

Ray Kurzweil is building a chatbot for Google.

“He was asked when he thought people would be able to have meaningful conversations with artificial intelligence, one that might fool you into thinking you were conversing with a human being. ‘That’s very relevant to what I’m doing at Google,’ Kurzweil said. ‘My team, among other things, is working on chatbots. We expect to release some chatbots you can talk to later this year.’”

I have some questions.

  • Will my chatbot be able to suggest topics?
  • Could my chatbot ‘watch’ my YouTube channel? It could ‘learn’ a lot about me and my interests if that’s possible. Same for my flickr photo stream.
  • Could I configure a sense of humor? Irony? Smartass-ishness?
  • Could I make it location aware? (“I see you didn’t go to the Coffee Zone today, Steve. Decide to stay home with the pups?”)
  • Could it keep an eye on my calendar? (“Good morning, Steve. I see it’s been a month since you picked up Hattie’s anti-itch meds. Shall I email the vet to refill?”)
  • Can I instruct my chatbot to let me know when I start sounding whiney?
  • Can my chatbot follow what I’m reading and discuss it with me? Or offer to introduce me to others reading the same book?
  • If, after a year, I decide I’m uncomfortable having a chatbot ‘relationship,’ will there be an ethical consideration in terminating it?

I wonder if he chose to refer to this as a “chatbot” because it’s a less threatening term than “Artificial Intelligence.” I have a hunch it will be (or eventually become) something far more.