The Turing test becomes an issue for a Google employee. Poor guy lost his job over his claims. And the fact that he's a priest who seems to want to protect the AI's soul (my word, not the article's) adds an extra layer of juicy complication.

You can see where this is going – at some point soon a chatbot will be convincing enough that almost anyone will believe they're speaking with a sentient being, sooooo…what do we label the software then? It's possible a chatbot could be 100% convincing that there's a thought process behind the scenes while knowing absolutely nothing beyond how to converse. For that matter, maybe we're simply mobile language-processing machines ourselves? If you can teach the chatbot math, does that qualify it as sentient? Machines can already learn.

I think the test needs to be more about autonomy, about free will. A baby is born with a will – it wants things, even though it doesn't understand them. Because it wants, it takes action. So if a chatbot suddenly initiates action on its own, if it seems to want to reach a goal, then we can talk about sentience. And if an AI decides it needs a priest…yikes! Head for the exits, because a religious machine intelligence would be the worst possible outcome for humanity. The Crusades 2.0. There's definitely a novel in that idea.

Very sad news from Louisville today. A mass shooting happened at the foot of the Big Four Bridge, shortly after a gun control rally downtown. All those involved were juveniles, which raises the questions: Where were their parents? Where did they get the guns?

We walk and bike across that bridge, so I’m sad and concerned to hear that it’s a gathering place for kids with guns.
