It turns out that this whole time, the Turing Test was the wrong way to think about it. Thinking a chatbot is alive is not a test of how good the chatbot is, but of your own ability to think of other human beings as real and complete people.
I heard some professor put googly eyes on a pencil and waved it at his class, saying "HI! I'm Tim the pencil! I love helping children with their homework, but my favorite is drawing pictures!"
Then, without warning, he snapped the pencil in half.
When half his college students gasped, he said, "THAT'S where all this AI hype comes from. We're not good at programming consciousness. But we're GREAT at imagining non-conscious things are people."
There’s a bit in Community where a character does something like that: he says, “What makes us human is that I can tell you this pencil’s name is Dave, and if I do this” *snaps pencil in half* “part of you just died.”
It’s also entirely possible that the professor saw the episode and took from it a good way to demonstrate how easily humans anthropomorphize, but my god, your way would be funnier.
I would have gasped! But not ONLY because I subconsciously believed a pencil was a person for a sec.
But because someone who had just *characterized* a pencil as a person then destroyed it.
Like... that's the more shocking thing!
Dan Harmon's been a philosophy-of-mind geek from way back, so it's just as likely that he was incorporating something one of his own profs did back in the day.
Exactly.
We humans have such a drive to communicate that we want to recognise consciousness in everything.
We will anthropomorphise anything.
Our imaginary friends are getting dangerous.
The whole time I was reading this thread, I just kept thinking, "He killed Clippy." That weird little paper clip (yes, I know that's not the same as a pencil) used to get on my nerves until I stopped seeing him. Regardless, it was another example of trying to humanize an object.