Well, how funny. I skimmed over the italicized words, read the neutral and generic post, and went back to try to figure out who wrote this and why you, whose prose is usually so sharp and interesting, would invite this boring writer to guest post. And then when I read it was ChatGPT I felt silly, but it also chimed with my own experience playing with the bot. It makes very well-constructed sentences that are careful to say very little of interest. Last night I had the thought that it’s almost as if the technology that disrupted the “trustworthy neutral news reporter” voice (which always seemed a bit robotic) has now reinvented it as a literal robotic chat system. In my own experiments, I have found the bot useful for topics I have little knowledge of - e.g., I had some questions about app design for a puzzle hunt, and it had good answers that pointed me to genuine resources. But I have also found that (like some people) the bot prefers to make up information rather than admit it doesn’t know something. I hope they can figure out how to train that “lying” bug away, because it makes its answers unreliable.
Thanks for the comment, Jay! I tried another experiment to bully ChatGPT into writing emotionally engaged and engaging content. Watch out for it on 25th February!
I will!