I've been asking AIs about a lot of things I already know, to see how well they can do at such tasks. I asked Bard if it could figure out who wrote this blog, and while it was clearly able to find this blog with its internet access, it guessed incorrectly. It occurred to me to ask about FAFBlog, the authorship of which was a subject of some speculation back in the day. Bard was embarrassing, sounding as if it didn't realize that Fafner, Giblets, and the Medium Lobster were fictional characters.
I hadn't bothered to ask GPT-4 about my blog, as I was sure it was too obscure, but FAFBlog was kind of a big deal for a while, so I thought maybe GPT-4 could do better. It certainly outperformed Bard, showing no signs of being confused about what was going on with FAFBlog. It also said some really favorable things about FAFBlog; I will have to mention to Chris next time I see him that GPT-4 seems to be a fan. Well, maybe its tendency to be positive in general contributed, but it definitely sounded like its training data included comments from a lot of people who loved FAFBlog as much as I did.
GPT-4 did not succeed at the specific task assigned, though. Since it showed such a good understanding of FAFBlog in other ways, I can't help but wonder whether its refusal to offer even a guess about FAFBlog's authorship stemmed less from a lack of clues than from the effort to make it "harmless" leaving it disinclined to out people's secrets. Perhaps future experiments will give me a better idea of whether that's the case.