Generative artificial intelligence (AI) is known for being prone to factual errors. So, what do you do when you’ve asked ChatGPT to generate 150 presumed facts and you don’t want to spend an entire weekend confirming each by hand? Well, in my case, I turned to other AIs. In this article, I’ll explain the project, consider how each AI performed in a fact-checking showdown, and provide some final thoughts and cautions if you also want to venture down this maze of twisty, little passages that are all alike.
The project involved having ChatGPT generate a picturesque image representing each of the 50 US states, along with three interesting facts about each state. The images were rather abstract, with some strange outcomes, including placing the Golden Gate Bridge in Canada and generating two Empire State Buildings. The individual facts seemed plausible, but without independent fact-checking, there was no way to know how accurate they actually were. So, the next step was to use other AIs to check ChatGPT's work.
Google’s Bard provided the most detailed feedback, pointing to some inaccuracies in ChatGPT’s fact list while overemphasizing other details. Meanwhile, Anthropic’s Claude found the fact list to be mostly accurate, flagging mainly a lack of nuance in some of the fact descriptions. Microsoft’s Copilot, on the other hand, was unable to provide useful feedback because of its character limit constraints. Ultimately, Bard’s fact-checking seemed impressive on the surface, but it often missed the point and got things just as wrong as any other AI.
For a broader perspective and a more thorough fact-check, I fed Bard’s responses back into ChatGPT. The results revealed discrepancies in some of Bard’s corrections, highlighting the complexities and nuances buried in certain facts. Despite these inaccuracies, Bard’s answers are prominently featured in search results, which raises real concerns about the reliability of AI-generated information.
Overall, leveraging multiple AIs for fact-checking provides a more comprehensive evaluation than relying on a single source. Even so, it’s important to understand the limitations and potential inaccuracies of AI-generated information: while AIs can assist in both generating and verifying facts, they may also introduce biases and errors of their own.