Some people eat, sleep and chew gum, I do genealogy and write...

Wednesday, August 20, 2025

Guiding Principles for Responsible AI in Genealogy: Accuracy


Accuracy

AI can generate false, biased, or incorrect content. Therefore, members of the genealogical community verify the accuracy of the information with other records and acknowledge credible sources of content generated by AI.

This is one of the guiding principles for the responsible use of AI for genealogy developed by the Coalition for Responsible AI in Genealogy. See CRAIGEN.org.

Initially, AI chatbots were "rule-based" and relied on a set of predefined rules and keywords. If a user's input didn't match a specific rule, the chatbot couldn't provide a relevant response, leading to frustrating and limited interactions. A telephone tree answering service is a familiar example of a rule-based chatbot. See the following sources; a short illustrative sketch follows them.

Wills, Jason. “The Evolution of AI Chatbots in the Last Decade.” Medium, October 8, 2024. https://jason-wills343.medium.com/the-evolution-of-ai-chatbots-in-the-last-decade-012cf8b126f3.

Quickchat AI. “The Evolution of NLP Chatbots and Generative AI: How They Work, Why They Matter, and What’s Next.” Accessed August 7, 2025. https://quickchat.ai/post/nlp-chatbot-generative-ai-evolution.
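To make the limitation concrete, here is a minimal sketch in Python of a rule-based chatbot. This is my own illustration, not code from the articles above, and the keyword rules and replies are invented:

# A minimal rule-based chatbot. The rules below are hypothetical examples.
RULES = {
    "census": "Census records are searchable on sites such as FamilySearch.",
    "birth": "Look for civil registration and church records of births.",
    "immigration": "Try passenger lists and naturalization records.",
}

def rule_based_reply(user_input):
    text = user_input.lower()
    for keyword, reply in RULES.items():
        if keyword in text:
            return reply
    # No rule matched, so the chatbot has nothing relevant to offer.
    return "Sorry, I don't understand. Please rephrase your question."

print(rule_based_reply("Where can I find the 1900 census?"))
print(rule_based_reply("Who was my great-grandmother?"))  # no rule matches

Any question that falls outside the predefined keywords hits the dead end in the last line, which is exactly the frustrating behavior of a telephone tree.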

AI chatbots became more useful as natural language processing (NLP) advanced. NLP is a field of AI that helps machines understand, interpret, and generate human language. However, even NLP-based chatbots were initially prone to generating plausible but fabricated content, known as hallucinations. The following sources discuss the issue of hallucination.

Choi, Anna, and Katelyn Xiaoying Mei. “What Are AI Hallucinations? Why AIs Sometimes Make Things Up.” The Conversation, March 21, 2025. http://theconversation.com/what-are-ai-hallucinations-why-ais-sometimes-make-things-up-242896.

Timothy, Maxwell. “What Is AI Hallucination? How to Fix It.” Chatbase. Accessed August 7, 2025. https://www.chatbase.co/blog/ai-hallucination.

IBM. “What Are AI Hallucinations?” September 1, 2023. https://www.ibm.com/think/topics/ai-hallucinations.

Wikipedia. “Hallucination (artificial intelligence).” July 29, 2025. https://en.wikipedia.org/w/index.php?title=Hallucination_(artificial_intelligence)&oldid=1303244625.

Currently, some chatbots are source-centric: they provide links to the websites used to construct their answers to prompts (questions). Although it is still necessary to examine the information from those sources, chatbots such as ChatGPT, Gemini, and Copilot have evolved to the point of being noticeably more reliable.
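As a rough sketch of a first step in examining a source-centric answer, the following Python checks only whether the links in an answer actually resolve. The answer text and URL are invented examples, and a reachable link still has to be read and evaluated by the researcher:

import re
import urllib.request

def extract_sources(answer):
    # Pull anything that looks like a URL out of the chatbot's answer.
    return re.findall(r"https?://\S+", answer)

def link_resolves(url, timeout=5.0):
    # Reachability is not verification; it only means there is a page
    # at the other end for the researcher to read and evaluate.
    try:
        request = urllib.request.Request(url, method="HEAD")
        with urllib.request.urlopen(request, timeout=timeout) as response:
            return response.status < 400
    except Exception:
        return False

answer = "The family appears in Ohio in 1850. Source: https://www.familysearch.org"
for url in extract_sources(answer):
    print(url, "reachable" if link_resolves(url) else "broken or unreachable")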

The accuracy principle quoted above will continue to be a factor in using chatbots for genealogical purposes, and it will continue to require the genealogical researcher to take responsibility for the accuracy of the information supplied.

By actively searching for information and cross-referencing it across multiple sources, deep research models significantly reduce the likelihood of "hallucinating" or making up facts. They are designed to find information to support their claims, and if they cannot, they are less likely to generate a false statement with high confidence.
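Here is a toy Python illustration of that cross-referencing idea, using invented records: a value is accepted only when at least two independent records agree on it.

from collections import Counter

# Hypothetical observations of an ancestor's birth year from three records.
records = {
    "1900 census": 1852,
    "death certificate": 1852,
    "family bible": 1853,
}

def corroborated_value(observations, minimum=2):
    # Return the most common value only if enough records support it.
    value, support = Counter(observations.values()).most_common(1)[0]
    return value if support >= minimum else None

year = corroborated_value(records)
print(f"Accepted birth year: {year}" if year else "Insufficient corroboration")

The same discipline applies to human researchers: a conclusion supported by only one record, or by none, should not be stated with high confidence.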

The key, as set forth above, is not to rely on AI as you would a historical source. From my perspective, I treat AI output as I would an entry in an online family tree or a surname book with no sources, unless the sources are supplied by the AI chatbot.

Board for Certification of Genealogists. “Ethics and Standards.” Accessed August 20, 2025. https://bcgcertification.org/ethics-standards.

 


