Some people eat, sleep and chew gum, I do genealogy and write...

Wednesday, August 20, 2025

Guiding Principles for Responsible AI in Genealogy: Accuracy

 

Accuracy

AI can generate false, biased, or incorrect content. Therefore, members of the genealogical community verify the accuracy of the information with other records and acknowledge credible sources of content generated by AI.

This is one of the guiding principles for the responsible use of AI for genealogy developed by the Coalition. See CRAIGEN.org.

Initially, AI chatbots were "rule-based" and used a set of predefined rules and keywords. If a user's input didn't match a specific rule, the chatbot couldn't provide a relevant response, leading to frustrating and limited interactions. An example of a rule-based chatbot is a telephone tree answering service. See the following:

Wills, Jason. “The Evolution of AI Chatbots in the Last Decade.” Medium, October 8, 2024. https://jason-wills343.medium.com/the-evolution-of-ai-chatbots-in-the-last-decade-012cf8b126f3.

“The Evolution of NLP Chatbots and Generative AI: How They Work, Why They Matter, and What’s Next.” Accessed August 7, 2025. https://quickchat.ai/post/nlp-chatbot-generative-ai-evolution.
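The rule-based approach described above can be sketched in a few lines of code. This is a minimal illustration, not any real chatbot's implementation, and the genealogy-themed rules here are hypothetical:

```python
# A minimal sketch of a rule-based chatbot. The rules below are
# hypothetical examples. If no keyword matches, the bot falls back
# to a canned response -- the frustrating dead end typical of early
# rule-based systems.

RULES = {
    "census": "Census records are available for many countries; try the 1950 US census.",
    "birth": "Birth records are usually held by civil registration offices.",
    "hello": "Hello! Ask me about census or birth records.",
}

def rule_based_reply(user_input: str) -> str:
    """Return the response for the first rule whose keyword appears in the input."""
    text = user_input.lower()
    for keyword, response in RULES.items():
        if keyword in text:
            return response
    # No rule matched: the chatbot cannot help.
    return "Sorry, I don't understand. Please rephrase."

print(rule_based_reply("Where do I find census records?"))
print(rule_based_reply("Tell me about DNA matches"))
```

Anything outside the predefined keywords gets the fallback reply, which is exactly the limitation that made early chatbot interactions so restricted.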

AI chatbots became more useful as natural language processing (NLP) advanced. NLP is a field of AI that helps machines understand, interpret, and generate human language. However, early NLP-based chatbots were still prone to "hallucinations," confidently generating false or fabricated information. Here is a list of sources that discuss the issue of hallucination.

Choi, Anna, and Katelyn Xiaoying Mei. “What Are AI Hallucinations? Why AIs Sometimes Make Things Up.” The Conversation, March 21, 2025. http://theconversation.com/what-are-ai-hallucinations-why-ais-sometimes-make-things-up-242896.

Timothy, Maxwell. “What Is AI Hallucination? How to Fix It.” Accessed August 7, 2025. https://www.chatbase.co/blog/ai-hallucination.

“What Are AI Hallucinations? | IBM.” September 1, 2023. https://www.ibm.com/think/topics/ai-hallucinations.

Wikipedia. “Hallucination (artificial intelligence).” July 29, 2025. https://en.wikipedia.org/w/index.php?title=Hallucination_(artificial_intelligence)&oldid=1303244625.

Currently, some chatbots are source-centric and provide links to the websites used to construct their answers to prompts (questions). Although it is still necessary to examine the information from those sources, chatbots such as ChatGPT, Gemini, and Copilot have evolved to the point of being more reliable.

The accuracy principle cited above will continue to be a factor in using chatbots for genealogical purposes, and the genealogical researcher will continue to bear responsibility for verifying the accuracy of the information supplied.

By actively searching for information and cross-referencing it from multiple sources, deep research models significantly reduce the likelihood of "hallucinating" or making up facts. They are designed to find information to support their claims, and if they can't, they are less likely to generate a false statement with high confidence.

The key, as set forth above, is not to rely on AI as you would a historical source. From my perspective, I treat AI output as I would an entry in an online family tree or a surname book with no sources, unless the sources are supplied by the AI chatbot.

“Ethics and Standards - Board for Certification of Genealogists.” Accessed August 20, 2025. https://bcgcertification.org/ethics-standards.

 



Saturday, August 2, 2025

Don't get left behind by AI

 

Artificial intelligence is not a passing fad. It is a fundamental change affecting nearly every aspect of our lives. Genealogy does not escape the effects of these major changes. One obvious effect of AI on FamilySearch.org is the end of volunteer-based indexing. If you haven't noticed, indexing has disappeared from the website.


All the indexing of records on the FamilySearch.org website is now being done by AI-driven Full-Text Search, based in part on AI-powered handwriting recognition.


I suggest the following to get some idea about what is going on. 

I have a number of other videos on the BYU Library Family History Center YouTube Channel, but they are now months old and possibly already far out of date. The same problem applies to looking online for current information about AI: you have to check the date of the article or video and try to find something that is not already out of date.

Writing about AI is currently reaching a saturation point in online text and videos. By the way, I have more videos about AI coming up during the months ahead.