Some people eat, sleep and chew gum, I do genealogy and write...

Sunday, April 26, 2026

The First Three Rules for Using AI in Genealogy


Because there are Rules for Genealogy, I thought there ought to be some Rules for Using AI in Genealogy. As I thought about the idea for a while, I realized I had already started telling people about the rules in a previous blog post so it was a good idea to codify them in a subsequent post. Here I go with the first three rules. These rules can also apply to using AI in general. 

Rule #1: AI is a tool, not a toy.

I have spent the last three years refining methods for using AI in legitimate genealogical research. During this time, I’ve observed many people using personal AI chatbots primarily for entertainment or constant conversation. However, using AI strictly for diversion falls into the same category as playing video games or watching social media reels; to be effective in genealogy, AI should be treated as a functional tool rather than a pastime. 

When I was in grade school, I took a shop class where we learned to use woodworking tools, including an electric table saw. Part of that instruction was a movie showing what happened when a person using a table saw was severely injured by kickback from the wood. I still vividly remember that movie every time I watch someone use a table saw or try to use one myself. The movie was followed by considerable instruction on the proper way to use the saw. 

In the past, educators raised similar concerns about hand calculators and Wikipedia, to give two examples. Apparently, neither the calculator nor Wikipedia has resulted in the collapse of education in the United States. Rather than prohibiting students from using AI, how about instructing them in the ways AI can be used properly? 

The huge number of online comments about the dangers of AI is doing progress a disservice. Rather than constantly focusing on those dangers, how about a lot more discussion of how to prevent the dire consequences that seem so prevalent in current online discussions? 

Rule #2: You must ask the right questions to get the right answers.

AI is a complicated computer program. With the advent of the natural language interface came a perception that programming was no longer necessary, that the computer could somehow program itself. It is true that current AI chats can write complex programs; however, the feedback I see is that those programs need very close review before they are fully operable. Since I deal with a huge variety of computer programs, I am almost constantly aware of inconsistencies and "bugs" in nearly all of them. Sometimes efforts to resolve a program error end up creating more inconsistencies and difficulties.

I think one way to overcome hallucinations, which in any other program we would simply call bugs, is to help people become more familiar with asking the right questions and giving the right set of instructions. AI programs in general will likely always have some degree of inconsistency, but there are also valid ways to limit an AI program's ability to hallucinate or fabricate. A good example is the Gemini/NotebookLM combination, which can essentially eliminate hallucinations and fabrications by grounding answers in the sources you supply. 

Rule #3: Always require a source. 

When using an AI chat for tasks requiring accuracy, such as genealogical research, it's crucial that the AI program automatically cites sources for its statements. Information provided without a source may be incorrect and is often useless for research. To illustrate, try asking your chat program to review its own unsourced response for accuracy—you might be surprised by the outcome.

Stay tuned, I may have a few more rules. 
