Some people eat, sleep and chew gum, I do genealogy and write...

Monday, January 26, 2026

Overcoming the Fear of AI (FOAI) Part One Overview

Because of my long-term perspective from working with both volunteers and patrons at the BYU Library Family History Center, I constantly hear their concerns and fears about the advent of what I will refer to as generative AI, as opposed to the background AI that dates back more than 50 years. I have been aware of these concerns since I began reading a lot of science fiction at about nine years old, starting with Isaac Asimov's Pebble in the Sky, a heavy start for a young reader. Later, because of my continued fascination with everything in science fiction, from robots (again Isaac Asimov, with I, Robot) to Douglas Adams's The Hitchhiker's Guide to the Galaxy, I began to think about the concept of an all-knowing computer system and the dramatic dangers of such a computer's control, an idea expressed in Robert Heinlein's novel The Moon Is a Harsh Mistress. These ideas culminate in three themes: the singularity, superintelligence, and omniscient AI. I believe that these evolving themes have presently developed into a basic cultural fear of AI as a force of evil.

Of course, I have my own personal opinions about whether there is any basis for the current online discussion of AI as the nemesis of humanity, but I think the concept can be traced to a short story by Isaac Asimov (again) called "The Last Question," published in 1956.

It is my observation that the level of the Fear of AI (FOAI) depends on a number of underlying factors: the education and experience level of the person expressing the fears and the actual level of that individual's interaction with generative AI. A review of the current online discussions that express fear or anxiety about AI discloses four main FOAI topics:

The Erosion of Truth and Economic Utility: In the current year, the most prominent fears are no longer about "killer robots" but about the disintegration of shared reality and professional value.

The Future of Existential Risk and Cognitive Atrophy: Looking toward the horizon of the next decade, the conversation focuses on the "Intelligence Explosion" and the fundamental nature of humanity.

Structural and Environmental Fears: Beyond the direct interaction with AI, there are systemic fears regarding the infrastructure required to sustain it.

The Continued Erosion of Individual and Group Privacy: In 2026, the privacy discourse has shifted from "protecting my email address" to "protecting my cognitive and biological essence." The current privacy landscape can be divided into three high-stakes battlegrounds: Data Persistence, Biological Privacy, and Inference Risks.

I am not a programmer. I do not have a degree in Electrical Engineering, Computer Science, or any other related field. What I do have is a life-long passion for research and learning, and I have been involved with computers since about 1970, long before generative AI was even a concept. My long experience as a trial attorney has taught me to be suspicious of any simple statement that has no basis in fact or logic but merely expresses an unfounded fear. One question I ask from time to time is how anyone can survive the complexity of today's world. Further, from my perspective, developments in computer technology have now given me the tools of unlimited research and knowledge, but reality has given me only a few years to benefit from them.

Stay tuned for the rest of this series.
