Some people eat, sleep and chew gum, I do genealogy and write...

Wednesday, January 28, 2026

Overcoming the Fear of AI (FOAI) Part Two: The Present: Erosion of Truth and Economic Utility


This post continues the series begun in Part One: Overcoming the Fear of AI. Here, I examine the pressing issues surrounding artificial intelligence, concerns that have sparked intense and often dramatic debate across social media and YouTube. This compilation addresses those anxieties by exploring the real-world implications of AI technology today. The more you learn about what AI can and cannot do, the better equipped you will be to confront any fear about using it.

The "Post-Truth" Reality: With deepfakes becoming cheap, routine, and scalable, there is a profound fear that the legal system, journalism, and personal reputations are becoming indefensible. The emergence of synthetic identities in courtrooms and media has triggered a "reality threshold" crisis where seeing is no longer believing. See 11 things AI experts are watching for in 2026 From my own perspective, one of the major developments of AI images occurred when Adobe converted Photoshop into an "all AI" workspace. However, Content Credentials (the "nutrition label" for images) are now becoming a standard requirement for legal and journalistic integrity. Despite the historical fact that altered photographs have been created since the beginning of photography (see List of photograph manipulation incidents), modern AI makes these changes seamless and makes changing and generating photos accessible to everyone See List of photograph manipulation incidents The Coalition for Responsible AI in Genealogy (CRAIGEN.org) has addressed this issue with an official statement on its website. I realize that I have published these guidelines before but this present topic merits a repetition:

Three guidelines are recommended for everyone to follow:
  1. Always Label. Provide a visible, human-readable label stating that the image was modified or generated.
  2. Always Cite. A minimal citation is recommended, noting the original source and that the image was modified or generated. For greater clarity, a more detailed, layered citation is encouraged, recording the process and specific edits.
  3. Use as Illustration, Not Evidence. Treat modified or generated images as illustrative only. Do not use them to prove identity, time, or place.
These guidelines are a practical antidote to the "Post-Truth" reality.
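As a concrete illustration of guidelines 1 and 2, here is a short Python sketch that stamps a visible "AI-modified" label onto a copy of an edited image and records a minimal citation in a JSON sidecar file. This is only my own illustrative sketch, not an official CRAIGEN.org tool; the file names, label text, and citation fields are assumptions of mine, and it relies on the Pillow imaging library.

```python
# Illustrative sketch only (not an official CRAIGEN.org tool).
# Assumes the Pillow library is installed: pip install Pillow
# File names, label text, and citation fields are hypothetical examples.
import json
from PIL import Image, ImageDraw

def label_and_cite(original_path, modified_path, description):
    """Guideline 1: add a visible, human-readable label to the modified image.
    Guideline 2: record a minimal citation in a JSON sidecar file."""
    img = Image.open(modified_path).convert("RGB")
    draw = ImageDraw.Draw(img)
    # Draw the label in the lower-left corner so a viewer can see at a glance
    # that the image was modified or generated with AI.
    draw.text((10, img.height - 20),
              "AI-modified image (illustration only)", fill="yellow")
    labeled_path = modified_path.replace(".jpg", "_labeled.jpg")
    img.save(labeled_path)

    # Minimal citation: the original source plus a note that the image
    # was modified, following guideline 3 on use as illustration only.
    citation = {
        "original_source": original_path,
        "status": "modified or generated with AI",
        "description": description,
        "use": "illustration only, not evidence of identity, time, or place",
    }
    with open(labeled_path + ".citation.json", "w") as f:
        json.dump(citation, f, indent=2)
    return labeled_path

# Example with hypothetical file names:
# label_and_cite("great_grandmother_1898_scan.jpg",
#                "great_grandmother_1898_restored.jpg",
#                "AI colorization and scratch removal")
```

Even without any software, the same idea works manually: add a visible caption to the image itself and keep a note of the original source and the edits that were made.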

The Professional Obsolescence Crisis: We are seeing a shift from "AI taking jobs" to "AI making humans obsolete." The psychological fallout, now being processed in therapy sessions, revolves around the identity crisis of workers whose roles have shifted from creation to validation, that is, managing "workslop" (low-quality, AI-generated content that requires human filtering and management). Granted, AI is creating turnover in the job market. See Amazon laying off about 16,000 corporate workers in latest anti-bureaucracy push. But from a historical perspective, people have been losing jobs to technology since the Luddite movement began in 1811. See Why did the Luddites protest? As AI technology continues to gain momentum, it will undeniably affect a significant number of jobs. See National Survey: 95% of College Faculty Fear Student Overreliance on AI and Diminished Critical Thinking Among Learners Who Use Generative AI Tools. As individuals caught up in this technological revolution, we need to begin adapting to the new challenges. The open pathway is that AI has essentially made information freely available to anyone willing to learn. See 22 Thoughts on Using AI to Learn Better.

I am already using AI to teach me how to use AI for genealogical research, and I am impressed with how quickly I can grasp new AI concepts that match and extend my own research abilities.

Weaponized and "Self-Aware" Malware: In the security sector, there is significant alarm regarding agentic AI used by threat actors. This includes malware that can "play dead" when it detects a sandbox environment or autonomously pivot its tactics in real-time to evade human defenders. This has been an ongoing battle since the development of the internet and it is fortunate that the human defenders can use AI tools to overcome the attacks. This reality doesn't just create threats; it defines a new era for cybersecurity professionals who must now pivot from static defense to managing dynamic, AI-driven security ecosystems.

The rapid evolution of artificial intelligence often feels like a loss of control, but history shows that fear is best managed through active adaptation and understanding. While the "Post-Truth" reality and shifting job markets present genuine challenges, they also offer an unprecedented "open pathway" for those willing to learn. By adopting clear standards like the CRAIGEN.org guidelines and leveraging AI to enhance our own research abilities, we move from being passive victims of change to active participants in a new, AI-driven era.

As we have seen from the Luddite movement to the modern rise of agentic AI, technology does not just replace old systems; it creates new landscapes for those ready to navigate them. The key to overcoming the Fear of AI is not to wait for the world to return to "normal," but to use these very tools to build a more secure and informed future.
