
Friday, October 20, 2023

Artificial Intelligence: Is the cat out of the bag?

 

[Image: Wikimedia Commons, by PawełMM - Own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=76953571]

The full question here includes whether the bag was already empty. There seems to be an invalid assumption that the current developments in artificial intelligence are somehow revolutionary rather than evolutionary. Anyone who understands the history of computers and programming over the past hundred years or so would not be surprised by the current developments or unduly concerned. Yes, there are concerns, but these concerns are not "new." They merely repeat what has been said about industrialization since it began in the early 1800s. Here is a short history of the basis for the evolutionary developments in AI. The problem lies with the people who use the information.

The ability of artificial intelligence to create text and images is based on large language models (LLMs). Here is a short summary of the history of LLMs from Bing ("A Summary of the History of Large Language Models," retrieved from Bing on October 19, 2023). Note the list of sources provided by the search.

Large language models (LLMs) are neural networks that can process and generate natural language using massive amounts of data and computational resources. They have evolved from the early attempts to create rule-based systems that could mimic human conversation, such as Eliza in the 1960s, to the modern models that can perform a wide range of tasks, such as GPT-4 and Google Bard. 

The development of LLMs has been driven by several key innovations in natural language processing (NLP), such as:

  • The introduction of Long Short-Term Memory (LSTM) networks in 1997, which enabled deeper and more complex neural networks that could handle longer sequences of data.
  • The creation of Stanford’s CoreNLP suite in 2010, which provided a set of tools and algorithms for various NLP tasks, such as sentiment analysis and named entity recognition.
  • The emergence of Transformer architectures in 2017, which improved the efficiency and performance of neural networks by using attention mechanisms to focus on relevant parts of the input and output.
  • The adoption of self-supervised and semi-supervised learning methods, which allowed LLMs to learn from large amounts of unlabeled text data without requiring human annotations.

LLMs have demonstrated remarkable abilities to understand and generate human-like text, as well as to acquire general knowledge and facts from the data they are trained on. However, they also face several challenges and limitations, such as:

  • The ethical and social implications of using LLMs for various purposes, such as misinformation, bias, privacy, and accountability.
  • The scalability and efficiency issues of training and deploying LLMs, which require huge amounts of data, parameters, and computational power.
  • The generalization and robustness problems of LLMs, which may fail to handle novel or out-of-distribution inputs, or to explain their reasoning and decisions.

LLMs are one of the most impressive and influential achievements in AI technology, but they are also constantly evolving and improving. The future of LLMs may involve new architectures, algorithms, datasets, applications, and challenges that will push the boundaries of natural language understanding and generation.
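
For readers curious about the "attention mechanisms" the summary mentions, here is a minimal illustrative sketch in Python (using the NumPy library) of scaled dot-product attention, the core operation of the Transformer architecture. It is a toy example for intuition only, not code from any actual LLM:

import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Score how relevant each key is to each query, scaled by vector size.
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    # Softmax turns the scores into weights that sum to 1 for each query.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    # Each output is a weighted blend of the value vectors.
    return weights @ V

# Three toy "tokens," each represented by a 4-dimensional vector.
rng = np.random.default_rng(0)
tokens = rng.normal(size=(3, 4))
output = scaled_dot_product_attention(tokens, tokens, tokens)
print(output.shape)  # (3, 4): one context-aware vector per token

In other words, each token's new representation becomes a weighted mixture of every token's representation, with the weights determined by relevance. That is how these models "focus on relevant parts of the input," as the summary puts it.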

Here is the list of sources cited for the summary of the history of large language models:

  • Naveed, H., Khan, A. U., Qiu, S., Saqib, M., Anwar, S., Usman, M., Akhtar, N., Barnes, N., & Mian, A. (2023). A Comprehensive Overview of Large Language Models. arXiv preprint arXiv:2307.06435. A review article covering a broad range of topics related to LLMs, such as architectures, datasets, benchmarks, efficiency, and challenges.
  • Scribble Data. (2023). Large Language Models 101: History, Evolution and Future. Retrieved from Scribble Data on October 19, 2023.
  • Snorkel AI. (2023). Large language models: their history, capabilities and limitations. Retrieved from Snorkel AI on October 19, 2023.
  • Zhang, Y., & Liang, P. (2023). Studying Large Language Model Generalization with Randomized Training Data. arXiv preprint arXiv:2308.03296.

None of these developments could have happened without the prior development of extremely fast computers, huge memory storage capacities, and the internet. Which came first, artificial intelligence or computers? The concept of AI grew out of the earlier concept of thinking machines. The earliest idea of a "thinking machine" came in the 1830s, when British mathematician Charles Babbage envisioned what he called the Analytical Engine. Viewed in the context of history, AI as it exists today was inevitable.

What does all this mean? Essentially, the current notoriety of AI rests on developments that started more than a hundred years ago. The current handwringing and predictions about the end of the world have been going on since before Karel Čapek's play R.U.R., which introduced the word robot in 1921, and can be glimpsed in Mary Shelley's Frankenstein (published in 1818). See Wikipedia: AI Takeover.

What will happen to genealogy as soon as one genealogy company works out the details of using AI to analyze the information in its databases and family trees? You can see a glimmer of what is coming in the record hints and suggested parents that appear when you add a new ancestral line to an Ancestry.com family tree. With the constant and accelerating development of AI programs, it is certain that how we do genealogy today will be different tomorrow.
