News Release

Generative artificial intelligence: a historical perspective

Peer-Reviewed Publication

Science China Press

Timeline of the development of Generative AI methods and applications.

The timeline shows representative Generative AI technologies that were developed and refined during different historical periods. The color of each bullet point indicates the specific stage in the development of Generative AI.

Credit: ©Science China Press

This study is led by Dr. Ran He (Institute of Automation, Chinese Academy of Sciences). His research team has conducted a comprehensive review of the development of Generative Artificial Intelligence (Generative AI) over the past half-century. Their work systematically traces the evolution of Generative AI, identifying key milestones such as the rise of deep learning, transformer architectures, and foundation models. To provide a structured understanding, they organized the development of Generative AI into four distinct stages:

  1. Rule-based Generative Systems: They highlighted that early methods for autonomous content generation emerged in the 1950s. These systems were built on predefined rules crafted by human experts and achieved notable success in specific tasks, particularly through expert systems.
  2. Model-based Generative Algorithms: They summarized the development of generative algorithms based on statistical or physical models, which expanded Generative AI to include fields such as machine learning, neural networks, computer graphics, and computer vision. Practical applications like computer animation became reliable tools, significantly reducing the need for manual content creation.
  3. Deep Generative Methodologies: With advances in computational power and data availability, deep generative models demonstrated exceptional capabilities. They summarized technologies such as autoregressive and diffusion-based models, which have become foundational for numerous applications.
  4. Foundation Models: They emphasized that foundation models, such as GPT and DeepSeek, now stand at the forefront of Generative AI development. These models, characterized by their massive scale in both model size and training data, offer unparalleled advantages, including high-quality content generation, natural interaction, and versatility across tasks. Foundation models have become the cornerstone of various applications, driving innovation across multiple industries.

They also compiled a representative timeline illustrating the development trajectory of Generative AI methods and applications (see the timeline figure above). Their work delves into representative approaches, evaluates the strengths and limitations of different generative technologies, and highlights successful applications across a range of fields. Additionally, they identify open challenges, emphasizing that issues such as safety and the need for breakthroughs in theoretical paradigms urgently require further attention and development.

 

See the article:

Generative Artificial Intelligence: A Historical Perspective

https://doi.org/10.1093/nsr/nwaf050


Disclaimer: AAAS and EurekAlert! are not responsible for the accuracy of news releases posted to EurekAlert! by contributing institutions or for the use of any information through the EurekAlert system.