Tsukuba, Japan—Art has emerged as a significant investment asset. This has led to growing interest in art price prediction as a tool for assessing potential returns and risks. However, organizing and annotating the data required for price prediction is challenging due to the substantial human costs and time involved. To address this, researchers applied a technique known as "zero-shot classification," which leverages a large language model (LLM) to classify data without the need for pre-prepared training data.
The research team explored the feasibility of automatically determining artwork types, such as paintings, prints, sculptures, and photographs, by quantizing the open LLM "Llama-3 70B" to a 4-bit format. The results confirmed that the model classified artwork types with an accuracy exceeding 90%, and it achieved slightly higher accuracy than OpenAI's GPT-4o generative AI.
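As a rough illustration of the approach, the sketch below shows zero-shot artwork-type classification with a 4-bit-quantized open LLM using the Hugging Face transformers and bitsandbytes libraries. The model ID, label set, and prompt wording are illustrative assumptions, not the authors' exact setup.

```python
# Minimal sketch: zero-shot artwork-type classification with a 4-bit-quantized LLM.
# Assumes transformers + bitsandbytes; prompt and labels are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

MODEL_ID = "meta-llama/Meta-Llama-3-70B-Instruct"  # open model, loaded in 4-bit
LABELS = ["painting", "print", "sculpture", "photograph"]

quant_config = BitsAndBytesConfig(load_in_4bit=True)  # 4-bit quantization
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, quantization_config=quant_config, device_map="auto"
)

def classify_artwork(description: str) -> str:
    """Ask the LLM to pick one label for an artwork description (zero-shot)."""
    messages = [
        {"role": "system",
         "content": f"Classify the artwork into one of: {', '.join(LABELS)}. "
                    "Answer with the label only."},
        {"role": "user", "content": description},
    ]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output = model.generate(inputs, max_new_tokens=8, do_sample=False)
    answer = tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True)
    return answer.strip().lower()

# Example: classify_artwork("Bronze figure of a dancer, cast in 1923, 45 cm tall")
# would be expected to return "sculpture".
```

Because no labeled training examples are supplied, the entire "annotation" step reduces to writing the label list and prompt, which is what removes most of the manual data-preparation effort.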
This approach achieves performance comparable to conventional machine learning methods while markedly reducing the human effort and time required to organize data. These results could make art analysis and price evaluation more accessible, expanding opportunities not only for investment but also for research and appreciation.
Original Paper
Title of original paper:
Zero-Shot Classification of Art With Large Language Models
Journal:
IEEE Access
DOI:
10.1109/ACCESS.2025.3532995
Correspondence
Associate Professor YOSHIDA, Mitsuo
Institute of Business Sciences, University of Tsukuba
TOJIMA, Tatsuya
Degree Programs in Systems and Information Engineering, University of Tsukuba
Related Link
Institute of Business Sciences
Master's / Doctoral Program in Risk and Resilience Engineering
Article Publication Date
23-Jan-2025