News Release

Hiding in plain sight

Generative AI used to replace confidential information in images with similar visuals to protect image privacy

Reports and Proceedings

University of Tokyo

Generative content replacement (GCR).


The sections of these images outlined with a red box were annotated as privacy-threatening using an open-source dataset called DIPA. GCR then used the annotated text prompts to replace those sections with visually similar, well-integrated substitutes.


Credit: 2024 A. Xu, S. Fang, H. Yang et al./ Association for Computing Machinery

Image privacy could be protected with the use of generative artificial intelligence. Researchers from Japan, China and Finland created a system that replaces parts of images that might threaten confidentiality with visually similar but AI-generated alternatives. In tests of the system, named “generative content replacement,” 60% of viewers couldn’t tell which images had been altered. The researchers intend for the system to provide a more visually cohesive option for image censoring, one that preserves the narrative of the image while protecting privacy. The research was presented at the Association for Computing Machinery’s CHI Conference on Human Factors in Computing Systems, held in Honolulu, Hawaii, in the U.S., in May 2024.

With just a few text prompts, generative AI can offer a quick fix for a tricky school essay, a new business strategy or endless meme fodder. The advent of generative AI into daily life has been swift, and the potential scale of its role and influence is still being grappled with. Fears over its impact on future job security, online safety and creative originality have led to strikes by Hollywood writers, court cases over faked photos and heated discussions about authenticity.

However, a team of researchers has proposed using a sometimes controversial feature of generative AI – its ability to manipulate images – as a way to solve privacy issues.

“We found that the existing image privacy protection techniques are not necessarily able to hide information while maintaining image aesthetics. Resulting images can sometimes appear unnatural or jarring. We considered this a demotivating factor for people who might otherwise consider applying privacy protection,” explained Associate Professor Koji Yatani from the Graduate School of Engineering at the University of Tokyo. “So, we decided to explore how we can achieve both — that is, robust privacy protection and image useability — at the same time by incorporating the latest generative AI technology.”

The researchers created a computer system, which they named generative content replacement (GCR). The tool identifies what might constitute a privacy threat and automatically replaces it with a realistic but artificially created substitute. For example, personal information on a ticket stub could be replaced with illegible letters, or a private building exchanged for a fake building or other landscape features.
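The pipeline described above can be sketched in simplified form. This is not the authors’ code: the region annotations, the helper names and the pixel-grid “image” are all illustrative stand-ins, and the inpainting step, which in the real system would call a generative model conditioned on the annotation’s text prompt, is reduced to a placeholder.

```python
# Illustrative sketch of a GCR-style pipeline (hypothetical, not the paper's code).
# An "image" is a grid of pixel values; annotations mimic DIPA-style labels.

def inpaint_replacement(image, bbox, prompt):
    """Placeholder for a generative inpainting call: in a real system this
    would ask a diffusion model to synthesize a substitute region guided
    by the annotation's text prompt. Here it just stamps the prompt string
    into the boxed region so the data flow is visible."""
    x0, y0, x1, y1 = bbox
    return [
        [prompt if (x0 <= x < x1 and y0 <= y < y1) else px
         for x, px in enumerate(row)]
        for y, row in enumerate(image)
    ]

def generative_content_replacement(image, annotations):
    """Replace every annotated privacy-threatening region with a
    generated substitute, leaving the rest of the image untouched."""
    result = image
    for ann in annotations:
        if ann["privacy_threat"]:
            result = inpaint_replacement(result, ann["bbox"], ann["prompt"])
    return result

# Toy example: a 4x4 image with one privacy-threatening region.
image = [["orig"] * 4 for _ in range(4)]
annotations = [
    {"bbox": (1, 1, 3, 3), "privacy_threat": True, "prompt": "generated"},
]
out = generative_content_replacement(image, annotations)
print(out[1][1])  # inside the box: replaced
print(out[0][0])  # outside the box: original pixel preserved
```

The key design point the sketch illustrates is that only the flagged regions are regenerated, which is why the surrounding image, and hence its story, stays intact.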

“There are a number of commonly used image protection methods, such as blurring, color filling or just removing the affected part of the image. Compared to these, our results show that generative content replacement can better maintain the story of the original images and higher visual harmony,” said Yatani. “We found that participants couldn’t detect GCR in 60% of images.” 

For now, the GCR system requires substantial computational resources, so it won’t be available on personal devices just yet. The tested system was fully automatic, but the team has since developed a new interface that lets users customize images, giving them more control over the final outcome.

Although some may be concerned about the risks of this type of realistic image alteration, where the lines between original and altered imagery become more ambiguous, the team is positive about its advantages. “For public users, we believe that the greatest benefit of this research is providing a new option for image privacy protection,” said Yatani. “GCR offers a novel method for protecting against privacy threats, while maintaining visual coherence for storytelling purposes and enabling people to more safely share their content.”

#####

Paper Title

Anran Xu, Shitao Fang, Huan Yang, Simo Hosio, and Koji Yatani. 2024. Examining Human Perception of Generative Content Replacement in Image Privacy Protection. In CHI Conference on Human Factors in Computing Systems. ACM, New York, NY, USA, 22 pages. 14 May 2024. https://dl.acm.org/doi/10.1145/3613904.3642103   

Useful Links:

Graduate School of Engineering: https://www.t.u-tokyo.ac.jp/en/soe 

Interactive Intelligent Systems Laboratory: https://iis-lab.org/ 

Funding:

This research is part of the results of Microsoft Research Asia CORE-D program as well as Value Exchange Engineering, a joint research project between R4D, Mercari Inc., and the RIISE.

Competing interests

None.

Research Contact:

Associate Professor Koji Yatani

Department of Electrical Engineering and Information Systems

Graduate School of Engineering

The University of Tokyo, 7-3-1 Hongo,

Bunkyo-ku, Tokyo, 113-8656, Japan

Email: koji@iis-lab.org

Press contact:
Mrs. Nicola Burghall (she/her)
Public Relations Group, The University of Tokyo,
7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8654, Japan
press-releases.adm@gs.mail.u-tokyo.ac.jp

About the University of Tokyo

The University of Tokyo is Japan’s leading university and one of the world’s top research universities. The vast research output of some 6,000 researchers is published in the world’s top journals across the arts and sciences. Our vibrant student body of around 15,000 undergraduate and 15,000 graduate students includes over 4,000 international students. Find out more at www.u-tokyo.ac.jp/en/ or follow us on X at @UTokyo_News_en.


Disclaimer: AAAS and EurekAlert! are not responsible for the accuracy of news releases posted to EurekAlert! by contributing institutions or for the use of any information through the EurekAlert system.