
Samsung's Galaxy S24 Phone Will Flag Its AI-Generated Photos

In the era of artificial intelligence, more transparency about what we're doing to our photos could help maintain trust.

Stephen Shankland, Former Principal Writer

Samsung's Galaxy S24 Ultra will label photos that have been modified by AI.

John Kim/CNET

Among the AI features in Samsung's new Galaxy S24 phones is a version of Google's generative AI technology for editing photos. But when you use it, Samsung wants other folks to know.

Samsung's software will add a quartet of little stars to the lower left corner of the photo when you use generative AI for actions like repositioning people in a shot, expanding a scene or deleting distractions in the background. In addition, Samsung will embed invisible text called metadata into the photo file declaring that the photo has been modified by AI, said Hamid Sheikh, Samsung's vice president of intelligent imaging.

"We understand that with new possibilities for technology, there can also be concerns," Sheikh said during the Samsung Galaxy S24 launch event. "That's why with every generated image, we will be adding a watermark to the image and the label in the metadata."

The move is a notable response to worries that AI could strip the truth out of our photos, undermining the trust we have in our visual communications. Watermarks and metadata can help provide some transparency in a world of deepfakes and disinformation.

It's not yet clear how noticeable Samsung's moves will be beyond its own phones and photo gallery software. "Since this is a relatively new development without a standardized format, it is currently incompatible on third party devices," the company said in a statement Thursday. "We are always looking at opportunities for future compatibility."

Samsung isn't the only one working on AI transparency measures. Google uses metadata to label its AI-generated images. Adobe uses a more elaborate approach called Content Credentials that's designed to detail who made AI-related changes and with what editing tools. It's backed by camera makers including Sony and Nikon.

Samsung didn't comment on why it used its own approach.


Samsung Galaxy S24 phones will add a group of stars to the lower left of an image if you use generative AI to edit photos on the new phones. This is a cropped portion of a Samsung demonstration image; the stars are a relatively small element in the overall photo.

Samsung; screenshot by Stephen Shankland

Modern AI on phones means powerful image editing that sidesteps the difficulties of painstaking photo manipulation in tools like Photoshop. Samsung demonstrated it by elevating a jumping basketball player toward the basketball hoop and expanding the scene for a better composition. Those are pretty dramatic changes to the original shot.

It's possible that people could strip out Samsung's metadata and watermarks, for example by expanding a frame with AI then cropping it back to get rid of a watermark. But defaults matter, especially for mainstream use, so it's likely that the approach would indeed help alert people that generative AI was used to change a photo.

And even if deceptive people strip out those labels to fool you with fake photos, metadata and watermarks can still play a role for everyone else who wants to provide some transparency.