The proliferation of images generated by artificial intelligence (AI) is problematic in multiple ways. AI models have faced allegations of being trained on stolen art, and their exorbitant water use and alarming carbon footprint raise concerns of their own. There is also the threat – both political and otherwise – of increased misinformation, with fake images created with propaganda (or other nefarious ends) in mind. But even innocuous images have the power to spread nonsense.
AI-generated images have been seen at the top of the image search results of large search engines. Google has stated that in the coming months, it will note in an image's Content Credentials when that image was AI-generated or AI-modified. This only applies to images that carry metadata from the Coalition for Content Provenance and Authenticity (C2PA) – there is currently no announced plan for how Google will deal with AI images that don't use the C2PA standard.
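For the technically inclined, the presence of C2PA data can be checked directly, because it lives inside the image file itself (in JPEGs, inside APP11 marker segments as a JUMBF box labelled "c2pa"). Below is a minimal Python sketch of that check – a crude heuristic rather than a spec-compliant C2PA parser, with a placeholder file name:

```python
"""Crude check for C2PA provenance metadata in a JPEG.

Illustrative heuristic only: it walks the JPEG's marker segments and
looks for APP11 (0xFFEB) payloads containing the JUMBF 'c2pa' label,
which is where C2PA manifests are embedded in JPEG files.
"""
import struct


def has_c2pa_manifest(path: str) -> bool:
    with open(path, "rb") as f:
        data = f.read()
    if data[:2] != b"\xff\xd8":        # not a JPEG (no SOI marker)
        return False
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:            # lost marker sync (e.g. after SOS)
            break
        marker = data[i + 1]
        if marker == 0xFF:             # fill byte, skip it
            i += 1
            continue
        if marker == 0xD9:             # EOI: end of image
            break
        if 0xD0 <= marker <= 0xD7 or marker == 0x01:
            i += 2                     # standalone markers carry no length
            continue
        (seg_len,) = struct.unpack(">H", data[i + 2:i + 4])
        segment = data[i + 4:i + 2 + seg_len]
        # APP11 (0xFFEB) carries JUMBF boxes; C2PA manifests are
        # labelled 'c2pa' inside the superbox.
        if marker == 0xEB and b"c2pa" in segment:
            return True
        i += 2 + seg_len
    return False


if __name__ == "__main__":
    # "photo.jpg" is a placeholder path, not a file from this article.
    print(has_c2pa_manifest("photo.jpg"))
```

Note that this only detects whether a manifest is present; actually validating who signed it requires a full C2PA toolchain.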
Search engines surfacing AI-generated results is not good – something recently highlighted by social media users, who have been pointing out many examples of misleading search results. A particularly concerning one is image searches for a “baby peacock”. On Bing, one of the first image search results at the time of writing is an AI-generated stock image. On Google Images, at the time of writing, this and other AI-generated images also appear, although some are linked to articles that debunk the fake images.
The “baby peacock” – sporting Disney-like doe eyes, blue feathers, and some eldritch horror going on with the feet – makes for a pretty striking visual. It is also very wrong: real peachicks are generally brown, with ordinary eyes and ordinary feet.
Peacocks are not an isolated case. On the Google Images search page for “galaxy” at the time of writing, there are real images, AI-generated images that are labeled as such if you go to their source, and images that are clearly not genuine but carry no AI label at all.
How to spot an AI-generated image
Many AI-generated images can be spotted easily. The more sophisticated ones often share the same limitations, but you have to spend a little more time to find the mistakes that would be obvious in cruder examples.
Eyes, limbs, and other oddities
Looking for errors is always a good starting point. Take the peacock: the eyes might be cute (albeit fake), but the legs are clearly off. Fingers and limbs often seem hard for AI to reproduce accurately. In fact, eyes can be used to check even very realistic fake images of humans – in a genuine photograph, both eyes reflect the same light sources (dubbed the “stars in their eyes”), a consistency that is very difficult for AI to reproduce. Thank you, physics!
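One way to make that check concrete, inspired by detection research that borrows statistics from astronomy, is to compare how concentrated the bright reflections are in each eye, for example with the Gini coefficient: a real photo should give similar values for both eyes. The Python sketch below assumes you have located the two eye regions by hand; the file name, crop boxes, and threshold are all illustrative placeholders.

```python
"""Toy consistency check for eye reflections in a portrait.

A real photograph lights both eyes with the same sources, so a
brightness-concentration statistic (here the Gini coefficient)
computed on each eye crop should come out similar for both eyes.
"""
import numpy as np
from PIL import Image


def gini(values: np.ndarray) -> float:
    """Gini coefficient of non-negative pixel intensities (0 = uniform)."""
    v = np.sort(values.astype(np.float64).ravel())
    n = v.size
    total = v.sum()
    if total == 0:
        return 0.0
    index = np.arange(1, n + 1)
    return float((2.0 * (index * v).sum()) / (n * total) - (n + 1) / n)


def eye_mismatch(path: str, left_box: tuple, right_box: tuple) -> float:
    """Absolute Gini difference between two manually chosen eye crops."""
    img = Image.open(path).convert("L")   # work on grayscale brightness
    left = np.asarray(img.crop(left_box))
    right = np.asarray(img.crop(right_box))
    return abs(gini(left) - gini(right))


if __name__ == "__main__":
    # Placeholder path and (left, upper, right, lower) crop boxes.
    score = eye_mismatch("portrait.jpg", (120, 140, 160, 170), (200, 140, 240, 170))
    print(f"eye reflection mismatch: {score:.3f}")
    # Arbitrary illustrative threshold; real detectors are calibrated.
    print("suspicious" if score > 0.15 else "consistent")
```

A large mismatch is a hint, not proof: lighting, pose, or image quality can also throw the numbers off.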
Non-bodily errors
Errors might be more subtle: weird colors; textures that change where they shouldn’t; shadow, architectural, and lighting issues; things that a human artist would not usually do. Often there are objects, people, or other small details that are out of place or shouldn’t be there. Sometimes, the image looks too perfect – do the subjects look like they have been airbrushed within an inch of their lives?
Also pay attention to background objects, which might be rendered less faithfully, as well as any text, which might be nonsensical. One famous example that became an internet meme was Willy's Chocolate Experience, whose AI-generated posters invited people to a “pasadise of sweet teats”.
Can you source it?
Some images have watermarks (like the baby peacock), which should make them easy to check, and it is important to verify the credit. Try to find where the image came from and go directly to the original source.
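If you do turn up a candidate original, a perceptual hash can tell you whether two files show essentially the same picture, even after resizing or recompression. A minimal sketch, assuming the third-party Pillow and imagehash Python packages, with placeholder file names:

```python
"""Compare a suspect image against a claimed original.

Perceptual hashes stay stable under resizing and recompression, so a
small Hamming distance suggests the two files show the same picture.
"""
import imagehash
from PIL import Image

# Placeholder file names for the image found in search results and
# the candidate original you tracked down.
suspect = imagehash.phash(Image.open("search_result.jpg"))
candidate = imagehash.phash(Image.open("claimed_original.jpg"))

# Subtracting two hashes gives the Hamming distance between them.
distance = suspect - candidate
print(f"hash distance: {distance}")
# Rough rule of thumb: a small distance (say, under 10 of 64 bits)
# indicates the same underlying image; the cutoff is illustrative.
print("likely the same image" if distance <= 10 else "probably different images")
```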
Ultimately, a cardinal rule of research is to look at different sources and see whether they agree. Getting to a truthful answer might not be easy with all the garbage out there, but at least you'll be able to spot the fakes.