Google is cracking down on deepfakes, fake images created or altered with artificial intelligence that threaten accurate information and harm the people depicted. In an official post, the company announced that it will filter search results to reduce the likelihood that duplicates of these images resurface online, and that it will update how such content is ranked.
Users have long been able to request the removal of such images under Google’s policies. What is new is that the search giant will also scan for other websites hosting the same image and automatically remove those copies from its results. This update should help curb the sharing of deepfakes and their spread across the web. “These protections have already proven effective in combating other types of non-consensual imagery,” Google states, “and we have now built the same capabilities for explicit fake images.”
Google is also updating its ranking systems for searches that carry a higher risk of surfacing this fake content. When a user’s query clearly seeks this type of material, the system will aim to prioritize high-quality, non-explicit content, such as relevant news articles, when available.
“The updates we have made this year have reduced exposure to explicit image results by over 70%,” emphasizes Google’s Search lead, Emma Higham. “There is also a need to distinguish explicit content that is real and consensual (such as an actor’s nude scenes) from deepfakes. While this distinction poses a technical challenge for search engines, we are continuously improving so that we better surface legitimate content and demote fakes.”