A group of major technology companies building AI products has formed a new initiative to address the growing challenge of identifying AI-generated synthetic media and “deepfakes.”
The Coalition for Content Provenance and Authenticity (C2PA) includes Adobe, Microsoft, Intel, and other tech firms. Their goal is to create an open standard that can certify the origins and provenance of online content.
The C2PA aims to develop metadata tools that can attach critical information to digital content, including whether AI was used to create or alter it. This could help content platforms and users identify synthetic or falsified media.
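The core idea of binding provenance assertions to content can be sketched in a few lines. This is purely illustrative: the actual C2PA specification defines a signed, structured manifest format, and every field name below (`content_sha256`, `ai_generated`, `creation_tool`) is a hypothetical simplification of that concept, not the real schema.

```python
# Illustrative sketch only: a simplified, hypothetical provenance record
# inspired by the C2PA idea of binding assertions to content. The real
# C2PA manifest is a cryptographically signed structure; this just shows
# the concept of tying claims (e.g. "AI was used") to a content hash.
import hashlib
import json

def build_provenance_record(content: bytes, ai_generated: bool, tool: str) -> dict:
    """Bind simple provenance assertions to a hash of the content bytes."""
    return {
        # Hashing the content ties the metadata to this exact file;
        # any alteration would change the hash and break the binding.
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "assertions": {
            "ai_generated": ai_generated,  # was AI used to create or alter it?
            "creation_tool": tool,         # hypothetical field name
        },
    }

image_bytes = b"...raw image data..."
record = build_provenance_record(image_bytes, ai_generated=True, tool="example-generator")
print(json.dumps(record, indent=2))
```

In a real deployment the record would also be signed so that platforms can verify who made the claims, which is the part the C2PA standard is designed to handle.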
The need for such tools is growing as AI techniques become more advanced, allowing for hyperrealistic fake images and videos. External pressure is also mounting on tech firms over this issue.
To begin addressing this, the C2PA has introduced a logo that can be attached to certified content. However, some experts say more visible labels may be needed for end users.
Developing robust authentication methods remains challenging, as no flawless AI detection system exists; many view voluntary efforts like the C2PA as a critical first step.
The C2PA initiative has released open-source tools that any organization can adopt. Members include media outlets, academics, nonprofits, and tech firms.
While these steps fall short of official regulation, the coalition hopes its standards will promote transparency and trust online. Self-regulation is not new to the tech industry, and the approach remains controversial. With advanced AI proliferating across industries, the effort is just beginning.
The post Microsoft, Adobe and Intel take small step against deepfakes with opt-in tool for ‘combined’ media appeared first on CryptoSlate.