
DARPA to Drive Deepfake Defense Technology’s Transition to Commercial Use

The Defense Advanced Research Projects Agency is launching two new initiatives to build on the momentum of its Semantic Forensics program and its predecessor, Media Forensics, in defending against deepfakes and other manipulated media.

According to DARPA, the SemaFor program, now in its final phase, has produced numerous analytics and methods that can help protect organizations and individuals against various deepfake threats. The program's advances have reduced developmental risks, and the commercial and academic sectors can now build on them for commercialization, DARPA said Thursday.

To drive the transition of SemaFor technologies to commercial use, the agency will issue an analytic catalog listing the open-source resources developed under the program that researchers and industry can access. The catalog will be updated as additional capabilities are developed and matured.

DARPA will also hold the AI Forensics Open Research Challenge Evaluation, or AI FORCE, an open community research program to develop machine learning models that can identify synthetic, artificial intelligence-generated images.

AI FORCE will consist of a series of challenges in which participants develop and demonstrate models that can differentiate between authentic and AI-manipulated images. The first challenge is scheduled to launch on Monday, with a link to be posted on the SemaFor program page.
