That’s as concerning as it is unsurprising to anyone who’s been alive and on the internet lately. But it’s of particular concern to the military and intelligence communities. And that’s part of why Lyu’s research, along with that of others, is funded by a Darpa program called MediFor, short for Media Forensics.
MediFor started in 2016 when the agency saw the fakery game leveling up. The project aims to create an automated system that looks at three levels of tells, fuses them, and comes up with an “integrity score” for an image or video. The first level involves searching for dirty digital fingerprints, like noise that's characteristic of a particular camera model, or compression artifacts. The second level is physical: Maybe the lighting on someone's face is wrong, or a reflection isn't the way it should be given where the lamp is. Lastly, they get down to the “semantic level”: comparing the media to things they know are true. So if, say, a video of a soccer game claims to come from Central Park at 2 pm on Tuesday, October 9, 2018, does the state of the sky match the archival weather report? Stack all those levels, and voila: integrity score. By the end of MediFor, Darpa hopes to have prototype systems it can test at scale.
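The article doesn't say how MediFor fuses the three levels into a single number, so here is a minimal sketch of the idea only, assuming each level produces a score in [0, 1] (1 meaning consistent with authentic media) and that fusion is a simple weighted average. The function name, weights, and scores are all illustrative assumptions, not Darpa's actual method.

```python
def integrity_score(digital, physical, semantic, weights=(0.4, 0.3, 0.3)):
    """Fuse three hypothetical per-level scores (digital fingerprints,
    physical consistency, semantic consistency), each in [0, 1],
    into one overall integrity score via a weighted average."""
    levels = (digital, physical, semantic)
    if not all(0.0 <= s <= 1.0 for s in levels):
        raise ValueError("each level score must lie in [0, 1]")
    return sum(w * s for w, s in zip(weights, levels))

# Example: clean sensor noise and plausible lighting, but the sky in the
# video contradicts the archival weather report for the claimed time and
# place, so the semantic level drags the overall score down.
score = integrity_score(digital=0.9, physical=0.8, semantic=0.1)
```

With these made-up numbers the semantic mismatch alone pulls the fused score well below the individual digital and physical scores, which is the point of stacking the levels rather than trusting any one of them.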
But the clock is ticking (or is that just a repetitive sound generated by an AI trained on timekeeping data?). “What you might see in a few years’ time is things like fabrication of events,” says Darpa program manager Matt Turek. “Not just a single image or video that’s manipulated but a set of images or videos that are trying to convey a consistent message.”
Over at Los Alamos National Lab, cyber scientist Juston Moore’s visions of potential futures are a little more vivid. Like this one: Tell an algorithm...
(Excerpt) Read more at: WIRED