How Microsoft Is Leading A War On Deepfakes

Microsoft’s new tool detects digital manipulation in real time by spotting nearly invisible imperfections.

Believe it or not, this is not former US President Barack Obama. It’s actually a video of acclaimed director/writer/actor Jordan Peele, altered with artificial intelligence software to look like the 44th President of the United States. They’re called deepfakes: digitally altered videos in which one person’s face is replaced with another’s. And they’re beginning to raise serious questions about disinformation and fake news.

It’s likely you’ve seen this technology in action before. Over the past year, several deepfake videos have gone viral online, many of which feature hilarious face swaps between well-known celebrities and other public figures; my personal favorite is the absolutely stunning Jennifer Buscemi. Of course, as with any fledgling technology, it’s only a matter of time until someone abuses its power to serve their own nefarious agenda.

That’s exactly what Microsoft is worried about in the lead-up to the monumental 2020 US election. More specifically, the company fears deepfakes will be used as a tool for spreading disinformation and fake news, with less-than-honest actors manipulating footage of prospective candidates to mislead large groups of voters. It’s a valid concern, especially considering how easy the videos are to make; some of the more expertly crafted examples are nearly indistinguishable from real life. That’s why Microsoft is striking back with some powerful technology of its own.

Earlier this week, Microsoft announced the launch of a brand-new tool capable of identifying videos that have been digitally altered using artificial intelligence. Developed as part of Microsoft’s Defending Democracy Program, Microsoft Video Authenticator analyzes videos as well as still images to detect signs of artificial manipulation, providing users with a percentage chance, or “confidence score,” based on its findings.

How exactly does it do this? Put simply, the tool scans the boundary where a subject’s face has been blended into the frame for minor imperfections, such as subtle fading and grayscale elements, some of which are invisible to the human eye. For video, it can even display this confidence percentage in real time over each frame as it plays. As deepfake technology continues to advance, the company will pursue more powerful detection methods to ensure the authenticity of future media.
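Microsoft hasn’t published Video Authenticator’s internals, so the snippet below is only a minimal, illustrative heuristic in the spirit of the boundary check described above, not the product’s actual algorithm. It finds a face with OpenCV’s stock detector, then scores how desaturated (grayscale-leaning) the blending band around the face is relative to the face interior, printing a rough per-frame score as the video plays. The input filename is a placeholder.

```python
# Illustrative only: this is NOT Microsoft's algorithm, just a sketch of
# scoring grayscale "fading" where a swapped face would be blended in.
import cv2
import numpy as np

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def frame_confidence(frame, band=8):
    """Rough 0-100 score: how desaturated is the face's blending band?"""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return 0.0
    x, y, w, h = faces[0]
    if w <= 2 * band or h <= 2 * band:
        return 0.0
    sat = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)[:, :, 1].astype(np.float32)

    # Ring of pixels straddling the face-box edge, where blending happens.
    ring = np.zeros(sat.shape, dtype=bool)
    ring[max(y - band, 0):y + h + band, max(x - band, 0):x + w + band] = True
    ring[y + band:y + h - band, x + band:x + w - band] = False

    interior = np.zeros(sat.shape, dtype=bool)
    interior[y + band:y + h - band, x + band:x + w - band] = True

    # A grayscale seam shows up as a saturation drop at the boundary.
    drop = sat[interior].mean() - sat[ring].mean()
    return float(np.clip(100.0 * drop / max(sat[interior].mean(), 1.0), 0.0, 100.0))

# Stream per-frame scores in real time, as the article describes.
cap = cv2.VideoCapture("suspect_clip.mp4")  # placeholder filename
while True:
    ok, frame = cap.read()
    if not ok:
        break
    print(f"manipulation confidence: {frame_confidence(frame):.1f}%")
cap.release()
```

A real detector would use a trained model rather than a hand-tuned saturation threshold, but the pipeline shape, per-frame analysis producing a running confidence score, is the same.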

“We expect that methods for generating synthetic media will continue to grow in sophistication,” states Microsoft in an official blog post. “As all AI detection methods have rates of failure, we have to understand and be ready to respond to deepfakes that slip through detection methods. Thus, in the longer term, we must seek stronger methods for maintaining and certifying the authenticity of news articles and other media. There are few tools today to help assure readers that the media they’re seeing online came from a trusted source and that it wasn’t altered.”

However, Microsoft isn’t alone in combating this growing epidemic of artificially manipulated content. Researchers at UC Berkeley are developing their own digital forensics method, which uses machine learning to analyze multiple videos of a subject and identify that person’s unique facial mannerisms.
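That approach lends itself to a different kind of sketch: rather than inspecting pixels, it builds a behavioral profile of a specific person from verified footage and flags clips that deviate from it. The illustration below is purely hypothetical: the per-clip “mannerism” features are random placeholders standing in for real measurements (e.g., facial action units extracted by a tool like OpenFace), and the one-class model is a stand-in, not the researchers’ published method.

```python
# Hypothetical sketch of behavioral deepfake detection: learn one
# person's normal facial mannerisms from authentic clips, then flag
# clips that don't match. Feature extraction is faked with random data.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

# Placeholder: one 20-dim "mannerism" feature vector per video clip.
authentic_clips = rng.normal(0.0, 1.0, size=(200, 20))
suspect_clip = rng.normal(3.0, 1.0, size=(1, 20))  # deliberately off-profile

# Fit only on verified footage of the subject; no fake examples needed.
model = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05)
model.fit(authentic_clips)

# +1 = consistent with the subject's known mannerisms, -1 = outlier.
label = model.predict(suspect_clip)[0]
print("possible deepfake" if label == -1 else "consistent with subject")
```

The appeal of this style of detection is that it doesn’t depend on spotting pixel-level artifacts, so it can keep working even as deepfake rendering quality improves.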

With multiple organizations diving headfirst into deepfake detection, it’s clear the fight over digitally manipulated video is only just beginning. For more information on Microsoft Video Authenticator and the company’s other global initiatives, see Microsoft’s official blog post.

Image Credit: Microsoft

Kyle Melnick, Former Writer