We have all heard of tools like Photoshop that allow people to create fake images or videos. If the creator is skilled, you may not be able to tell the difference between the fake and the real thing. Across the web, this is known as a deepfake: synthetic media in which a person in existing footage is replaced with someone else's likeness. While the concept is not new, AI and machine learning techniques have drastically increased people's ability to make convincing deepfake media. Deepfakes have become notorious for their use in celebrity porn videos, revenge porn, fake news, and financial fraud. They are also useful for propaganda and can easily fuel the creation and spread of misinformation across the internet, particularly on social media. The ability to sway public opinion with videos of high-profile celebrities or politicians appearing to say or do things they never did is very powerful, and it is something we need to guard against. Fortunately, there is a counter: deepfake detection, an AI-driven approach that can help identify fake media. One widely circulated example features Barack Obama.
What is deepfake detection?
Deepfake detection is the natural counter to deepfakes. It leverages AI and machine learning to identify fake media, and it can reportedly trace a fake's origin by reverse-engineering the media itself. Facebook, in partnership with Michigan State University, is one of the leaders in deepfake detection, and they assert that they have working samples of this technology. "Our method will facilitate deepfake detection and tracing in real-world settings, where the deepfake image itself is often the only information detectors have to work with," said Facebook research scientists Xi Yin and Tal Hassner.
Facebook's new software runs deepfake images through its network. Their AI program looks for the telltale traces that the generation process leaves behind in an image's digital "fingerprint."
"In digital photography, fingerprints are used to identify the digital camera used to produce an image," the researchers explained. Those fingerprints are also unique patterns "that can equally be used to identify the generative model that the image came from."
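To make the fingerprint idea concrete, here is a minimal, illustrative sketch of how a detector might compare an image's high-frequency noise residual against the known fingerprint of a generative model. This is not Facebook's actual method: the box-blur denoiser, the 64x64 sizes, and the `fingerprint_similarity` helper are all simplifying assumptions chosen for readability.

```python
import numpy as np

def noise_residual(image, kernel=3):
    """Estimate the high-frequency noise residual of a grayscale image
    by subtracting a simple box-blurred copy. Real fingerprint
    extraction uses far more sophisticated denoising filters; the box
    blur is a stand-in for illustration."""
    pad = kernel // 2
    padded = np.pad(image.astype(float), pad, mode="edge")
    smoothed = np.zeros_like(image, dtype=float)
    h, w = image.shape
    # Sum each pixel's kernel x kernel neighborhood, then average.
    for dy in range(kernel):
        for dx in range(kernel):
            smoothed += padded[dy:dy + h, dx:dx + w]
    smoothed /= kernel * kernel
    return image.astype(float) - smoothed

def fingerprint_similarity(residual, reference):
    """Normalized cross-correlation between a residual and a known
    fingerprint pattern; values near 1.0 suggest the image came from
    the same source model, values near 0.0 suggest it did not."""
    a = residual - residual.mean()
    b = reference - reference.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0
```

In practice a detector would hold a library of reference fingerprints, compute the residual of a suspect image once, and score it against each fingerprint, flagging the best match above some threshold as the likely source model.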
Microsoft is another company that has revealed a deepfake detection tool for identifying this false media. The catch is that even if a detection tool works well today, the technology that creates deepfakes keeps evolving, so detection tools can quickly become outdated.
What does this mean for the everyday person?
This is another example of why we should be wary of what we read on the internet. It is easier than ever for people to create fake media that looks extremely realistic, and we need to be careful to avoid being deceived, especially on important issues. Take Covid-19, for example, and the tension between people who are for and against vaccines. It would be easy for someone to take an old video of a politician and create false media about lockdowns, the safety of the vaccine, future plans, and so on. We need to be extra careful about which sources we consume and allow to influence our decisions.
How to get more free content
If you liked this article and would like more of our cybersecurity insights, tips, and tricks, feel free to follow us on social media. If you're a business owner struggling to assess your company's cybersecurity posture, take advantage of our free introductory assessment and we'll help you figure out a game plan for keeping your company safe.