Deepfakes can be used to fabricate harmful political and social remarks. (Unsplash, CC0)
In the past decade, the shift from hard-copy to online news has made spreading misinformation effortless, and this problem only gets worse as technology continues to advance. Deepfakes, the result of video editing supported by artificial intelligence, are the newest addition to the growing list of fabricated news.
Deepfakes are computer-generated pictures or videos that depict someone doing something they never actually did. After the program is supplied with enough audio and visuals of the person it is imitating, artificial intelligence synthesizes the video by superimposing the generated images and audio onto the source video.
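The superimposition step can be pictured with a toy sketch. Real deepfake tools use neural networks to generate the face; the code below only illustrates the final compositing idea, alpha-blending a "generated" patch onto a frame of grayscale pixel values. All names and data here are hypothetical.

```python
# Toy illustration of the superimposition behind deepfakes.
# Real tools synthesize the face with neural networks; this sketch just
# alpha-blends a pre-made "generated face" patch onto a source frame.

def blend_patch(frame, patch, top, left, alpha=0.8):
    """Composite `patch` onto `frame` at (top, left) with opacity `alpha`."""
    out = [row[:] for row in frame]  # copy so the source frame is untouched
    for i, patch_row in enumerate(patch):
        for j, patch_pixel in enumerate(patch_row):
            original = out[top + i][left + j]
            out[top + i][left + j] = round(alpha * patch_pixel + (1 - alpha) * original)
    return out

# A 4x4 "source frame" of dark pixels and a 2x2 bright "generated face" patch.
frame = [[10] * 4 for _ in range(4)]
face_patch = [[200, 200], [200, 200]]

faked = blend_patch(frame, face_patch, top=1, left=1)
# faked[1][1] is now round(0.8 * 200 + 0.2 * 10) = 162; pixels outside
# the patch keep their original value of 10.
```

A real pipeline would repeat this per frame, with the patch regenerated each time to match the source's pose and lighting.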
The concept of videos presenting actions that never really happened is alarming. Deepfakes of politicians delivering controversial remarks could sway the outcome of large-scale elections, especially if they spread unchecked on social media.
Even if the power of deepfakes isn’t readily abused, the mere presence of fake videos weakens the credibility of both social media and mainstream media. Today, when mainstream media is under fire for being, as Donald Trump labels it, “fake news” and “the enemy of the people,” deepfakes only contribute to the discrediting of the press.
What’s even more startling is the ease of access to the artificial intelligence that makes deepfakes. A desktop app called FakeApp, paired with a myriad of online tutorials teaching users how to operate it, allows anyone to make deepfakes supporting their political motives. Sooner or later, people won’t be able to tell the difference between legitimate news and deepfakes, and little online will be believable.
Not all hope is lost, however. Although deepfakes can be misused, deepfake-creating software shows promise. As computing capacity increases annually, it’s conceivable that international affairs conducted through video calls could soon no longer be limited by language barriers. As the software grows more powerful, computers may be able to insert both translated audio and altered facial movements in real time, so both parties appear to be speaking the same language.
Furthermore, both the government and individual tech companies are examining the programming behind deepfakes and searching for flaws in the technology. Building on these defects, researchers are developing detection software that checks for characteristics that might indicate a fabricated video. For example, irregular differences in pulse signals across the face, or a lack of blinking, suggest that the video might be doctored.
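The blinking heuristic can be sketched in a few lines. Real detectors estimate eye openness from facial landmarks in each frame; this toy version takes those per-frame openness values as given and flags clips whose blink rate is implausibly low. The thresholds and data are hypothetical.

```python
# Toy sketch of a blink-frequency check, one heuristic that has been
# used against deepfakes. Real detectors compute eye openness from
# facial landmarks; here the per-frame values (0 = closed, 1 = open)
# are supplied directly, and all thresholds are illustrative.

def count_blinks(openness, closed_threshold=0.2):
    """Count open-to-closed transitions in a series of eye-openness values."""
    blinks = 0
    was_closed = False
    for value in openness:
        is_closed = value < closed_threshold
        if is_closed and not was_closed:
            blinks += 1
        was_closed = is_closed
    return blinks

def looks_suspicious(openness, fps, min_blinks_per_minute=5):
    """Flag a clip whose blink rate is implausibly low for a real person."""
    minutes = len(openness) / fps / 60
    blink_rate = count_blinks(openness) / minutes
    return blink_rate < min_blinks_per_minute

# A 10-second clip at 3 frames per second containing a single blink.
clip_with_blink = [1.0] * 10 + [0.1, 0.05, 0.1] + [1.0] * 17
clip_no_blinks = [1.0] * 30  # eyes never close: a deepfake tell
```

Checks like this are brittle on their own, which is why detection research combines many such signals rather than relying on any single one.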
Trying to fully suppress deepfakes is an effort that will likely take years and may not succeed in the end. After all, as computing power increases and apps like FakeApp improve, the videos may become so realistic that they are impossible to distinguish from real footage.
As the conflict over deepfakes persists, how cautious people will be of spoofed videos as they browse the web remains an open question.