President Barack Obama sits in front of the American flag: “We are entering an era in which our enemies can make it look like anyone is saying anything at any point in time,” he warns. Although he uses familiar expressions and hand gestures, there is something strange about the video. Obama’s face looks slightly off, and his voice sounds flat and forced. It is difficult to pinpoint exactly what is wrong, and the video only gets stranger from there. Obama references Black Panther and Get Out and, in an unbelievable move, calls President Trump a “total and complete dipshit.” None of this is real, and it is not meant to be taken as real. At the thirty-six-second mark, the screen splits, and it becomes evident that Oscar-winning filmmaker and comedian Jordan Peele is behind the fake. Although the man on screen looks exactly like him, Obama is not speaking. Instead, Peele used artificial intelligence to manipulate existing footage of Obama, combined with audio-manipulation technology, to create a strikingly realistic video of Obama saying and doing things he never said or did.

Broadly speaking, a “deepfake” is a hyper-realistic video that has been digitally altered to depict an event or events that never occurred. In the last two years, fake content has fueled the virality of prejudiced inaccuracy to such an extent that it has directly contributed to everything from disease outbreaks in Europe and the United States, through market manipulation in cryptocurrencies, to the rise of the alt-right, the mainstreaming of conspiracy theories, and Coronavirus hoaxes. Deepfakes have advanced to the point where they are nearly indistinguishable from authentic videos, and because the technology behind them is a mix of artificial intelligence and machine learning, it will only continue to advance. As more Internet users learn how to harness deepfake technology, these videos will become more widespread and begin to creep into the public consciousness.
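
To make the definition concrete: the face-swap technique popularized by the original “deepfakes” tools pairs one shared encoder with a separate decoder per identity. The PyTorch sketch below shows the core idea only; the layer sizes and the 64x64 crop size are illustrative assumptions, not the recipe of any specific tool.

```python
# Minimal sketch of the classic face-swap autoencoder behind many deepfakes:
# a shared encoder learns identity-agnostic face structure, while a separate
# decoder per identity learns to reconstruct that person's appearance.
# All layer sizes are illustrative assumptions, not a production recipe.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32 -> 64
            nn.Sigmoid(),
        )
    def forward(self, z):
        return self.net(z)

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per identity

# Training: each decoder learns to reconstruct its own person's faces
# from the shared latent space (faces_a stands in for aligned face crops).
faces_a = torch.rand(8, 3, 64, 64)
loss = nn.functional.mse_loss(decoder_a(encoder(faces_a)), faces_a)

# The "swap": encode person A's face, decode it with person B's decoder.
fake_b = decoder_b(encoder(faces_a))
```

Because both decoders read from the same latent space, decoding person A’s expression with person B’s decoder produces person B’s face performing person A’s expressions.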

As deepfakes proliferate, our ability to distinguish authentic videos from fabricated ones will diminish, opening the door to social, legal, and political harms across many areas of our daily lives, especially since virtually no regulation exists.

There are three key points to consider when approaching deepfakes:

  1. Deepfakes present a significant systemic risk to the fidelity of the information systems and processes we rely on.
  2. Deepfakes can be created by anyone with a computer and some time on their hands.
  3. The technological solutions for combating deepfakes are reactive in every sense (and I do not foresee a proactive defense). They require the content to be released before they can identify it; they must respond to improvements in the faked content after the fact; and the countermeasures tend to be specific, whereas the technology that creates the fakes is vast and widely applicable.

In my opinion, deepfakes are likely to be the cyber threat of the decade, and, as usual, “security experts” are not raising the right awareness or giving sound advice on the topic.

Examples of deepfakes range from the silly to the sinister. Comical applications include videos inserting Nicolas Cage into famous scenes from other movies, or a video of a Wall Street Journal reporter performing Bruno Mars’s dance moves. But because the technology’s origins are closely tied to pornography, a darker focus of many deepfakes is the creation of pornographic videos of famous celebrities.

Advances in automation and social networking have allowed deepfakes to spread with great speed, reach, volume, and coordination. Social media in particular has been fundamental to the distribution and evolution of content online, since its platforms enable information flows to scale exponentially through network reach. The emergence of social botnets has played a significant role in accelerating that spread, particularly by triggering trending algorithms.

The virality of interest and consumption can spread a message across time zones long before there is any chance to corroborate or validate the source. Unfortunately, this is already being exploited by the press in many countries. Mexico offers plenty of examples: journalists at almost every Mexican agency and newspaper have already produced fakes, using these nasty resources to deliberately and continuously spread inaccurate information on a range of topics, mainly to destroy political images and agendas, while inflicting substantial collateral damage on the population at unprecedented levels, including serious psychological trauma and depression. From my perspective, such agencies, newspapers, and journalists should already be catalogued as cybercriminals, regardless of their past reputations, and, depending on the severity of the deepfakes they generate, some should even be listed as Advanced Persistent Threats, given the TTPs they exhibit, as we will see below.

As I mentioned, deepfakes spread mainly via social networks, but other components play a role in this task. The first is the botnet: a synchronized network of remote accounts (bots) controlled centrally by servers. There are numerous types of bots, from commercial bots that drive click traffic, through customer-service and transaction bots, to malicious bots spreading spam, disinformation, deepfakes, and malware.

Social botnets refer to the coordinated use of many social media accounts programmed for specific tasks, such as generating simple comments supporting a particular agenda, or retweeting and liking content created by real users who advocate for that agenda. This allows the botnet’s controller to generate a large volume of posts over a short period, creating the illusion of substantial organic information flow by real people. The range of malicious activities these bots can execute includes the following:

Smokescreening

Using hashtags related to a topic to comment on unrelated topics, thereby diluting the conversation about the primary topic. For example, a popular protest could be undermined by thousands of bots deliberately swamping the social media presence related to that activity, making the genuine content difficult for users to find and frustrating those trying to follow it.

Misdirecting

The process of using tags and hashtags from a popular topic to direct people to another cause; for example, people searching for #TokyoOlympics instead see a flood of tweets urging them to vote for a particular candidate.

Astroturfing

Swamping a public debate online with a particular point of view, for example, pro-Biden or pro-Brexit. The appearance of widespread support can itself generate more support, through the illusion of genuine grassroots consensus.
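
All three tactics share a detectable signature: many accounts pushing near-identical content in tight synchrony. The sketch below is a minimal heuristic along those lines; the (account, timestamp, text) input layout and both thresholds are illustrative assumptions, since a real pipeline would ingest data from a platform API and tune its thresholds empirically.

```python
# Rough heuristic for botnet-style coordination: flag any piece of text
# posted by many distinct accounts within a short time window. Real systems
# would also consider follower graphs, account ages, and fuzzy text matching.
from collections import defaultdict

def coordinated_clusters(posts, window_seconds=30, min_accounts=5):
    """posts: list of (account, unix_timestamp, text) tuples."""
    by_text = defaultdict(list)
    for account, ts, text in posts:
        by_text[text.strip().lower()].append((ts, account))

    suspicious = []
    for text, hits in by_text.items():
        hits.sort()  # order by timestamp
        accounts = {a for _, a in hits}
        burst = hits[-1][0] - hits[0][0] <= window_seconds
        if len(accounts) >= min_accounts and burst:
            suspicious.append((text, sorted(accounts)))
    return suspicious
```

A cluster of five or more accounts posting the same text within thirty seconds is rarely organic, which is why even this crude check catches the cheapest botnets.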

Beyond social media platforms, botnets have built high-volume, high-margin, highly sophisticated illegal businesses through spam campaigns, click fraud, and bank fraud, among others, and improvements in machine learning and deep learning have enabled botnets to complete increasingly sophisticated tasks. Particularly in the last two years, botnets have moved from predominantly automated spam accounts to more complex tactics:

  • Ability to target specific individuals or groups with high precision. Some bots, sock puppet accounts (numerous accounts controlled by a single person) and cyborgs (people who use software to automate their social media posts) have become very effective at propagating misinformation. In some cases, there are communities of interconnected viral bots and cyborgs that share each other’s posts and use retweets, mentions and other strategies to interact with target groups.
  • Ability to create virality by targeting specific moments of the information life cycle, particularly the first few seconds after an article containing fake news is first published (see the sketch after this list).
  • Capacity to use metadata on social platforms, particularly from user-generated content like written text, to better mimic human behaviour and avoid detection.
  • Capacity to target and amplify human-generated content rather than automated content because the former is more likely to be polarising.
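
That front-loaded amplification pattern is itself a signal. Below is a minimal sketch of a detector for it, assuming we have the publication timestamp and a list of share timestamps; the 60-second window and 0.5 threshold are illustrative guesses, not calibrated values.

```python
# Sketch of a detector for the "first few seconds" amplification pattern:
# organic sharing usually ramps up gradually, while bot amplification
# front-loads shares into the opening window after publication.
def early_burst_score(publish_ts, share_timestamps, window=60):
    """Fraction of total shares landing within `window` seconds of publication."""
    early = sum(1 for t in share_timestamps if t - publish_ts <= window)
    return early / max(len(share_timestamps), 1)

def looks_amplified(publish_ts, share_timestamps, threshold=0.5):
    # Flag articles where more than half of all shares arrived immediately.
    return early_burst_score(publish_ts, share_timestamps) >= threshold
```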

Having described these tactics, it is important to return to the individual and collective harms generated by the dissemination of deepfakes. For example, an individual whose likeness has been appropriated for deepfake pornography, or other similarly harmful content, may suffer severe emotional distress, psychological harm, and reputational injury.

The collective harms generated by deepfakes, on the other hand, tend to be exacerbations of existing social problems, driven by the erosion of our capacity to differentiate authentic from inauthentic content. Hyper-realistic deepfakes can undermine public safety, compromise international relations, and jeopardize national security. Deepfakes are certainly capable of all these things, but so are many other kinds of misinformation. It is this root cause, our innate vulnerability to misinformation, that must be addressed, and it must be considered within the scope of IT security efforts targeting the malicious actors who generate and spread deepfakes like those described above. Fake content is used to construct narratives that are then offered as proof of particular beliefs, feeding people’s natural confirmation bias. These technologies become even more powerful when their realism is coupled with targeted distribution.

Researchers have been attempting to develop algorithms and other AI-assisted tools to determine whether a video is a deepfake. One approach exploits the pulse: an individual’s heart rate tends to stay consistent across different pulse points; however, if a video was created by layering images and videos on top of each other, what appears to be one individual may exhibit different pulses at different pulse points. The tool picks up those discrepancies as evidence that a video is a deepfake.
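
A heavily simplified sketch of that idea follows. In genuine footage, the faint periodic colour change caused by blood flow (remote photoplethysmography) should show the same dominant frequency in every skin region, while a composited fake can disagree across regions. The region coordinates, the 30 fps rate, and the 0.2 Hz tolerance are illustrative assumptions; published detectors are considerably more elaborate.

```python
# Compare the dominant pulse frequency of two skin regions; a large mismatch
# suggests the regions were composited from different sources.
import numpy as np

def dominant_pulse_hz(frames, region, fps=30.0):
    """frames: (T, H, W, 3) RGB array covering several seconds of video;
    region: (y0, y1, x0, x1) bounding box over a skin patch."""
    y0, y1, x0, x1 = region
    # Mean green-channel intensity per frame: green carries the strongest
    # remote-photoplethysmography signal.
    signal = frames[:, y0:y1, x0:x1, 1].mean(axis=(1, 2))
    signal = signal - signal.mean()
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 4.0)  # ~42-240 bpm, plausible heart rates
    return freqs[band][np.argmax(spectrum[band])]

def pulse_mismatch(frames, cheek, forehead, tolerance_hz=0.2):
    # Two patches of the same live face should pulse at the same frequency.
    return abs(dominant_pulse_hz(frames, cheek) -
               dominant_pulse_hz(frames, forehead)) > tolerance_hz
```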

This has prompted newer approaches that assess the subject of the video to check that the human characteristics on display are consistent with reasonable expectations, including skin tone, perspiration, breathing, blinking, and heart rate. While this has proven to be the most effective method for spotting fakes to date, the fakers are simultaneously incorporating these same cues into newer versions, which are becoming ever harder to spot.
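
Blinking is a good example of such a check: early deepfakes blinked far less often than real people. The sketch below uses the eye aspect ratio (EAR), a standard landmark-based measure; it assumes six eye landmarks per frame from a common facial-landmark detector (such as dlib's 68-point model), and the 0.2 closed-eye threshold is a conventional but illustrative value.

```python
# Count blinks from per-frame eye landmarks; an implausibly low blink rate
# is one physiological cue that footage may be synthetic.
import numpy as np

def eye_aspect_ratio(eye):
    """eye: (6, 2) landmark array; the EAR drops sharply when the eye closes."""
    v1 = np.linalg.norm(eye[1] - eye[5])  # vertical distances
    v2 = np.linalg.norm(eye[2] - eye[4])
    h = np.linalg.norm(eye[0] - eye[3])   # horizontal distance
    return (v1 + v2) / (2.0 * h)

def blinks_per_minute(ear_series, fps=30.0, closed_threshold=0.2):
    # A blink is a transition from open to closed; real humans average
    # roughly 15-20 blinks per minute, so a near-zero rate is suspicious.
    closed = [e < closed_threshold for e in ear_series]
    blinks = sum(1 for prev, cur in zip(closed, closed[1:]) if cur and not prev)
    return blinks * 60.0 * fps / max(len(ear_series), 1)
```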

I strongly believe that in the coming years we will see more deepfakes weaponized for monetization. The technology has already proven itself in humiliation, misinformation, and defamation attacks, and the tools are becoming more practical and efficient. It therefore seems natural that malicious users will find ways to use the technology for profit.

As a result, we expect to see an increase in deepfake phishing attacks and scams targeting both companies and individuals. As the technology matures, real-time deepfakes will become increasingly realistic. We can then expect the technology to be used by cybercriminals to perform reconnaissance as part of an APT, and by state actors to perform espionage and sabotage by impersonating officials or family members.

I would expect further research on solutions that do not require analyzing the content itself. It would also be beneficial for future work to explore the weaknesses and limitations of current deepfake detectors: by identifying and understanding these vulnerabilities, researchers can develop stronger countermeasures, and, above all, we should keep raising awareness so that people always verify what they see on the Internet.
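
One family of solutions that sidesteps content analysis entirely is cryptographic provenance: the camera or publisher signs the video bytes at capture time, and anyone can later check that the file is untouched. A minimal sketch using the `cryptography` package follows; key distribution, which is the genuinely hard part, is left out, and the variable names are hypothetical.

```python
# Provenance sketch: sign a video file at capture/publication time, then let
# anyone verify that the bytes have not changed since signing.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

signing_key = Ed25519PrivateKey.generate()  # held by the camera/publisher
verify_key = signing_key.public_key()       # distributed to the public

video_bytes = b"...raw video file contents..."
signature = signing_key.sign(video_bytes)   # shipped as sidecar metadata

def is_authentic(data: bytes, sig: bytes) -> bool:
    try:
        verify_key.verify(sig, data)
        return True
    except InvalidSignature:
        return False  # content altered after signing, or wrong key

print(is_authentic(video_bytes, signature))               # True
print(is_authentic(video_bytes + b"tamper", signature))   # False
```

A signature only proves the file has not changed since it was signed; it says nothing about whether the signer was honest, which is why provenance schemes ultimately lean on trusted hardware and public key registries.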