Fake Videos and Deep Fake - How Can Users Protect Themselves?

The rise of deepfakes

The term “deepfake” combines “deep”, from deep learning, and “fake”. Deep learning is an advanced artificial intelligence (AI) method that uses multiple layers of machine learning (ML) algorithms to extract progressively finer features from raw data. The technology can learn from unstructured data, such as images of the human face. For example, AI can collect data on a person's physical movements.

This data is then processed by a GAN (Generative Adversarial Network) to create a deepfake video. A GAN is another specialized machine learning system: it pits two neural networks against each other, one generating candidate images and the other trying to distinguish them from the real training data. This training data includes, for example, portraits of the victim. What the networks learn is then used, for instance, to generate new photos that have the same characteristics as the originals.

Since such networks repeatedly test their output against the training data, the results become more and more convincing. That is what makes deepfakes a real danger. In addition, GANs can forge data other than photos and videos: deepfake ML and speech-synthesis techniques can even mimic voices.
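The adversarial loop described above can be sketched in a toy one-dimensional form. This is purely illustrative, not production deepfake code: here the "generator" is a single affine map trying to match a target Gaussian distribution, and the "discriminator" is a logistic classifier. All parameter names and hyperparameters are hypothetical choices for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
TARGET_MEAN, TARGET_STD = 4.0, 1.25   # the "real" data distribution

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator parameters: fake = w * z + b, with noise z ~ N(0, 1)
w, b = 1.0, 0.0
# Discriminator parameters: D(x) = sigmoid(a * x + c)
a, c = 0.0, 0.0

lr, batch, steps = 0.05, 64, 2000
initial_gap = abs(b - TARGET_MEAN)    # generator mean starts at b

for _ in range(steps):
    real = rng.normal(TARGET_MEAN, TARGET_STD, batch)
    z = rng.normal(0.0, 1.0, batch)
    fake = w * z + b

    # Discriminator step: ascend log D(real) + log(1 - D(fake)),
    # i.e. learn to tell real samples from generated ones.
    d_real = sigmoid(a * real + c)
    d_fake = sigmoid(a * fake + c)
    a += lr * np.mean((1 - d_real) * real - d_fake * fake)
    c += lr * np.mean((1 - d_real) - d_fake)

    # Generator step: ascend log D(fake), i.e. fool the discriminator.
    d_fake = sigmoid(a * fake + c)
    grad_x = (1 - d_fake) * a         # d log D / d fake
    w += lr * np.mean(grad_x * z)
    b += lr * np.mean(grad_x)

final_mean = np.mean(w * rng.normal(0.0, 1.0, 10_000) + b)
print(f"generator samples now have mean ~ {final_mean:.2f}")
```

After training, the generator's output distribution has drifted toward the real one, even though it never saw the real data directly; it only saw the discriminator's verdicts. Real deepfake systems follow the same competitive principle, but with deep convolutional networks operating on images rather than a one-dimensional toy.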

Deepfake examples

Prominent examples of deepfakes aren't hard to find. There is, for instance, the video by comedian Jordan Peele, in which he used real footage of Barack Obama but overlaid his own facial expressions and voice in order to publicly warn against deepfake videos. He then also showed the two merged videos separately. His advice? We have to question everything we see.

Another example is the fake video of Facebook CEO Mark Zuckerberg, in which he appears to talk about how Facebook wants to control the future with stolen user data; it was posted on Instagram, of all places. The original footage comes from his speech about Russia's interference in the US elections. Just 21 seconds of that speech were enough to create the new video. The voice imitation was not as good as in Jordan Peele's Obama fake, and it ultimately gave the forgery away.

But even such less convincing fakes can have consequences. A video that appeared to show Nancy Pelosi, Speaker of the US House of Representatives, drunk reached several million views on YouTube. The footage had simply been slowed down so that her speech sounded slurred. And not only politicians have to struggle with fakes: many famous women have already been victims of deepfakes in which their faces were superimposed onto pornographic photos and videos.

Deepfake threats - fraud and extortion

Deepfake videos have already been used for political purposes or for personal revenge. But more and more often they are also used in large-scale blackmail and fraud attempts.

The CEO of a British energy company was defrauded by a deepfake of the equivalent of 220,000 euros. The voice of his superior at the parent company was faked in order to request an urgent money transfer. The fake was so convincing that he never became suspicious, and the funds went not to the parent company but to a third-party bank account. The CEO only began to doubt when his "superior" asked for funds a second time. This time the alarm bells rang, but it was already too late to recall the money that had been transferred.

In France, a recent fraud campaign relied on physical impersonation rather than deepfake technology, combined with an accurate replica of Foreign Minister Jean-Yves Le Drian's office and furniture. With this setup, executives were defrauded of millions of euros. The fraudster Gilbert Chikli has been accused of disguising himself as the minister to solicit money from wealthy individuals and company executives, allegedly to free French hostages held in Syria. His trial is ongoing.

Such blackmail of company boards by deepfake attackers is also conceivable, e.g. with fake videos that would ruin the victim's reputation unless a ransom is paid. Or criminals could gain access to your network simply by spoofing a video call from your chief information officer and getting employees to hand over passwords and permissions. With those, the attackers have control over all your confidential data.

Another danger is deepfake porn videos used to blackmail reporters, as happened to Rana Ayyub in India after she uncovered cases of abuse of power. The cheaper the technology becomes, the more often deepfakes will be used to blackmail or defraud people.