AI-generated synthetic media has become one of the most serious threats to information security. Deepfakes (fabricated videos, audio recordings and images) can portray people saying or doing things that never happened. The technology relies on generative adversarial networks (GANs), neural networks trained on vast datasets to produce visual and audio material that credibly imitates reality. Widely available software, increased computing power and the proliferation of online generators have placed the technology within reach of ordinary internet users, not just major studios. Even specialists can now struggle to distinguish a forgery from the genuine article.
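To make the adversarial mechanism concrete, the following is a minimal, illustrative sketch of one GAN training step, assuming PyTorch. It is not a deepfake system: the network sizes, latent dimension and image dimension are toy values chosen for brevity, and real face-synthesis models are far larger and more specialised.

```python
# Minimal GAN sketch: a generator learns to fool a discriminator,
# while the discriminator learns to tell real data from generated data.
import torch
import torch.nn as nn

latent_dim, image_dim = 64, 28 * 28   # toy sizes for illustration only

# Generator: maps random noise to a synthetic "image" vector.
G = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, image_dim), nn.Tanh(),
)

# Discriminator: estimates the probability that an input image is real.
D = nn.Sequential(
    nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

criterion = nn.BCELoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

def train_step(real_images):
    """One adversarial update: D learns to flag fakes, G learns to fool D."""
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # Discriminator step: penalise mistakes on both real and generated images.
    fake_images = G(torch.randn(batch, latent_dim))
    d_loss = criterion(D(real_images), real_labels) + criterion(
        D(fake_images.detach()), fake_labels
    )
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: push D towards classifying generated images as real.
    g_loss = criterion(D(fake_images), real_labels)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

# Random stand-in data; real training would loop over a large face dataset.
losses = train_step(torch.rand(32, image_dim) * 2 - 1)
```

Iterating this two-player game is what gradually drives the generator's output towards material that the discriminator, and eventually human viewers, cannot reliably distinguish from reality.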
The technology does have legitimate applications. In film and advertising it is used to replace actors, recreate scenes featuring deceased or unavailable performers, and generate special effects. In education and culture, deepfakes support the reconstruction of historical events, the demonstration of notable figures' speeches in teaching materials, and artistic projects that are clearly labelled as AI-generated. In entertainment, the technology is used for memes, novelty videos and gaming effects. In these contexts, content is typically transparent to its audience: it is identified as AI-generated and does not create a false impression of real events or the actions of real individuals.

At the same time, the technology is increasingly exploited for criminal and destructive purposes. Fabricated videos featuring politicians can shift public opinion and interfere with electoral processes. Among the documented examples are deepfake videos depicting Venezuelan President Nicolás Maduro and other world leaders apparently making statements or taking part in events that never took place. Such material spreads rapidly through social media and messaging platforms, where resharing by familiar accounts lends it a veneer of credibility. Financial fraud cases have also emerged: employees have transferred millions of euros to criminals who cloned the voices of company executives. Fake advertising campaigns featuring celebrities mislead consumers into paying for products or services that do not exist. High-profile cases of non-consensual deepfake pornography have involved actresses Gal Gadot and Collien Fernandes, among others. Fabricated compromising videos are also produced for blackmail, threats and psychological abuse.
Synthetic content featuring public figures has become one of the most visible vehicles of disinformation. Taylor Swift, Tom Hanks and other entertainers have appeared in fabricated videos promoting non-existent products or apparently making scandalous remarks. The videos of Nicolás Maduro and other leaders placed them in invented scenarios and provoked international controversy, while the non-consensual pornography targeting Collien Fernandes, Gal Gadot and other public women drew widespread media and public attention. Such material damages not only the reputations of the individuals depicted but also the wider public, which risks accepting the forgery as real. Fabricated content distorts perceptions of what people have said and done, can lead to terminated contracts and financial losses, and in a broader sense deepens societal polarisation and erodes trust in the media.
Legal systems around the world have begun to respond to the threat, though regulation is still developing slowly. The Delhi High Court ordered Google, Meta and Amazon to remove deepfake content featuring cricketer Gautam Gambhir that had been circulated without his consent. A Bombay court ordered the removal of provocative videos depicting actor Akshay Kumar, explicitly noting the threat to public order. In Germany, the Collien Fernandes case acted as a catalyst for tighter legislation against deepfake pornography. In the United States, individual states are introducing laws restricting the use of the technology during electoral campaigns and imposing sanctions on platforms that fail to remove harmful content. Legal experts, however, point to the difficulty of establishing authorship and tracing the distribution of deepfakes: internet anonymity and jurisdictional differences create a legal grey area that legislators are only gradually beginning to close.
Experts advise relying only on verified media outlets and official sources, scrutinising videos for visual inconsistencies such as mismatched lip movements, unnatural shadows or irregular eye colour, and refraining from sharing sensational content before checking its authenticity. Limiting the public availability of personal photos and videos, and enabling two-factor authentication to secure accounts, are further recommended precautions. Anyone who encounters a deepfake is urged to preserve evidence and report it to law enforcement authorities.
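For readers who want to slow a suspect clip down, the short sketch below uses OpenCV to save individual frames to disk so lip movements, shadows and eye detail can be examined one frame at a time. It is a convenience script rather than a detector, and the file name suspect_clip.mp4 and the sampling interval are hypothetical placeholders.

```python
# Dump every n-th frame of a video as a PNG for manual frame-by-frame inspection.
import cv2

def extract_frames(path, every_n=15, out_prefix="frame"):
    """Save every n-th frame of the video at `path` and return how many were written."""
    cap = cv2.VideoCapture(path)
    saved = 0
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:                      # end of stream or unreadable file
            break
        if index % every_n == 0:
            cv2.imwrite(f"{out_prefix}_{index:05d}.png", frame)
            saved += 1
        index += 1
    cap.release()
    return saved

if __name__ == "__main__":
    print(extract_frames("suspect_clip.mp4"))
```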
According to expert estimates, deepfake fraud costs billions of dollars annually. Human ability to detect forgeries barely exceeds chance, and modern deepfake generators routinely evade automated detection systems as well.