Deepfakes as a social threat

Sergey G. Davydov

Candidate of Philosophical Sciences, Associate Professor, Department of Sociology, Senior Researcher, International Laboratory for Social Integration Research, Analyst, International Laboratory for Applied Network Research, HSE University, Moscow, Russia; ORCID 0000-0001-8455-9976

e-mail: sdavydov@hse.ru
Natalia N. Matveeva

Candidate of Economic Sciences, Associate Professor, Department of Economic Theory and Econometrics, Researcher, International Laboratory for Applied Network Research, HSE University, Moscow, Russia; ORCID 0000-0002-6378-7088

e-mail: nmatveeva@hse.ru
Anastasia V. Saponova

Deputy Director, ZIRCON Research Group; Senior Lecturer, Department of Social Institutions Analysis, HSE University, Moscow, Russia; ORCID 0000-0002-9393-3509

e-mail: saponova@zircon.ru

Section: Artificial Intelligence in Media and Communication Studies

The article examines the phenomenon of deepfakes – audio and/or visual synthetic content created using deep neural networks – from the perspective of their destructive use. The rapid development of deepfake technologies and the growing number of scams that exploit them call for a systematic assessment of the emerging social threats. The authors assess the risks posed to various fields – politics, the media, business – as well as to social and psychological well-being. Based on a survey of 33 Russian experts, the study identifies the main threats posed by the spread of deepfakes and evaluates the effectiveness of methods for countering them. Digital hygiene practices, which the experts recognized as the most effective method of prevention, are also systematized for three target audiences: individuals, organizations, and governments. Finally, the authors present a typology of software products for implementing digital hygiene, including a digital notary, a digital adviser, and a digital bodyguard.

Keywords: deepfake, artificial intelligence, digital hygiene, digital security
DOI: 10.55959/msu.vestnik.journ.5.2025.5478

To cite this article: Davydov S. G., Matveeva N. N., Saponova A. V. (2025) Dipfeyki kak sotsial’naya ugroza [Deepfakes as a social threat]. Vestnik Moskovskogo Universiteta. Seriya 10. Zhurnalistika 5: 54–78. DOI: 10.55959/msu.vestnik.journ.5.2025.5478