Issue
Contents:
-
Artificial Intelligence in Media and Communication Studies
«Artificial intelligence in the media sphere: Research directions, professional contradictions and new risks»
The editors of the issue compare the key areas of artificial intelligence research in media in global scholarship with the Russian context. In 2025, the following topics unite researchers worldwide: theories of AI adoption in communication practice and their critique; the deontology of AI use in journalism and social communication; the practice of AI content creation in federal and regional media; AI in social networks and messengers; visual media content and AI; the perception of AI and AI-generated news in the media industry and among media audiences, from hopes to fears and confrontation; government policy, platforms, and media companies in the use of AI in communication; machine learning and small and large language models in media content analysis; machine-learning-based recognition of emotions, irony, and malicious content; and AI in the analytics of fake news and disinformation. The key areas of further research are the formation of hybrid models of human-AI interaction, the development of regulations and ethical codes for the use of neural networks in journalism, the analysis of the uneven digital transformation of regional media, the study of cognitive effects and the professional adaptation of journalists, and the socio-political and technological parameters of a new social environment and its public regulation.
Keywords: artificial intelligence, journalism, deontology, digital transformation, media
DOI: 10.55959/msu.vestnik.journ.5.2025.322
Svetlana S. Bodrunova, Kamilla R. Nigmatullina
Pp. 3–22
-
«The Russian model of AI use in digital ecosystems of the media communication industry»
Media have been at the forefront of digital transformation in recent years: not only have the ways of creating, selling, storing, and consuming media content and media services changed, but so has the structure of the media communication industry (MCI) itself. Given its new structure and agency, particular attention must be paid to artificial intelligence (AI) technologies, which manifest themselves in communication, content generation, and the processing of big data created by users in media communication processes. AI, as a set of relevant technologies and the social relations surrounding them, has become an important characteristic of media communications, shaping the interactions of the actors operating within them. In this context, digital ecosystems (DES), which have become key actors structuring the MCI and the digital media environment, are of particular interest. AI technologies played a decisive role in the transformation of media platforms into media DES; the research question addressed in this article is therefore the analysis of their contribution to the AI transformation of the media communication industry as a whole. Our analysis of data on the functioning and genesis of AI technologies in the Russian MCI allowed us to identify two qualitatively different models of AI use and development: an ecosystem model based on leadership and extensive market capture, and a media model focused on optimizing internal processes through AI solutions.
Keywords: media platform, digital ecosystem, media communication, digital and social media, artificial intelligence in media
DOI: 10.55959/msu.vestnik.journ.5.2025.2353
Sergei A. Vartanov, Anna Yu. Tyshetskaya
Pp. 23–53
-
The article examines the phenomenon of deepfakes – audio and/or visual synthetic content created using deep neural networks – from the perspective of its destructive use. The rapid development of deepfake technologies and the growing number of scams employing them make a systematic assessment of the emerging social threats necessary. The authors assess the risks to various industries and fields, such as politics, media, business, and social and psychological well-being. Based on a survey of 33 Russian experts, the main threats posed by the spread of deepfakes and the effectiveness of methods for combating them were identified. Digital hygiene practices, recognized as the most effective means of preventing these threats, were systematized for three target audiences: individuals, organizations, and governments. Finally, the authors present a typology of software products for implementing digital hygiene, including a digital notary, a digital adviser, and a digital bodyguard.
Keywords: deepfake, artificial intelligence, digital hygiene, digital security
DOI: 10.55959/msu.vestnik.journ.5.2025.5478
Sergey G. Davydov, Natalia N. Matveeva, Anastasia V. Saponova
Pp. 54–78
-
«Artificial intelligence in Russian mass media: instruments, problems, and threats»
The article presents the findings of a study on the use of artificial intelligence (hereinafter AI) by Russian federal-level media organizations. The research involved in-depth semi-structured interviews with 71 representatives of key editorial offices across media types (television, radio, print, news agencies, and online media). The interview guide covered the current use and regulation of new technologies in editorial operations, the challenges media face in using AI, editorial staff’s views on the future of journalism in light of AI development, and changes in professional standards. The findings indicate that, unlike foreign practices of using new technologies, Russian media editorial offices use AI mainly to solve routine tasks, thereby speeding up and optimizing their employees’ work. In doing so, new technologies are often “promoted” by journalists rather than by media management. Almost all respondents believe that AI will not replace journalists but will only make some adjustments to their work. At the same time, some newsrooms express concerns about the integration of the new technologies into their future editorial practices, including workflow acceleration, which may adversely affect journalists’ psychological state; the potential for a large volume of standardized materials; the growth of unreliable information; and the possible redundancy of some specialties.
Keywords: editorial practices, artificial intelligence, artificial intelligence technologies, neural network, copyright, editorial standards, interview
DOI: 10.55959/msu.vestnik.journ.5.2025.79103
Kristina L. Zuykina, Daria V. Razumova
Pp. 79–103
-
«Artificial intelligence as a co-author for local media journalists: cognitive load implications in content creation»
The article presents a pilot study of cognitive load among local media journalists interacting with generative AI tools. The relevance of this research stems from the structural digital inequality between newsrooms: differences between national and regional media are structural rather than geographical, arising from unequal access to resources, levels of income, education, specialization, and digital competence. These factors shape the pace and quality of AI adoption and, consequently, the nature of cognitive load. Despite the growing potential of AI to automate routine tasks and support analytical work, journalists in local media face considerable challenges in mastering these technologies. The study aims to develop a conceptual framework and methodological toolkit for measuring journalists’ cognitive load when engaging with generative AI systems. At the first stage, a quasi-experiment was conducted with ten local media journalists who tested three platforms (GigaChat, DeepSeek, and @GPT4TelegramBot). The results provide a preliminary assessment of the applicability of the proposed approach and indicate directions for further research. The theoretical basis of the study is cognitive load theory, and measurement was carried out using a modified NASA-TLX scale. Findings suggest that the highest cognitive load occurs during the formulation and structuring of prompts, continuous monitoring and correction of generated text, and efforts to maintain or restore dialogue context when the system provides insufficient support. Interface constraints were found to contribute substantially to extraneous cognitive load.
Keywords: artificial intelligence, cognitive load, local media, journalism, AI tools, co-authorship, prompt
DOI: 10.55959/msu.vestnik.journ.5.2025.104134
Natalia A. Pavlushkina, Aleksandra N. Litvinova
Pp. 104–134
-
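The NASA-TLX instrument used in the study above combines six subscale ratings into a single workload score; in the weighted variant, each subscale's weight is the number of times it is chosen in 15 pairwise comparisons. A minimal sketch of that computation (the ratings and weights here are illustrative, not data from the study):

```python
# Weighted NASA-TLX workload score.
# Six subscales are rated 0-100; weights come from 15 pairwise
# comparisons between subscales, so the weights always sum to 15.

SUBSCALES = ("mental", "physical", "temporal", "performance", "effort", "frustration")

def weighted_tlx(ratings: dict[str, float], weights: dict[str, int]) -> float:
    """Return the overall workload: sum(rating * weight) / 15."""
    if sum(weights.values()) != 15:
        raise ValueError("pairwise-comparison weights must sum to 15")
    return sum(ratings[s] * weights[s] for s in SUBSCALES) / 15

# Illustrative ratings for one journalist after a prompt-writing task
ratings = {"mental": 80, "physical": 10, "temporal": 60,
           "performance": 40, "effort": 70, "frustration": 55}
# Illustrative weights derived from 15 pairwise choices
weights = {"mental": 5, "physical": 0, "temporal": 3,
           "performance": 2, "effort": 4, "frustration": 1}

print(round(weighted_tlx(ratings, weights), 2))  # → 66.33
```

The unweighted ("raw TLX") variant simply averages the six ratings; the weighted form lets prompt-heavy tasks surface mental demand, as in the findings reported above.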
«Artificial intelligence as a subject of professional reflection of journalists and media managers of Stavropol media»
The paper studies three levels of professional reflection on artificial intelligence in Stavropol media – the presentation of AI technologies to the mass audience, and its interpretation by journalists on the one hand and media managers on the other – and identifies contradictions between these levels. Despite the media's supportive coverage of AI promotion in the region and the presentation of good practices for introducing AI into the economy and the social sphere, a survey of journalists revealed their concerns about the new technologies, a lack of understanding of editorial strategy, and low self-assessment of their skills in this area. In-depth interviews with media managers revealed a range of opinions on the degree, factors, and prospects of introducing AI services into editorial work, ethical aspects, teams' willingness to use neural networks, etc. At the present stage, intensive use of AI in editorial work is a privilege of large Stavropol media companies. The uneven implementation of AI innovations may result in a sharp lag for small editorial offices that lack sufficient resources.
Keywords: Stavropol media, artificial intelligence, media manager, professional reflection, regional media space
DOI: 10.55959/msu.vestnik.journ.5.2025.135155
Olga I. Lepilkina, Lyudmila N. Sokolova
Pp. 135–155
-
«System challenges for regional news media in the implementation of neural networks in media production»
We present a study of the introduction of artificial intelligence (AI) in regional editorial offices in Russia. Interviews across three types of media revealed the prerequisites for future systemic challenges in the journalistic profession. We identify the key new dimensions of transformation: the widening gap between editorial offices that have introduced the technology into media production and those that have not yet completed the transformation on social media and therefore cannot move to the next level; the widening gap in literacy and audience awareness; the growing cleavage between AI-literate journalists and conservative media managers, and vice versa; and the widening income gap between those who have optimized both news production and social media management and those whose work in digital media still rests on manual labor.
Keywords: generative neural networks, artificial intelligence, neural networks in media, regional journalism, AI introduction, AI acceptance
DOI: 10.55959/msu.vestnik.journ.5.2025.156178
Kamilla R. Nigmatullina, Renat M. Kasymov, Alexander K. Polyakov
Pp. 156–178
-
«‘Opinion tree’: a method for mapping online discussions based on neural-network topic modeling and abstractive summarization»
To date, no neural-network-based methodology for online opinion detection represents user discussions on social networks in a form that simultaneously captures the cumulation, shift, and dissipation of consensus. Such a method would allow for scrupulous tracing of opinion dynamics (including polarization of views), shorten the time needed to evaluate them, and help address several theoretical assumptions on the nature of cumulative opinions. We propose a method for constructing ‘opinion trees’ within user discussions on social networks. The case dataset features a Reddit discussion of the 27th UN Climate Change Conference (COP27/UNFCCC2022). The method includes three methodological steps – defining topicality bifurcation points, measuring the ‘thickness’ of ‘branches’, and summarizing the meaning of individual ‘branches’ – thus allowing both topicality divergence assessment and reasonably quick opinion tracing. Our method integrates recursive BERTopic-based topic modeling and Pegasus-based abstractive summarization, allowing opinions to be seen as ‘folded’, ‘unfolded’, and ‘polar’, as detected in summaries of varying length.
Keywords: cumulative deliberation, online discussions, topic modeling, abstractive summarization, BERT, Reddit, COP27
DOI: 10.55959/msu.vestnik.journ.5.2025.179208
Svetlana S. Bodrunova, Ivan S. Blekanov, Nikita A. Tarasov
Pp. 179–208
-
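The tree structure behind the method above can be sketched schematically: a discussion splits into branches at bifurcation points, and each branch's 'thickness' is the number of messages it absorbs. In this sketch, the clustering and summarization stages are replaced by a trivial keyword stand-in so the structure is runnable; the actual method uses recursive BERTopic topic modeling and Pegasus summarization, and the labels and messages below are invented:

```python
# Schematic 'opinion tree': a discussion splits into branches whose
# 'thickness' is the message count. Topic modeling is stubbed with a
# keyword matcher; the real pipeline clusters with BERTopic and
# summarizes branches with Pegasus.
from dataclasses import dataclass, field

@dataclass
class Branch:
    label: str
    messages: list[str]
    children: list["Branch"] = field(default_factory=list)

    @property
    def thickness(self) -> int:
        # Number of messages carried by this branch
        return len(self.messages)

def split_by_keywords(messages, keyword_sets):
    """Stand-in for topic modeling: group messages by first matching keyword set."""
    groups = {label: [] for label in keyword_sets}
    for msg in messages:
        for label, kws in keyword_sets.items():
            if any(kw in msg.lower() for kw in kws):
                groups[label].append(msg)
                break
    return {label: msgs for label, msgs in groups.items() if msgs}

def grow_tree(root_label, messages, keyword_sets):
    """One bifurcation step: split the root discussion into child branches."""
    root = Branch(root_label, messages)
    for label, msgs in split_by_keywords(messages, keyword_sets).items():
        root.children.append(Branch(label, msgs))
    return root

msgs = ["Carbon tax now", "tax polluters hard", "adaptation funds for the South"]
tree = grow_tree("COP27", msgs, {"mitigation": ["tax"], "adaptation": ["adaptation"]})
print([(b.label, b.thickness) for b in tree.children])
```

Applying `grow_tree` recursively to each child branch yields the nested bifurcations the method traces; comparing a child's thickness to its parent's is what exposes consensus cumulation or dissipation.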
«Emigration narratives relevance assessment by large language models for social media monitoring»
The paper’s relevance lies in the lack of methodological experiments specifically focused on evaluating the heuristics of artificial intelligence (AI) for assessing users’ social sentiments expressed through digital markers, particularly in the context of the Russian national audience’s emigration intentions. The article surveys studies that measure language and text as data, with a special focus on understanding users’ social attitudes and behaviors expressed through digital markers. The primary objective of the study is to assess the relevance of the streams downloaded by the neural network, specifically the LSTM language model employed by the Medialogia service. The research design encompasses cognitive mapping (a preliminary stage to identify search queries) and social media analysis conducted with the Medialogia service. Additionally, a representative sample of the downloaded streams is analyzed manually to evaluate their relevance. The paper highlights common errors in creating search queries and provides strategies to overcome these inaccuracies, which can be further used to enhance the neural network’s ability to download relevant datasets. Furthermore, the relevance of the stream segmentation performed by the language model is analyzed. The paper offers an assumption about the underlying reasons for the varying degrees of relevance of the documents (posts) downloaded by the service.
Keywords: social media, digital communication, social media analysis, cognitive mapping, neural networks, large language models, methodological experiment
DOI: 10.55959/msu.vestnik.journ.5.2025.209232
Anna Yu. Dombrovskaya, Elena V. Brodovskaya
Pp. 209–232
-
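Manual relevance checking of a downloaded stream, as described above, amounts to estimating the share of relevant documents in a labeled sample for each search query (per-query precision). A minimal sketch of that estimate; the query names and labels are illustrative, not the study's data:

```python
# Estimate per-query relevance of downloaded posts from a manually
# labeled sample: relevance = relevant posts / all sampled posts.
from collections import defaultdict

def relevance_by_query(labeled_sample):
    """labeled_sample: iterable of (query, is_relevant) pairs."""
    hits, totals = defaultdict(int), defaultdict(int)
    for query, is_relevant in labeled_sample:
        totals[query] += 1
        hits[query] += int(is_relevant)
    return {q: hits[q] / totals[q] for q in totals}

# Illustrative labels for two hypothetical search queries
sample = [("emigration intent", True), ("emigration intent", True),
          ("emigration intent", False), ("relocation services", True)]
print(relevance_by_query(sample))
```

Queries with low relevance shares are the ones whose wording the paper suggests revising before the neural network re-downloads the stream.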
«Fake news detection by large language models»
The article is based on the results of a study evaluating the ability of large language models (LLMs) to distinguish between reliable and false news. While specialized fact-checking organizations are capable of conducting thorough investigations using substantial resources, ordinary readers typically lack access to such powerful tools. Instead, they assess the credibility of information based on personal experience, the opinions of their social environment, and increasingly, the output of publicly accessible LLMs. The study revealed that LLMs are highly accurate in identifying reliable news as such; however, they frequently make errors when classifying false news. The research also examined the capacity of LLMs to revise false news items in ways that make them appear more credible.
Keywords: artificial intelligence, large language models, fake news, fact-checking
DOI: 10.55959/msu.vestnik.journ.5.2025.233247
Iuliia S. Leonova, Denis N. Fedyanin, Alexander G. Chkhartishvili
Pp. 233–247
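An evaluation of the kind described above can be framed as prompting a model for a binary label and scoring its answers against ground truth. A minimal sketch with the LLM call stubbed out; the prompt wording, example items, and `stub_llm` are illustrative, not the authors' materials, and any real client would replace the stub:

```python
# Harness for scoring an LLM on reliable/false news classification.
# The model call is a stub; swap in a real client to run the evaluation.

PROMPT = "Answer RELIABLE or FALSE only. Is this news item reliable?\n\n{item}"

def parse_label(answer: str) -> bool:
    """Map a model answer to True (reliable) / False (false news)."""
    text = answer.upper()
    return "RELIABLE" in text and "UNRELIABLE" not in text

def accuracy(items, llm):
    """items: (text, is_reliable) pairs; llm: callable prompt -> answer."""
    correct = 0
    for text, is_reliable in items:
        predicted = parse_label(llm(PROMPT.format(item=text)))
        correct += predicted == is_reliable
    return correct / len(items)

def stub_llm(prompt: str) -> str:
    # Toy stand-in that calls everything reliable, mimicking the observed
    # tendency of LLMs to misclassify false news as reliable.
    return "RELIABLE"

items = [("Central bank publishes quarterly report.", True),
         ("Scientists confirm the Moon is hollow.", False)]
print(accuracy(items, stub_llm))  # → 0.5: the stub gets only the reliable item right
```

Because the harness separates prompting, parsing, and scoring, the same `items` list can be re-used to measure both classification accuracy and, with a different prompt, the revision task the study examines.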

