Unpacking public trust in AI-generated news: constructing and validating a three-factor measurement tool
PhD, Lecturer, Department of Media and Communications, Ziane Achour University, Djelfa, Algeria; ORCID 0009-0009-7294-3060; e-mail: n.mazari@univ-djelfa.dz
Section: Artificial Intelligence in Media and Communication Studies
This methodological study developed and validated a psychometric instrument for measuring public trust in AI-generated news content, addressing a critical gap in digital media research on the effects of newsroom automation. The research employed a sequential two-phase validation design across two distinct non-probability samples (N1 = 850; N2 = 900). In Study 1, Exploratory Factor Analysis (EFA) of an initial item pool identified a clear three-factor structure, which was then streamlined and tested in Study 2 through Confirmatory Factor Analysis (CFA) of a 10-item scale. The confirmed dimensions were Trust in Content Reliability/Accuracy, Trust in Impartiality/Objectivity, and Risk Perception of Automation. The CFA fit indices supported the dimensional fidelity of the instrument (χ²/df = 2.95; CFI = 0.95; RMSEA = 0.05), which also demonstrated strong reliability (α ≥ 0.79) and external validity through the anticipated correlations with media skepticism and technological awareness. Critically, the final scale predicted key behavioral outcomes related to content adoption and sharing. The validated Trust in AI News Scale thus offers a reliable and valid tool for future analysis of socio-technical trust dynamics in automated journalism.
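The diagnostics reported above can be illustrated with a brief sketch. The cutoffs below follow the conventional criteria of Hu and Bentler (1999), cited in the references, together with the common χ²/df < 3 heuristic; the function names `fit_ok` and `cronbach_alpha`, and the toy response data, are illustrative assumptions and not part of the study's materials.

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for a list of item score columns (one list per item).

    alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores)
    """
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]          # per-respondent totals
    item_var = sum(pvariance(col) for col in items)           # sum of item variances
    return k / (k - 1) * (1 - item_var / pvariance(totals))

def fit_ok(chi2_df, cfi, rmsea):
    """Check CFA fit against conventional cutoffs (Hu & Bentler, 1999):
    CFI >= .95 and RMSEA <= .06, plus the common chi2/df < 3 heuristic."""
    return chi2_df < 3 and cfi >= 0.95 and rmsea <= 0.06

# The indices reported in the abstract meet these cutoffs:
print(fit_ok(2.95, 0.95, 0.05))  # True

# Alpha on hypothetical Likert responses (3 items x 5 respondents):
print(round(cronbach_alpha([[4, 5, 3, 4, 5],
                            [4, 4, 3, 5, 5],
                            [5, 4, 2, 4, 5]]), 2))
```

Note that a reported α ≥ 0.79 across all three subscales exceeds the customary 0.70 threshold for internal consistency.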
DOI: 10.55959/msu.vestnik.journ.6.2025.5379

References:
Baptista J. P., Rivas-de-Roca R., Gradim A., Pérez-Curiel C. (2025) Human-made news vs AI-generated news: a comparison of Portuguese and Spanish journalism students’ evaluations. Humanities and Social Sciences Communications 12, Article 567. DOI: 10.1057/s41599-025-04872-2
Chung W. Y., Nam J., Ryong K., Lee D. (2022) When, how, and what kind of information should Internet service providers disclose? A study on the transparency that users want. Telematics and Informatics 70, Article 101799. DOI: 10.1016/j.tele.2022.101799
Choung H., David P., Ross A. (2022) Trust in AI and Its Role in the Acceptance of AI Technologies. International Journal of Human-Computer Interaction 39 (9): 1727–1739. DOI: 10.1080/10447318.2022.2050543
Hair J. F., Black W. C., Babin B. J., Anderson R. E. (2019) Multivariate Data Analysis. 8th ed. Andover, UK: Cengage Learning.
Hu L., Bentler P. M. (1999) Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling: A Multidisciplinary Journal 6 (1): 1–55. DOI: 10.1080/10705519909540118
Lee J. D., See K. A. (2004) Trust in automation: designing for appropriate reliance. Human Factors 46 (1): 50–80. DOI: 10.1518/hfes.46.1.50_30392
O’Neil C. (2016) Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. New York: Crown Publishing Group.
Sanchez-Acedo A., Carbonell-Alcocer A., Gertrudix M., Rubio-Tamayo J.-L. (2024) The challenges of media and information literacy in the artificial intelligence ecology: deepfakes and misinformation. Communication & Society 37 (4): 223–239. DOI: 10.15581/003.37.4.223-239
Temir E. (2020) Deepfake: New Era in The Age of Disinformation and End of Reliable Journalism. Selçuk İletişim Dergisi 13 (2): 1009–1024. DOI: 10.18094/JOSC.685338
Versteegh M. (2019) Trust in automated vehicles: a systematic review. Bachelor’s thesis. University of Twente. Available at: https://essay.utwente.nl/78372/1/Versteegh_BA_Psychology.pdf (accessed: 06.06.2025).
Zhao X., You W., Zheng Z., Shi S., Lu Y., Sun L. (2025) How do consumers trust and accept AI agents? An extended theoretical framework and empirical evidence. Behavioral Sciences 15 (3): 337. DOI: 10.3390/bs15030337
To cite this article: Mazari N. (2025) Doverie auditorii k novostyam, generiruemym II: konstruirovanie i validatsiya trekhfaktornogo instrumenta izmereniya [Unpacking public trust in AI-generated news: constructing and validating a three-factor measurement tool]. Vestnik Moskovskogo Universiteta. Seriya 10. Zhurnalistika 6: 53–79. DOI: 10.55959/msu.vestnik.journ.6.2025.5379

