Audience Trust in AI-Generated News: Construction and Validation of a Three-Factor Measurement Instrument
PhD, Lecturer, Department of Media and Communications, Ziane Achour University, Djelfa, Algeria; ORCID 0009-0009-7294-3060
Section: Artificial Intelligence in Media and Communication Research
The article presents the construction of a reliable psychometric instrument for measuring public trust in generative news content, using a two-phase validation approach with two distinct non-probability samples (N1 = 850; N2 = 900). In Study 1, exploratory factor analysis (EFA) of the initial item pool revealed a clear three-factor configuration. This configuration was subsequently refined and tested in Study 2 using confirmatory factor analysis (CFA) on a 10-item scale. The confirmed dimensions were: (1) trust in content reliability/accuracy, (2) trust in impartiality/objectivity, and (3) perceived automation risk. The CFA model fit indices supported the dimensional validity of the instrument (χ²/df = 2.95; CFI = 0.95; RMSEA = 0.05), which also demonstrated exceptional reliability (α ≥ 0.79). Importantly, the final scale successfully predicted key behavioral outcomes related to content acceptance and sharing. The validated Trust in AI-Generated News scale thus constitutes a highly reliable and valid instrument for future analysis of the socio-technical dynamics of trust in automated journalism.
DOI: 10.55959/msu.vestnik.journ.6.2025.5379

Trust is universally recognized as the bedrock of the relationship between citizens and news institutions, underpinning the credibility of information, methodological accuracy, and objective reporting. This fundamental belief system is vital for shaping public discourse, facilitating informed decision-making, and maintaining social stability. Traditionally, media trust has been cultivated through human mechanisms, relying on journalistic integrity, editorial accountability, and transparent source disclosure. The accelerating integration of Artificial Intelligence (AI) technologies across the news production pipeline – from automated data synthesis and drafting to personalized distribution – has introduced a pivotal epistemological challenge to this established trust model. The central question for contemporary media scholarship is how to conceptually define, reliably measure, and effectively cultivate trust in content that is partially or fully generated by non-human systems. This paradigm shift necessitates the development of new theoretical and methodological instruments capable of capturing the unique dynamics imposed by algorithmic oversight of news narratives.
In the context of this study, trust is defined as the public’s general perception of content explicitly or implicitly attributed to AI creation. Crucially, this research does not empirically distinguish between news genuinely generated by an AI system and news merely attributed to AI. Given that trust evaluation is primarily driven by awareness of the producer (human vs. machine), the scale is designed to measure public attitudes toward content perceived as AI-assisted or AI-generated. This aligns with research indicating that attribution is the most potent factor affecting credibility assessment.
While AI offers immense efficiency gains – automating routine tasks, enabling rapid data analysis, and customizing content via machine learning – it simultaneously generates profound anxieties. Concerns revolve around algorithmic opacity (the «Black-Box» problem), the potential for manipulation, and the propagation of inherent biases embedded in training data. Such systemic risks pose a direct threat to public confidence, particularly in democratic societies where news credibility is essential for a cohesive collective consciousness and vital decision-making (Baptista, Rivas-de-Roca, Gradim, Pérez-Curiel, 2025: 4). The classic inquiry, “Do people trust what the media reports?” has evolved into the more complex, “Do people trust the algorithms that execute the media’s function?” This shift mandates the integration of specialized concepts like Trust in Automation and Algorithmic Trust, which fundamentally differ from traditional human-centric or institutional forms of trust. Automated journalism presents specific vulnerabilities, including the machine’s inherent difficulty in comprehending subtle cultural context, ethical complexity, and human nuance. Moreover, the increasing sophistication of AI-powered misinformation, notably deepfakes (Sanchez-Acedo, Carbonell-Alcocer, Gertrudix, Rubio-Tamayo, 2024: 225), poses an existential threat, capable of radically undermining the perceived veracity of visual and auditory information (Temir, 2020: 1012).
Consequently, developing a clear framework for trust in this domain is paramount. Faced with overwhelming and diverse information streams, the public requires sophisticated cognitive tools to evaluate automated content. This imperative is amplified across varied cultural contexts, where the standards for credibility assessment are often modulated by local values and socio-political dynamics. Without accurate and reliable psychometric instruments, researchers and practitioners remain ill-equipped to diagnose trust levels, identify influencing factors, and design targeted interventions for promoting media transparency and literacy. This study directly addresses this methodological deficit by developing and validating a scale specifically designed to measure this complex construct.
The conceptual foundation for this study is rooted in the Trust in Automation Systems Theory (Lee, See, 2004: 63), which models trust in technical systems as a rigorous, multi-dimensional evaluation process. When applied to AI-generated news, public trust is conceptualized as an assessment based on three core pillars:
1. Competence (Accuracy and Reliability): This relates to the system’s technical capability to perform its required task successfully. In journalism, this translates to the AI’s perceived ability to generate content that is factually accurate, current, and free from error, encompassing the reliable processing of large datasets (Chung, Nam, Ryong, Lee, 2022).
2. Integrity (Objectivity and Fairness): This addresses the system’s perceived honesty, adherence to fairness, and impartiality. The public assesses whether the AI-generated news avoids undue bias, represents viewpoints equitably, and resists reflecting systemic biases inherent in its programming or data (O’Neil, 2016: 52).
3. Benevolence (Transparency and Ethical Concern): This dimension reflects the perception that the system acts in the user’s best interest, free from harmful or hidden agendas. This is crucial in media, requiring transparency regarding the automation process, adherence to ethical standards, and a commitment to preventing audience manipulation (Baptista, Rivas-de-Roca, Gradim, Pérez-Curiel, 2025: 7).
The public’s decision to rely on or reject AI-generated content is fundamentally an outcome of these implicit evaluative processes. A positive assessment of the AI’s competence and integrity (Choung, David, Ross, 2023: 13), coupled with confidence in its benevolence, leads to higher trust.
Alignment with the Proposed Scale Dimensions
The three dimensions extracted from the scale in this research naturally align with the theoretical framework:
Trust in Accuracy and Reliability: Directly reflects the assessment of Competence.
Trust in Objectivity and Fairness: Directly corresponds to the assessment of Integrity.
Perceived Potential Risks: Represents a negative evaluation of Benevolence and safety; the awareness of risks such as manipulation or deepfakes fundamentally undermines the belief that the system is acting in the user’s best interest.

The Distinctive Challenges of AI Trust
Trust in AI is a rapidly evolving area, extending across fields from autonomous vehicles (Versteegh, 2019: 5) to customer service (Zhao, You, Zheng, Shi et al., 2025: 128). In the media context, several key challenges distinguish AI trust from its traditional counterparts:
Transparency and Explainability: Research consistently highlights that user trust is highly contingent on the system’s transparency and the user’s ability to grasp how the AI operates or arrives at its content output, particularly in sensitive areas like news (Chung, Nam, Ryong, Lee, 2022).
The Credibility Premium of the Human Element: Early studies on automated journalism suggest a public bias, where content known to be machine-authored is often viewed with lower credibility compared to human-authored news, reflecting a perceived deficit in the “human touch” and contextual depth (Chung, Nam, Ryong, Lee, 2022).
The Imperative of AI Media Literacy: The fight against sophisticated AI-generated misinformation mandates the cultivation of AI Media Literacy skills. This involves empowering individuals to critically distinguish, evaluate, and understand the mechanisms of automated content production (Sanchez-Acedo, Carbonell-Alcocer, Gertrudix, Rubio-Tamayo, 2024: 227).
The Research Gap
Despite the urgency of these challenges and the proliferation of AI in global media, a significant methodological and empirical gap exists: there is a severe lack of standardized, validated psychometric instruments specifically designed to measure public trust in AI-generated news content.
Systematic reviews of major scientific databases (Scopus, Web of Science, Google Scholar) up to the first third of 2025 confirm an insufficient body of work dedicated to the rigorous validation of such a scale in diverse public settings. This scarcity hampers the ability of researchers and media strategists to accurately quantify trust levels, understand the construct’s dimensionality, and design effective strategies for responsible technology adoption. Furthermore, the understanding that trust factor structures can vary significantly across cultural and social contexts mandates localized validation studies to ensure the resulting tool accurately reflects the psychological construct in a given population. Therefore, this study is positioned to fill this gap by conducting a rigorous investigation into the psychometric properties (validity and reliability) of a newly developed scale for measuring public trust in AI-generated news content. Based on the theoretical framework and the identified research gaps, this study seeks to answer the following questions:
What is the factor structure of the «Trust in AI-Generated News Content» scale within a diverse audience sample?
What are the internal consistency indicators of the scale and its extracted factors?
Does the proposed factor structure of the scale demonstrate acceptable model fit with empirical data?
Are there statistically significant differences in trust levels and their factors based on demographic variables (e.g., age, gender, education)?
Are there statistically significant differences in trust levels and their factors based on news consumption habits (e.g., source preference)?
Does the scale demonstrate sufficient convergent and predictive validity (i.e., correlating with relevant constructs and predicting behavioral intentions)?
This research employed a two-phase, sequential psychometric design (Study 1: EFA; Study 2: CFA) to rigorously develop and validate the factor structure of the Trust in AI-Generated News Content Scale.
Study 1: Initial Scale Development and Exploratory Factor Analysis (EFA)
Sample and Data Collection
The initial phase utilized a non-probability convenience sample consisting of 850 diverse public individuals (N=850). Participants, recruited from various geographical regions, had an average age of 32.5 years (SD=8.75; range 18 to 65 years). The majority were male (55.8%), and educational levels were varied (university graduates 60%, secondary level 25%, less than secondary 15%).
Sampling Transparency and Limitations
Data collection employed a mixed-mode approach: surveys were distributed both online (via secure platforms and social media research groups) and through field distribution in public venues (e.g., libraries and shopping centers). This dual approach was chosen to enhance sample heterogeneity but resulted in a sample that is non-representative of the general population and is subject to selection bias (particularly skewing toward younger, digitally active individuals). This limitation is acknowledged and informed the subsequent interpretation of the EFA results. Inclusion criteria required participants to be adults who consented to participation; refusal or failure to meet the age criteria led to exclusion.
The study adhered to recognized scientific ethical standards. Prior to data collection, necessary approvals were obtained from the relevant institutional ethical committees. Participants were fully informed of the study’s purpose and guaranteed confidentiality. Participation was voluntary, with no direct incentives or negative consequences, and participants were assured the right to withdraw at any time. The average completion time was 15–20 minutes. Data were collected between May 1 and June 30, 2025.
Trust in AI-Generated News Content Scale (Preliminary 35-Item Version): This novel instrument was developed based on a thorough review of literature concerning trust in AI and media. It comprised 35 items rated on a five-point Likert scale (1 = Strongly Disagree to 5 = Strongly Agree). Theoretically, the items spanned five proposed dimensions: Accuracy and Reliability, Objectivity and Fairness, Transparency and Disclosure, Perceived Risks, and General Acceptance.
Demographic and News Consumption Questionnaire: Used to collect descriptive information (age, gender, education, location) and self-reported habits regarding media usage (e.g., traditional, digital, social media sources).
Preliminary Validation and Statistical Analysis
A preliminary semantic validation was executed with 50 public individuals (equally distributed by gender and education) to verify the clarity and comprehensibility of the instructions and the appropriateness of the response scale. Data analysis utilized the Statistical Package for the Social Sciences (SPSS, Version 28). Descriptive statistics were calculated, followed by Exploratory Factor Analysis (EFA) using Principal Components Extraction with Varimax rotation to uncover the underlying factor structure. Internal consistency for the resulting dimensions was assessed using Cronbach’s Alpha (α) and the average inter-item correlation (ri.i).
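Although the original analysis was run in SPSS 28, the same pipeline can be approximated with open-source tools. The following is a minimal sketch using the Python factor_analyzer package; the file name responses.csv and the item columns are hypothetical, and the "principal" extraction method only approximates SPSS's Principal Components procedure:

```python
# EFA sketch approximating the Study 1 procedure; numbers will not
# exactly match the SPSS 28 output reported in the article.
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

data = pd.read_csv("responses.csv")  # hypothetical: N = 850 rows, 35 Likert items

# Sampling-adequacy diagnostics customarily reported alongside an EFA
chi_sq, p_value = calculate_bartlett_sphericity(data)
kmo_per_item, kmo_total = calculate_kmo(data)
print(f"Bartlett chi2 = {chi_sq:.1f} (p = {p_value:.4f}), KMO = {kmo_total:.2f}")

# Principal extraction with Varimax rotation, three factors as in Study 1
efa = FactorAnalyzer(n_factors=3, method="principal", rotation="varimax")
efa.fit(data)

# Retain items whose highest absolute loading meets the |0.40| threshold
loadings = pd.DataFrame(efa.loadings_, index=data.columns, columns=["F1", "F2", "F3"])
retained = loadings[loadings.abs().max(axis=1) >= 0.40]
print(retained.round(2))
```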
Study 2: Construct Validation using Confirmatory Factor Analysis (CFA)
Sample and Measures
The second validation phase recruited a separate non-probability convenience sample of 900 public individuals (N=900) from various geographical regions. Inclusion criteria specifically targeted individuals who regularly consume news from digital sources or social media. The sample profile was similar to Study 1, with an average age of 33.1 years (SD= 8.50; range 18 to 68 years), and a majority being male (57.2%). Crucially, 20.5% of this sample reported recent exposure to questionable or low-credibility news content.
As in Study 1, this convenience sample is not statistically representative; however, the large sample size supports the robust statistical requirements of Confirmatory Factor Analysis (CFA).
Participants completed the shortened 10-item version of the Trust in AI-Generated News Content Scale. These 10 items were meticulously selected from the initial 35-item pool based on their high factor loadings and theoretical significance in the EFA (Study 1), ensuring that the three emergent dimensions were comprehensively represented.
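A compact, purely hypothetical illustration of such a shortlisting step (not the authors' exact procedure) could rank items by absolute loading within each factor and keep the top 4/3/3 items that clear the |0.50| cut-off used in Study 2:

```python
# Hypothetical item-shortlisting helper; `loadings` is a DataFrame of
# rotated loadings (items x factors), e.g., from the EFA sketch above.
import pandas as pd

def shortlist(loadings: pd.DataFrame, per_factor: dict, cutoff: float = 0.50) -> list:
    keep = []
    for factor, n in per_factor.items():
        # Rank this factor's items by absolute loading, keep the strongest n
        ranked = loadings[factor].abs().sort_values(ascending=False)
        keep += ranked[ranked >= cutoff].head(n).index.tolist()
    return keep

# e.g., shortlist(loadings, {"F1": 4, "F2": 3, "F3": 3}) -> 10 item names
```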
Procedures and Statistical Analysis
Data collection adhered strictly to the ethical and procedural guidelines established in Study 1. Special emphasis was placed on conducting the surveys in environments conducive to accurate and focused responses.
Statistical Analysis
Descriptive statistics and internal consistency (α) were calculated using IBM SPSS. The primary analytic tool for Study 2 was Confirmatory Factor Analysis (CFA), performed using AMOS (Version 26), to formally test the fit of the three-dimensional model derived in Study 1.
Model fit was evaluated using multiple established fit indices (a computational sketch follows this list):
Chi-square/degrees of freedom (χ²/df): Values between 2 and 3 indicate good fit, with values up to 5 considered acceptable.
Comparative Fit Index (CFI) and Tucker-Lewis Index (TLI): Values ≥0.90 are considered acceptable indicators of model fit.
Root Mean Square Error of Approximation (RMSEA): Values between 0.05 and 0.08 (with a 90% confidence interval, 90% CI) suggest a reasonable fit, and values up to 0.10 are marginally acceptable.
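As an illustrative aside, a three-factor CFA of this kind and its fit indices could be reproduced with the open-source semopy package in Python. This is a minimal sketch under stated assumptions: the lavaan-style model syntax, the hypothetical item names a1–c3, and the file name are not from the original study, whose estimates come from AMOS 26.

```python
# Minimal CFA sketch with semopy; AMOS 26 produced the reported estimates,
# so these results may differ slightly. Item names a1..c3 are hypothetical.
import pandas as pd
import semopy

MODEL = """
Accuracy    =~ a1 + a2 + a3 + a4
Objectivity =~ b1 + b2 + b3
Risk        =~ c1 + c2 + c3
"""

data = pd.read_csv("study2_items.csv")  # N = 900 rows, 10 Likert items
model = semopy.Model(MODEL)
model.fit(data)

# calc_stats reports chi2, degrees of freedom, CFI, TLI, RMSEA, among others
stats = semopy.calc_stats(model)
print(stats[["chi2", "DoF", "CFI", "TLI", "RMSEA"]])
print("chi2/df =", stats["chi2"].iloc[0] / stats["DoF"].iloc[0])
```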
Results of Study 1: Preliminary Scale Evaluation (EFA)
The preliminary evaluation of the scale utilized Exploratory Factor Analysis (EFA) on the initial 35 items. As presented in Table 1, the analysis retained only 18 items; 17 items were excluded because their factor loadings fell below the required threshold of |0.40| or because of structurally complex (cross-) loadings. The EFA supported a simplified three-factor structure, deviating from the initial five-dimensional design. Internal consistency (Cronbach’s Alpha, α) and homogeneity (average inter-item correlation, ri.i) were calculated for the three emergent factors (a computational sketch of these reliability indices follows the list):
Factor 1: Trust in Accuracy and Reliability, α=0.92, ri.i= 0.48 (8 items).
Factor 2: Trust in Objectivity and Fairness, α=0.88, ri.i= 0.42 (6 items).
Factor 3: Perceived Potential Risks, α=0.81, ri.i= 0.35 (4 items).
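The two reliability indices above are simple to compute directly. This is a minimal sketch using only numpy and pandas, where `items` is assumed to hold the responses to one factor's items:

```python
# Cronbach's alpha and the average inter-item correlation (ri.i) for the
# items of a single factor, computed from first principles.
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    # alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def avg_interitem_corr(items: pd.DataFrame) -> float:
    # Mean of the off-diagonal entries of the item correlation matrix
    corr = items.corr().to_numpy()
    return corr[~np.eye(corr.shape[0], dtype=bool)].mean()
```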

Results of Study 2:
Study 2 aimed to gather comprehensive psychometric evidence using Confirmatory Factor Analysis (CFA) on a streamlined 10-item, three-factor model. This shortened version (AI Trust Scale-10) was selected based on the highest factor loadings (> |0.50|) and theoretical centrality derived from the 18-item preliminary version, yielding the following fit indices:
χ²/df = 2.95
CFI = 0.95
TLI = 0.94
RMSEA = 0.05 (90% CI)
The combined dimensions successfully accounted for 50.15% of the total variance, and all retained items maintained a minimum factor loading of |0.50|, attesting to the practical significance of the selected indicators. As illustrated in Figure 2, the CFA confirmed that this 10-item scale provides a valid and reliable measurement tool for assessing public attitudes toward AI-generated news.

A1: I find that information in AI-generated news is reliable.
A2: I trust that AI presents news based on facts.
A3: I trust that AI-generated news is updated quickly.
A4: I believe that news generated by AI is free from bias.
B2: I trust that AI does not favor certain viewpoints in the news.
B3: I believe that AI treats all news sources fairly.
C1: I fear that the public will be manipulated through AI-generated news.
C2: I am concerned that AI may spread misleading information.
C3: I fear that AI may affect my privacy through targeted news.
The ten items in Figure 2 were chosen for the shortened version (AI Trust Scale-10) based on the highest factor loadings and their theoretical importance from the preliminary 18-item version extracted in the exploratory study (Table 1), representing the core of each of the three factors in the context of this study.
Figure 2 illustrates the factor structure of the shortened version of the Trust in AI-Generated News Content scale (10 items), which was confirmed through Confirmatory Factor Analysis (CFA) using AMOS software in Study 2 (N = 900). This figure reflects the theoretical three-factor model that emerged from the exploratory factor analysis in Study 1, consisting of: Trust in Accuracy and Reliability, Trust in Objectivity and Fairness, and Perceived Potential Risks.
This visual model, in addition to the obtained model fit indices (χ2/df = 2.95, CFI = 0.95, TLI = 0.94, RMSEA = 0.05), confirms that the shortened scale (10 items) fits the empirical data from the studied audience well. This reinforces the conviction that the three-factor structure is the most appropriate representation of trust in this context, providing a valid and reliable measurement tool for assessing public acceptance and their ability to deal with AI-generated news content.
Finally, internal consistency (Cronbach’s Alpha, α) and homogeneity (average inter-item correlation, ri.i) coefficients were calculated, with the following results:
Factor 1: Trust in Accuracy and Reliability (4 items), α=0.91, ri.i= 0.69.
Factor 2: Trust in Objectivity and Fairness (3 items), α=0.85, ri.i= 0.62.
Factor 3: Perceived Potential Risks (3 items), α=0.79, ri.i= 0.55.
Based on the above, additional evidence for factorial validity and internal consistency was found for the Trust in AI-Generated News Content scale in a sample of the studied audience. Considering the results, it became clear that the shortened version (10 items) represents the concept more appropriately and effectively.

The demographic profile of participants in Study 1, detailed in the relevant table, indicated a primary concentration within the 26-35 age bracket. This concentration reflects a segment characterized by high engagement in digital news consumption. Furthermore, the sample exhibited robust representation across all educational levels, with the majority of respondents holding a university degree. This balanced distribution across life experience and educational attainment was crucial. It ensured sufficient heterogeneity to explore potential variations in trust perceptions across different educational backgrounds and age cohorts, thereby enhancing the contextual relevance of the scale validation results.

Analysis of the mean overall trust scores by gender in Study 1 (as presented in the relevant table) revealed a statistically significant difference. The mean trust score for males (M = 67.80) was marginally higher than the score recorded for females (M = 65.20). Despite the small magnitude of this difference, the result (p = 0.021) confirmed statistical significance at the 0.05 level, indicating that males within this sample exhibited a greater disposition toward trusting AI-generated news content. This observation aligns with existing literature reporting gender disparities in technology acceptance rates or distinct news consumption patterns.

The correlation matrix between the three factors of the 10-item scale in Study 2 (Table 4) revealed essential internal relationships. A positive, moderate, and statistically significant correlation was observed between “Trust in Accuracy and Reliability” and “Trust in Objectivity and Fairness” (r = 0.58). This outcome suggests that public confidence in the performance aspects of AI-generated news is cohesive. Furthermore, statistically significant negative correlations were found between “Perceived Potential Risks” and the other two factors (r = -0.40 and r = -0.35, respectively). This confirms that an elevated perception of risks acts as an inhibitor, systematically reducing reported trust in both accuracy and impartiality. Collectively, these results attest to the internal coherence of the overall trust construct, demonstrating that the three dimensions are interdependent and interact dynamically to form the public’s viewpoint on automated news content.
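For illustration, a Table 4-style inter-factor correlation matrix can be derived from summed factor scores, as in this sketch (the a/b/c item labels follow the hypothetical naming used earlier, not the study's data files):

```python
# Inter-factor Pearson correlations from summed factor scores.
import pandas as pd
from scipy.stats import pearsonr

data = pd.read_csv("study2_items.csv")  # hypothetical file of item responses
scores = pd.DataFrame({
    "accuracy":    data[["a1", "a2", "a3", "a4"]].sum(axis=1),
    "objectivity": data[["b1", "b2", "b3"]].sum(axis=1),
    "risk":        data[["c1", "c2", "c3"]].sum(axis=1),
})
print(scores.corr().round(2))  # full 3x3 correlation matrix
r, p = pearsonr(scores["accuracy"], scores["risk"])
print(f"accuracy vs. risk: r = {r:.2f}, p = {p:.4f}")
```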
Analysis of Convergent and Predictive Validity of the Scale:
To reinforce the scale’s psychometric evidence, additional analyses were performed to specifically assess its convergent validity – the degree to which the scale’s factors correlate with measures of theoretically related concepts. This involved examining the scale and its sub-factors against two relevant external variables. The first was AI Awareness in Media, which quantified the public’s understanding of AI technologies used in news production. The second was General Propensity for News Skepticism, a measure capturing an individual’s chronic tendency toward distrusting news content broadly. These analyses were essential for confirming that the AI Trust Scale accurately measures the intended theoretical construct by demonstrating expected relationships with established external measures.

The results confirmed expected theoretical alignments, providing robust evidence for convergent validity. Specifically, AI Awareness in Media exhibited significant positive correlations with both the Trust in Accuracy (r=0.45) and Objectivity (r=0.38) factors, indicating that greater understanding of AI operations in news production fosters higher positive trust. The concurrent negative correlation with Perceived Potential Risks (r=-0.30) suggests that increased awareness helps mitigate uninformed concerns. Conversely, General Propensity for News Skepticism correlated negatively and strongly with Trust in Accuracy (r=-0.52) and Objectivity (r=-0.48), confirming that pervasive news skepticism transfers to lower confidence in AI-generated content performance. Simultaneously, skepticism showed a positive correlation with Perceived Potential Risks (r=0.41), linking a generalized cautious attitude to heightened risk awareness. These strong, statistically significant findings affirm the scale’s alignment with established theoretical expectations.
Predictive validity assesses the extent to which a measurement tool can forecast relevant future behaviors or outcomes. To establish this validity for the AI Trust Scale, the capacity of the three derived trust factors to predict specific public behaviors regarding AI-generated news content was analyzed. The key dependent behavioral variables examined were: the Intention to Share AI-Generated News (capturing willingness to repost or disseminate content), Willingness to Pay for AI-Generated News (serving as an indicator of perceived acceptance and reliability), and Recommendation of AI-Generated Content to Others (signaling high trust and perceived dependability). A Multiple Regression analysis was employed, utilizing the established trust factors as the independent predictor variables against these three behavioral outcomes.
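A minimal sketch of one such regression (intention to share) using statsmodels is given below; z-scoring all variables first makes the coefficients comparable to the standardized β weights reported next. Column and file names are hypothetical:

```python
# Standardized multiple regression: three trust factors predicting the
# intention to share AI-generated news.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("study2_scores.csv")  # hypothetical factor/outcome scores
cols = ["accuracy", "objectivity", "risk", "share_intention"]
z = (df[cols] - df[cols].mean()) / df[cols].std(ddof=1)  # z-scores -> betas

X = sm.add_constant(z[["accuracy", "objectivity", "risk"]])
fit = sm.OLS(z["share_intention"], X).fit()
print(fit.summary())  # standardized coefficients, R-squared, p-values
```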

The Multiple Regression results demonstrate that the scale factors are significant predictors of public behavioral intentions. Specifically, “Trust in Accuracy and Reliability” (β = 0.35) and “Trust in Objectivity and Fairness” (β = 0.28) positively and significantly predicted the intention to share AI-generated news. Conversely, “Perceived Potential Risks” (β = -0.22) predicted sharing negatively. This suggests that confidence in content quality drives dissemination, while risk perception suppresses it. A nearly identical pattern emerged when predicting willingness to pay, where the positive trust factors (β = 0.40 and β = 0.32) significantly increased willingness, while perceived risks (β = -0.25) reduced it, indicating a public disposition to invest in content they deem objective and reliable. Similar associations were found for content recommendation to others (β = 0.38, β = 0.30, and β = -0.20 for the positive and risk factors, respectively). The models explained between 28% and 35% of the variance (R²) in the behavioral outcomes, a range that is both reasonable and statistically significant for behavioral research. Collectively, these consistent findings provide robust evidence for the predictive validity of the trust scale.

Table 7 displays the findings from a one-way Analysis of Variance (ANOVA), which tested for mean differences in the three trust factors based on participants’ preferred type of news source (Traditional, Digital, or Social Media) in Study 2.
The ANOVA results indicate that statistically significant differences exist across all three trust factors based on source preference (p ≤ 0.002 for all):
Trust in Accuracy and Reliability (Factor 1): Participants who prefer Traditional news sources (TV, newspapers) reported the highest mean trust scores (M=18.20), followed by Digital news users (M = 17.50), whereas Social Media users reported the lowest trust (M = 15.80). This difference is highly significant [F(2, 897) = 5.10, p = 0.001].
Trust in Objectivity and Fairness (Factor 2): A similar pattern emerged regarding perceived fairness. Traditional news adherents scored highest (M = 13.50), significantly differing from Social Media users who scored lowest (M = 11.50), with the difference being statistically robust (F(2, 897) = 4.85, p = 0.002).
Perceived Potential Risks (Factor 3): This factor showed an inverse relationship. Individuals relying on social media perceived the highest potential risks from AI-generated content (M = 8.50), while Traditional news consumers perceived the lowest risks (M = 6.80). This variance is highly significant [F(2, 897) = 6.20, p < 0.001].
These findings strongly suggest that the mechanism of news consumption significantly influences an individual’s perception of AI-generated content. Specifically, individuals who gravitate towards traditional media demonstrate a higher generalized baseline of institutional trust, which appears to transfer to or interact favorably with their assessment of automated news accuracy and objectivity. Conversely, the elevated risk perception among social media users may reflect a higher level of critical awareness or media skepticism regarding source complexity and frequent exposure to misinformation on those platforms. Despite the statistical significance across all factors, the observed effect sizes were small to moderate (partial η² ranging from 0.010 to 0.013). This indicates that while the preferred news source is a reliable predictor of trust dimensions, it accounts for a modest portion of the overall variance in trust scores.
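As a sketch, a one-way ANOVA of this kind, together with its partial η² effect size, can be computed with statsmodels as follows (column names hypothetical; in a one-way design, partial η² coincides with η²):

```python
# One-way ANOVA of Factor 1 scores by preferred news source, plus the
# partial eta-squared effect size.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

df = pd.read_csv("study2_scores.csv")  # hypothetical scores + group column
fit = ols("accuracy ~ C(source_pref)", data=df).fit()
table = sm.stats.anova_lm(fit, typ=2)
print(table)  # sum_sq, df, F, PR(>F) for the effect and the residual

# partial eta^2 = SS_effect / (SS_effect + SS_residual)
ss_effect = table.loc["C(source_pref)", "sum_sq"]
ss_resid = table.loc["Residual", "sum_sq"]
print("partial eta^2 =", ss_effect / (ss_effect + ss_resid))
```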
Analysis of Differences in Trust Factors by Demographic Variables and News Consumption Habits

Table 8 presents the findings of a one-way Analysis of Variance (ANOVA) conducted to explore differences in the mean scores of the three trust factors based on participants’ educational attainment. The analysis confirmed statistically significant differences across all three factors, evidenced by p-values below the 0.05 threshold (p ≤ 0.005 for all). Specifically, trust in both Accuracy and Reliability [F(2, 897) = 4.20, p = 0.005] and Objectivity and Fairness [F(2, 897) = 3.90, p = 0.009] showed a clear positive relationship with education; individuals with a university-level education reported the highest mean scores for these two trust dimensions (M Accuracy = 18.00; M Objectivity = 13.50). Conversely, the factor measuring Perceived Potential Risks was inversely related to education, with the lowest educational group (less than secondary) reporting the highest risk-perception mean score (M = 8.80) compared to university graduates (M = 6.50), with strong significance [F(2, 897) = 5.10, p = 0.001].
The pattern suggests that a higher level of formal education may be associated with increased confidence in the fundamental capabilities and integrity of sophisticated systems like AI in news production, or possibly a higher acceptance of technological advancement in general. However, despite the strong statistical significance, the observed effect sizes were consistently small across all three factors (partial η² ranging from 0.008 to 0.011). This finding indicates that while educational level is a reliable predictor of variations in trust dimensions, its practical impact is limited, as it accounts for only a minimal percentage of the total variance in the trust scores. This necessitates further exploration of other, potentially stronger, cognitive or behavioral predictors not directly captured by simple demographic variables.

Table 9 summarizes the results of the one-way Analysis of Variance (ANOVA) comparing the mean scores of the three trust factors across four distinct age groups in Study 2. The analysis revealed statistically significant mean differences across all trust dimensions, with highly significant p-values (p ≤ 0.004 for all). A clear and consistent trend emerged: Trust in Accuracy and Reliability [F(3, 896) = 3.80, p = 0.004] and Trust in Objectivity and Fairness [F(3, 896) = 4.10, p = 0.003] both demonstrated a positive relationship with age. Older age cohorts (46 and above) consistently reported the highest mean scores for both dimensions (M Accuracy = 18.50; M Objectivity = 13.90), indicating greater acceptance and confidence in the capabilities and fairness of AI-generated content compared to the youngest group (18-25). Conversely, the analysis of Perceived Potential Risks showed a significant inverse relationship [F(3, 896) = 5.50, p < 0.001]. The youngest age group (18-25) reported the highest perception of risks (M = 7.90), while the oldest group (46 and above) reported the lowest (M = 6.00).
This suggests that younger, and typically more digitally native, audiences maintain a higher level of skepticism and critical awareness concerning the potential pitfalls and security issues associated with AI in media. Although all differences were statistically significant, the effect sizes were small (Partial η2 ranged from 0.013 to 0.018). This indicates that while age reliably predicts the direction of differences in trust, the magnitude of its influence on the total variance of trust scores is modest.

Table 10 presents the results of the one-way Analysis of Variance (ANOVA) examining differences in trust factor mean scores based on the self-reported frequency of exposure to AI-generated news content in Study 2. The analysis revealed highly statistically significant differences across all three trust factors (p < 0.001 for all). A robust positive dose-response relationship was observed for the two trust factors: individuals who reported consuming AI-generated content often scored highest on Trust in Accuracy and Reliability (M = 18.80) and Trust in Objectivity and Fairness (M = 13.90). Conversely, those who reported being exposed rarely or never scored lowest on these dimensions (M Accuracy = 16.00; M Objectivity = 11.20). The factor measuring Perceived Potential Risks showed a significant inverse trend [F(2, 897) = 7.20, p < 0.001]. Participants who rarely or never encounter AI-generated news perceived the highest risks (M = 9.20), while those frequently exposed perceived the lowest risks (M = 5.80).
This suggests that familiarity breeds confidence: increased frequency of exposure may lead to habituation, thereby reducing perceived psychological risk and simultaneously enhancing confidence in the systems’ performance. Although these differences were strongly significant, the overall effect sizes remained small (partial η² ranging from 0.013 to 0.016), indicating that while exposure frequency is a predictor, its practical influence on the overall variance in trust remains modest.
Psychometric Rigor and Factor Structure:
The two-phase methodological approach successfully established the psychometric properties of the Trust in AI-Generated News Content Scale. The Exploratory Factor Analysis (EFA) in Study 1 identified a robust, concise three-factor structure – “Trust in Accuracy and Reliability,” “Trust in Objectivity and Fairness,” and “Perceived Potential Risks” – which was subsequently validated in Study 2. This dimensionality decisively confirms that public trust in AI is a product of multifaceted evaluations, directly supporting the multidimensional tenets of the Trust in Automation Systems Theory (Lee, See, 2004: 63). The final shortened 10-item scale demonstrated exceptional rigor: Confirmatory Factor Analysis (CFA) yielded excellent model fit indices (χ2/df=2.95, CFI=0.95, TLI=0.94, RMSEA=0.05), aligning perfectly with established standards (Hu, Bentler, 1999: 29). High internal consistency was affirmed, with Cronbach’s Alpha (α) exceeding 0.79 for all factors, confirming high reliability. Furthermore, factor loadings exceeding 0.50 for all retained items validate the items’ practical significance and capacity to effectively measure their respective latent factors (Hair, Black, Babin, Anderson, 2019).
Internal Dynamics and Validity Evidence:
Analysis of the scale’s internal correlations strongly supports the theoretical framework. Performance dimensions (Accuracy and Objectivity) were positively correlated, while a critical negative correlation emerged with Perceived Potential Risks. This logical inverse relationship confirms the theoretical premise that elevated risk awareness acts as a powerful constraining factor on positive trust. The prominence of perceived Accuracy and Objectivity suggests they are the fundamental practical drivers of public trust, demanding prioritization by AI developers. The status of Perceived Potential Risks as a standalone dimension highlights that the public is not passive but critically aware of inherent threats (e.g., algorithmic bias). Furthermore, the study provided strong evidence for convergent validity through significant correlations with ‘AI awareness in media’ (positive) and ‘general propensity for news skepticism’ (negative), aligning the scale with theoretically consistent constructs. Crucially, predictive validity was established: the three factors significantly predicted key public behaviors, including willingness to share, pay for, and recommend AI-generated news, validating the scale’s utility beyond mere measurement.
Demographic and Behavioral Predictors of Trust Variability:
The study identified significant variability in trust scores based on demographic and consumption patterns:
Exposure and Habituation: A direct and significant relationship was confirmed between increased frequency of exposure to AI news and higher trust in its performance, while simultaneously reducing perceived risks (Table 10). This suggests a strong habituation effect where familiarity translates into acceptance and diminished skepticism, although the effect size (Partial η2) remains small.
Education and Age: Higher education was associated with increased trust in content performance and lower risk perception (Table 8), likely reflecting enhanced analytical and media literacy skills. Conversely, older age groups showed higher trust in AI performance, while younger, digitally native audiences were significantly more aware of potential risks (Table 9). This disparity reflects divergent generational experiences with technology and misinformation, where young users’ heightened caution contrasts with older users’ potential transfer of traditional media trust.
Consumption Preferences: The preferred news source significantly impacted trust (Table 7). Individuals preferring traditional media showed higher transferred trust in accuracy and objectivity, whereas social media users demonstrated a significantly higher perception of risks, aligning with literature linking social platforms to increased exposure to questionable content.
These findings, supported by statistically significant results across demographic and behavioral variables, underscore the paramount importance of segment-specific factors in understanding trust variability. The consistent observation of small effect sizes (Partial η2) highlights that while these factors are reliable predictors, they account for only a modest portion of the overall variance in trust. This necessitates the design of targeted media awareness campaigns tailored to the unique profiles (e.g., age, education, and consumption habits) of diverse public segments.
The paradigmatic shift driven by the increasing integration of Artificial Intelligence (AI) into the news production and distribution lifecycle served as the primary impetus for this study. Our central objective was to construct and rigorously evaluate the psychometric properties of a reliable and valid Trust in AI-Generated News Content Scale for a general public, thereby addressing a critical methodological gap in digital media scholarship. Utilizing a sequential two-phase design, including Exploratory Factor Analysis (EFA) in Study 1 and Confirmatory Factor Analysis (CFA) in Study 2, the study definitively demonstrated that trust in this complex context is embodied by a statistically significant three-dimensional factor structure.
This tripartite division – comprising «Trust in Accuracy and Reliability», «Trust in Objectivity and Fairness», and «Perceived Potential Risks» – provides a solid conceptual framework. It reflects the public’s multifaceted evaluation, aligning closely with the Trust in Automation Systems Theory by incorporating competence, integrity, and safety/benevolence (Lee, See, 2004: 63). The methodological rigor was affirmed by excellent CFA model fit indices (CFI = 0.95 and RMSEA = 0.05) for the shortened 10-item scale, confirming its strong agreement with empirical data. Furthermore, high factor loadings (≥ |0.50|) and robust internal consistency (α ranging between 0.79 and 0.91) enhanced the scale’s reliability. Beyond structural validation, the scale also provided evidence of strong convergent and predictive validity, confirming its utility in correlating with established theoretical constructs and anticipating critical public behaviors.
The scientific contribution extends beyond tool development, offering crucial insights into how trust is mediated by individual differences. Results concerning variations in trust levels based on gender, educational level, age group, and news consumption patterns are not mere statistical observations but calls for further reflection on the socio-demographic and behavioral factors shaping these perceptions. For instance, the heightened risk perception among social media users warrants in-depth study of how the platform environment affects credibility. Similarly, the relationship between repeated exposure and increased trust suggests a «habituation» dynamic that requires further research to determine whether it stems from genuine understanding or superficial acceptance.
In conclusion, this study contributes a robust and highly credible psychometric tool to the literature of digital media and social psychology. The availability of this validated scale enables researchers and practitioners to systematically measure trust, facilitating the development of informed strategies aimed at enhancing credibility and transparency in AI-generated news. Beyond its academic objective, this work is vital for empowering communities to adapt effectively to the challenges and opportunities posed by Artificial Intelligence, ultimately supporting enhanced media literacy and social wellbeing in the era of information overload.
References

Baptista J. P., Rivas-de-Roca R., Gradim A., Pérez-Curiel C. (2025) Human-made news vs AI-generated news: a comparison of Portuguese and Spanish journalism students’ evaluations. Humanities and Social Sciences Communications 12, Article 567. DOI: 10.1057/s41599-025-04872-2
Chung W. Y., Nam J., Ryong K., Lee D. (2022) When, how, and what kind of information should Internet service providers disclose? A study on the transparency that users want. Telematics and Informatics 70: 101799. DOI: 10.1016/j.tele.2022.101799.
Choung H., David P., Ross A. (2023) Trust in AI and Its Role in the Acceptance of AI Technologies. International Journal of Human-Computer Interaction 39 (9): 1727–1739. DOI: 10.1080/10447318.2022.2050543
Hair J. F., Black W. C., Babin B. J., Anderson R. E. (2019) Multivariate Data Analysis. 8th ed. Andover, UK: Cengage Learning.
Hu L., Bentler P. M. (1999) Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling: A Multidisciplinary Journal 6 (1): 1–55. DOI: 10.1080/10705519909540118
Lee J. D., See K. A. (2004) Trust in automation: designing for appropriate reliance. Human Factors 46 (1): 50–80. DOI: 10.1518/hfes.46.1.50_30392
O’Neil C. (2016) Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. New York: Crown Publishing Group.
Sanchez-Acedo A., Carbonell-Alcocer A., Gertrudix M., Rubio-Tamayo J.-L. (2024) The challenges of media and information literacy in the artificial intelligence ecology: deepfakes and misinformation. Communication & Society 37 (4): 223–239. DOI: 10.15581/003.37.4.223-239
Temir E. (2020) Deepfake: New Era in The Age of Disinformation and End of Reliable Journalism. Selçuk İletişim Dergisi 12 (2): 1009–1024. DOI: 10.18094/JOSC.685338
Versteegh M. (2019) Trust in automated vehicles: a systematic review. Bachelor’s thesis. University of Twente. Available at: https://essay.utwente.nl/78372/1/Versteegh_BA_Psychology.pdf (accessed: 06.06.2025).
Zhao X., You W., Zheng Z., Shi S., Lu Y., Sun L. (2025) How do consumers trust and accept AI agents? An extended theoretical framework and empirical evidence. Behavioral Sciences 15 (3): 337. DOI: 10.3390/bs15030337
How to cite: Mazari N. Audience Trust in AI-Generated News: Construction and Validation of a Three-Factor Measurement Instrument. Вестник Московского университета. Серия 10. Журналистика. 2025. No. 6. Pp. 53–79. DOI: 10.55959/msu.vestnik.journ.6.2025.5379
Received by the editorial board on 02.08.2025

