The rapid proliferation of artificial intelligence (AI) technologies has fundamentally transformed how humans interact with digital systems. Among these innovations, OpenAI's ChatGPT has emerged as a particularly influential tool, capable of generating coherent, contextually relevant text across diverse domains. From academic research and educational applications to creative writing and professional communication, ChatGPT offers unprecedented opportunities to enhance human productivity and knowledge acquisition. However, as AI systems increasingly mediate information, learning, and decision-making processes, understanding the dynamics of user trust becomes critical. Trust determines whether users will engage meaningfully with AI, rely on its outputs, and integrate it effectively into their cognitive and social routines. Without sufficient trust, even highly capable systems risk underutilization, misinterpretation, or rejection.
While studies on human-computer interaction have long emphasized trust, research focusing on large language models (LLMs) like ChatGPT remains nascent. Previous investigations often address generic or professional users, leaving significant gaps in understanding trust mechanisms among university students, who represent a pivotal demographic in educational technology adoption. Students encounter ChatGPT in varied academic contexts, from seeking assistance in assignments to exploring complex problem-solving tasks, each scenario potentially shaping trust differently. Moreover, trust is rarely unidimensional; cognitive evaluations of system competence, affective perceptions of reliability, and behavioral intentions to rely on outputs collectively contribute to a nuanced trust profile.
Adding further complexity, user attributes such as prior AI experience, disciplinary background, and technology familiarity can modulate trust perceptions. For instance, students with advanced digital literacy may be more confident in interpreting ChatGPT outputs, whereas novices might exhibit skepticism or overreliance. Similarly, task context plays a crucial role: tasks requiring creative ideation, factual accuracy, or collaborative problem solving may elicit distinct trust responses. In addition, social cognition, including peer influence, normative expectations, and perceived societal endorsement of AI tools, shapes how students internalize and act upon their trust judgments.
To address these intertwined factors, this study adopts a mixed-methods approach, integrating quantitative survey data, experimental task performance, and qualitative interviews. By examining cognitive, affective, and behavioral trust dimensions, alongside student attributes, task scenarios, and social influences, this research seeks to construct a comprehensive model of trust in ChatGPT among university populations. The study’s design enables not only the identification of direct relationships but also the exploration of mediating and moderating effects, offering richer explanatory power than single-method approaches.
The research is guided by three central questions:
RQ1: How do university students’ personal attributes influence their cognitive, affective, and behavioral trust in ChatGPT?
RQ2: In what ways do task contexts and specific use scenarios shape trust dimensions and reliance behaviors?
RQ3: How does social cognition interact with personal and contextual factors to facilitate or hinder trust formation?
This study contributes to the literature in multiple ways. Theoretically, it extends trust models from general human-computer interaction to LLMs in educational settings, highlighting multidimensional and context-sensitive dynamics. Methodologically, the mixed-methods design combines statistical rigor with narrative depth, capturing both measurable patterns and experiential insights. Practically, findings inform the design of AI-mediated educational tools, guiding interventions to enhance trust, reduce misuse, and optimize student engagement.
In summary, as ChatGPT continues to permeate academic and professional environments, understanding why, when, and how students trust such systems is essential for maximizing their potential. By integrating personal attributes, task-specific contexts, and social cognitive processes into a unified framework, this study provides actionable insights for researchers, educators, and developers, advancing both the science and practice of AI-human interaction.
Literature Review
User trust has long been recognized as a critical determinant of effective human-computer interaction. In traditional settings, trust is conceptualized as a multidimensional construct, encompassing cognitive trust, affective trust, and behavioral trust. Cognitive trust refers to users’ rational assessment of a system’s competence and reliability, often grounded in prior experiences and perceived technical capability. Affective trust captures the emotional bond or comfort users feel when interacting with a system, reflecting perceived transparency, empathy, and responsiveness. Behavioral trust, in contrast, is observed through concrete actions, such as reliance on system outputs or frequency of use.
Recent studies on large language models (LLMs) like ChatGPT suggest that these dimensions interact dynamically. Users may cognitively recognize the system’s ability to generate accurate text yet remain affectively cautious, especially in high-stakes academic tasks. Conversely, strong emotional trust may encourage overreliance on outputs, highlighting the need to consider multiple trust dimensions simultaneously. Mixed-methods investigations have increasingly emphasized this multidimensionality, combining surveys, task-based experiments, and qualitative interviews to capture both subjective perceptions and observable behaviors.
Individual differences among users significantly shape trust in AI systems. Research indicates that demographic factors, such as age, gender, and disciplinary background, influence trust perceptions. For instance, university students in STEM fields often display higher cognitive trust in AI tools compared to peers in humanities disciplines, likely due to greater familiarity with algorithmic processes and computational thinking. Similarly, prior AI experience and digital literacy positively correlate with both cognitive and behavioral trust. Students who have previously engaged with AI tools are more adept at evaluating output accuracy and navigating system limitations.
Personality traits and cognitive styles also play a role. Studies in human-computer interaction suggest that risk-averse individuals may hesitate to rely on AI-generated outputs, even when objectively accurate. Conversely, those scoring high in openness or curiosity may approach AI with exploratory intent, forming trust more rapidly but potentially exposing themselves to overreliance. These findings underscore the importance of considering both stable attributes (e.g., personality, academic major) and modifiable skills (e.g., AI familiarity, digital literacy) in understanding trust formation.
Trust is not uniform across use scenarios; it is highly contingent on task characteristics. Empirical research demonstrates that the nature of tasks—ranging from information retrieval and summarization to creative writing and problem-solving—modulates trust perceptions. For factual or analytical tasks, users often prioritize cognitive trust, scrutinizing the accuracy and consistency of AI outputs. In contrast, for creative or brainstorming tasks, affective trust may dominate, with users valuing fluency, originality, and inspiration over strict correctness.
Task complexity further shapes trust dynamics. Complex or ambiguous tasks increase uncertainty, often prompting users to seek validation from multiple sources. In such cases, AI tools like ChatGPT are evaluated not only on accuracy but also on their ability to guide reasoning processes and support decision-making. Studies also highlight that repeated exposure to task-specific interactions can refine trust calibration: initial skepticism may decrease as users observe consistent system performance, whereas inconsistent outputs can rapidly erode both cognitive and affective trust.
Beyond individual and task factors, social cognition—how users perceive others’ opinions, societal norms, and shared expectations—plays a critical role in trust formation. Social cognitive theory suggests that individuals learn from observing peers, evaluating whether AI tools are accepted or endorsed within their community. For university students, peer influence, faculty recommendations, and broader societal narratives about AI reliability contribute to shaping trust attitudes.
Research on social information effects indicates that positive reinforcement from peers, demonstrable success stories, or institutional endorsements can enhance both affective and behavioral trust. Conversely, reports of AI errors, ethical controversies, or public skepticism may induce caution or resistance. These findings highlight that trust is not solely an internal cognitive or emotional judgment but is co-constructed through social interactions and perceived collective approval.
The literature increasingly suggests that user trust in LLMs like ChatGPT emerges from the interaction of multiple factors rather than any single determinant. Cognitive assessments are filtered through individual attributes, task demands, and social cues, producing complex patterns of trust that may vary across contexts. Mixed-methods research, combining surveys with task-based experiments and qualitative interviews, is particularly valuable for capturing this complexity. Quantitative data reveal correlations and causal pathways, while qualitative insights provide rich narratives of user experience, uncertainty, and decision-making strategies.
Despite growing attention, several gaps remain. First, empirical studies specifically targeting university student populations are limited, leaving uncertainty about how academic expertise, learning goals, and age-related digital skills shape trust. Second, few studies integrate cognitive, affective, and behavioral dimensions with task-specific and social influences in a comprehensive model, limiting the predictive power for real-world adoption. Third, longitudinal investigations examining how trust evolves with repeated interactions are scarce, particularly in educational contexts where student reliance on AI may fluctuate with academic pressures, deadlines, and feedback.
Addressing these gaps, the present study seeks to systematically explore how user attributes, task contexts, and social cognition collectively influence cognitive, affective, and behavioral trust in ChatGPT among university students. By integrating quantitative and qualitative approaches, the study aims to provide both generalizable insights and nuanced understandings, bridging theoretical development with practical implications for AI-mediated learning.
Theoretical Framework and Hypotheses
Understanding user trust in AI systems requires integrating insights from multiple theoretical perspectives. First, the Technology Acceptance Model (TAM) posits that perceived usefulness and perceived ease of use drive user acceptance of technology. In the context of ChatGPT, cognitive trust aligns closely with perceived competence, accuracy, and utility of AI-generated outputs, while affective trust is influenced by perceived ease of interaction and emotional comfort.
Second, Social Cognitive Theory (SCT) emphasizes that individuals learn and adapt behaviors based on personal experience, observation of others, and perceived social norms. For university students, this suggests that social cognition—peer recommendations, faculty guidance, and collective attitudes toward AI—can strongly modulate trust formation and reliance behavior.
Third, human-computer trust models highlight the multidimensional nature of trust, distinguishing between cognitive, affective, and behavioral components. Cognitive trust reflects rational evaluations of system capability, affective trust captures emotional attachment or comfort, and behavioral trust is manifested in observable reliance and continued usage. Integrating these frameworks enables a comprehensive approach to examining trust as an outcome influenced by individual, contextual, and social factors.
Based on the literature review and theoretical foundations, the study considers the following primary variables:
Independent Variables (IVs): User Attributes
Prior AI Experience: Frequency and depth of previous interactions with AI tools.
Digital Literacy: Ability to navigate and critically evaluate digital platforms.
Academic Background: Discipline (STEM vs. non-STEM) influencing familiarity with technical content.
Personality Traits: Particularly openness to experience and risk aversion.
Moderating / Mediating Variables:
Task Context: Type of task performed using ChatGPT (e.g., information retrieval, creative writing, problem-solving). Task complexity and ambiguity are considered moderators that shape trust responses.
Social Cognition: Perceptions of peer and faculty endorsement, societal narratives, and normative influence regarding AI usage.
Dependent Variables (DVs): Trust Dimensions
Cognitive Trust: Rational assessment of ChatGPT’s accuracy, reliability, and usefulness.
Affective Trust: Emotional comfort, confidence, and willingness to engage with the system.
Behavioral Trust: Observable reliance on ChatGPT outputs, including frequency of use and degree of reliance in academic tasks.
Prior research suggests that individual differences shape trust formation in nuanced ways. Based on empirical evidence:
H1a: Higher levels of prior AI experience positively influence cognitive trust in ChatGPT.
H1b: Greater digital literacy is positively associated with cognitive and behavioral trust.
H1c: STEM students exhibit higher cognitive trust compared to non-STEM students.
H1d: Openness to experience positively correlates with affective trust, whereas risk aversion negatively correlates with both affective and behavioral trust.
Task characteristics critically shape trust responses, with complexity and uncertainty serving as key moderators:
H2a: Trust in ChatGPT varies across task types; cognitive trust is stronger for factual tasks, while affective trust is more pronounced for creative or ideation tasks.
H2b: Task complexity moderates the relationship between cognitive trust and behavioral trust, such that reliance decreases under highly complex or ambiguous tasks unless the system provides adequate transparency.
Social cues are likely to amplify or attenuate trust:
H3a: Positive peer and faculty endorsement increases affective and behavioral trust.
H3b: Perceived social approval mediates the relationship between cognitive trust and actual reliance behavior.
H3c: Negative social narratives or observed errors reduce affective trust and may weaken the influence of cognitive trust on behavior.
Integrating these dimensions, the model posits that:
User attributes directly influence cognitive and affective trust.
Task context moderates the translation of trust into behavioral reliance.
Social cognition mediates and moderates the relationships between trust dimensions and actual use.
Based on the above, a conceptual model can be visualized as follows:
User Attributes → Cognitive Trust → Behavioral Trust
User Attributes → Affective Trust → Behavioral Trust
Task Context moderates Cognitive/Affective → Behavioral Trust pathways
Social Cognition mediates the Cognitive/Affective → Behavioral Trust relationships and exerts direct influence on trust dimensions
This integrated model captures the dynamic interplay between individual characteristics, contextual factors, and social influences in shaping trust toward ChatGPT. It provides a framework for empirical testing using a mixed-methods approach, combining quantitative surveys, experimental task observations, and qualitative interviews.
Methodology
This study adopts a mixed-methods research design to comprehensively investigate the factors influencing university students’ trust in ChatGPT. The rationale for this approach lies in the complexity of trust formation, which encompasses cognitive, affective, and behavioral dimensions influenced by individual attributes, task contexts, and social cognition. Quantitative methods allow measurement of correlations, causal pathways, and moderating or mediating effects, while qualitative methods provide rich insights into user experiences, perceptions, and nuanced trust dynamics that cannot be captured by surveys alone.
The research design includes three complementary components:
Quantitative survey to assess user attributes, trust dimensions, and perceived social influences.
Experimental tasks to observe trust-related behaviors in controlled ChatGPT interactions under different task contexts.
Semi-structured interviews to explore students’ subjective experiences, reasoning processes, and social considerations in depth.
This multi-pronged design ensures triangulation of data, enhancing both validity and explanatory depth.
Participants were undergraduate and graduate students from multiple disciplines at a large university. Recruitment aimed to achieve diversity in academic background, digital literacy, and prior AI experience. A total of 320 students participated in the survey, of whom 120 were randomly selected for task-based experiments, and 30 participated in follow-up interviews.
Inclusion criteria included:
Age 18 or older
Regular use of digital technologies for academic purposes
Basic familiarity with ChatGPT or willingness to engage in a short orientation session
Sampling employed a stratified random approach, ensuring proportional representation of STEM and non-STEM students, as well as gender balance, to account for potential moderating effects of academic discipline and demographic factors on trust.
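To make the sampling procedure concrete, the following minimal sketch shows one way proportional stratified selection could be implemented in Python with pandas; it is illustrative only, and the DataFrame and column names (survey_df, discipline, gender) are hypothetical rather than taken from the study's materials.

```python
# Illustrative sketch (not the authors' code): proportional stratified sampling
# of survey respondents for the experimental phase. Assumes a pandas DataFrame
# `survey_df` with hypothetical columns "discipline" (STEM / non-STEM) and "gender".
import pandas as pd

def stratified_sample(df: pd.DataFrame, n_total: int, strata: list, seed: int = 42) -> pd.DataFrame:
    """Draw roughly n_total rows, proportional to the size of each stratum."""
    frac = n_total / len(df)  # sampling fraction applied within every stratum
    return (
        df.groupby(strata, group_keys=False)
          .apply(lambda g: g.sample(frac=frac, random_state=seed))
    )

# e.g., select ~120 of the 320 respondents for the task-based experiments
# experiment_df = stratified_sample(survey_df, 120, ["discipline", "gender"])
```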
A structured questionnaire was developed, integrating validated scales from prior research. The survey measured:
User Attributes:
AI Experience (adapted from McKnight et al., 2011)
Digital Literacy (adapted from Ng, 2012)
Academic Background and prior coursework
Personality traits (openness, risk aversion; using Big Five Inventory subscales)
Trust Dimensions:
Cognitive Trust (perceived competence, accuracy, reliability)
Affective Trust (comfort, confidence, emotional security)
Behavioral Trust (self-reported reliance frequency)
Social Cognition:
Peer influence, faculty endorsement, and normative expectations regarding AI use
All items were rated on a 5-point Likert scale, and the instrument was piloted with 30 students to assess clarity, internal consistency, and construct validity. Cronbach’s alpha values exceeded 0.80 for all subscales, indicating high reliability.
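For readers who wish to check reliability on their own item-level data, the sketch below shows the standard Cronbach's alpha computation; the study reports alphas obtained in SPSS, so this is only an illustrative reimplementation with hypothetical column names.

```python
# Illustrative sketch: Cronbach's alpha for one subscale. Assumes `items` is a
# pandas DataFrame with one column per Likert item and one row per respondent.
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    items = items.dropna()
    k = items.shape[1]                          # number of items in the subscale
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# e.g., alpha for the cognitive-trust subscale (hypothetical item names)
# print(cronbach_alpha(pilot_df[["cog1", "cog2", "cog3", "cog4"]]))
```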
The survey was administered online via a secure university platform. Participants were informed about the purpose of the study, assured of confidentiality, and given instructions to answer honestly based on recent interactions with ChatGPT or similar AI tools. Completion time averaged 15–20 minutes, and participants received modest incentives for participation.
To capture behavioral trust in real-time interactions, participants completed three task types using ChatGPT:
Information Retrieval: Students asked ChatGPT to summarize complex academic texts.
Creative Ideation: Students requested ChatGPT to generate ideas for writing or project proposals.
Problem-Solving: Students engaged in analytical problem-solving with ChatGPT assistance.
Each task varied in complexity and ambiguity, allowing examination of how task context moderates the relationship between cognitive/affective trust and behavioral reliance. Behavioral trust was operationalized as:
Frequency of using ChatGPT outputs without verification
Degree of adaptation of AI suggestions in final responses
Task completion time relative to independent effort
Tasks were counterbalanced to control for order effects, and participants were debriefed about AI limitations after completion.
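The study does not specify how the three behavioral indicators were combined; under that caveat, one simple scoring approach, sketched below, is to standardize each indicator (reverse-coding completion time so that faster-than-independent completion signals greater reliance) and average them into a composite behavioral-trust index. All variable names are hypothetical.

```python
# Illustrative sketch: a composite behavioral-trust index from the three
# indicators described above. Column names are hypothetical placeholders.
import pandas as pd

def _z(s: pd.Series) -> pd.Series:
    """Standardize a column (mean 0, SD 1)."""
    return (s - s.mean()) / s.std(ddof=1)

def behavioral_trust_index(df: pd.DataFrame) -> pd.Series:
    unverified = _z(df["unverified_use_freq"])      # outputs used without verification
    adaptation = _z(df["adaptation_degree"])        # extent AI suggestions were adopted
    speedup = _z(-df["relative_completion_time"])   # reverse-coded completion time
    return (unverified + adaptation + speedup) / 3  # equal-weight composite
```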
To explore students’ subjective experiences and social considerations, 30 participants were interviewed using a semi-structured protocol. Key areas included:
Perceptions of ChatGPT reliability and competence
Emotional comfort and confidence in using AI
Influence of peers, instructors, and societal narratives
Reflections on specific task experiences and trust calibration
Interviews lasted 30–45 minutes, were audio-recorded with consent, and transcribed verbatim. Thematic coding was applied to identify recurrent patterns and illustrative narratives, complementing quantitative findings.
Survey and experimental data were analyzed using SPSS and AMOS (an illustrative analysis sketch follows this list). Analyses included:
Descriptive statistics for sample characteristics and variable distributions
Correlation and regression analyses to test relationships between user attributes, task context, social cognition, and trust dimensions
Structural Equation Modeling (SEM) to examine direct and indirect pathways, including mediating and moderating effects
ANOVA to assess differences in trust across task types and academic disciplines
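As a companion to the list above, the following minimal Python sketch (using statsmodels and scipy rather than SPSS) illustrates the regression, moderation, and ANOVA steps; the dataset file and column names are hypothetical placeholders for the prepared survey and task data.

```python
# Illustrative sketch of the survey-side analyses (the study itself used SPSS/AMOS).
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

data = pd.read_csv("trust_study.csv")  # hypothetical prepared dataset

# Regression: user attributes predicting cognitive trust (cf. H1a, H1b)
m1 = smf.ols("cognitive_trust ~ ai_experience + digital_literacy", data=data).fit()
print(m1.summary())

# Moderation: task complexity x cognitive trust on behavioral trust (cf. H2b)
m2 = smf.ols("behavioral_trust ~ cognitive_trust * task_complexity", data=data).fit()
print(m2.params["cognitive_trust:task_complexity"])  # interaction coefficient

# One-way ANOVA: does cognitive trust differ across the three task types? (cf. H2a)
groups = [g["cognitive_trust"].values for _, g in data.groupby("task_type")]
f_stat, p_value = stats.f_oneway(*groups)
print(f_stat, p_value)
```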
Interview transcripts were analyzed using thematic analysis, following Braun and Clarke’s (2006) six-step procedure:
Familiarization with data
Generating initial codes
Searching for themes
Reviewing themes
Defining and naming themes
Producing a narrative report
Themes related to trust formation, task context perception, and social influence were integrated with quantitative results to provide a triangulated understanding of trust dynamics.
Quantitative and qualitative findings were synthesized using a convergent design, comparing statistical patterns with participant narratives. This approach allowed identification of:
Consistencies and divergences between reported attitudes and observed behaviors
Context-specific mechanisms shaping cognitive, affective, and behavioral trust
Practical insights for AI system design and educational interventions
Results
A total of 320 university students participated in the survey, with 52% male and 48% female, and an age range of 18–26 years (M = 20.8, SD = 2.1). Approximately 55% were STEM majors, and 45% were non-STEM majors. Regarding prior AI experience, 60% reported moderate to high familiarity with ChatGPT or other LLM tools, while 40% had limited experience. Digital literacy scores averaged M = 4.1 (SD = 0.6) on a 5-point scale, indicating a generally competent participant pool.
Trust measures demonstrated variability across dimensions: cognitive trust averaged 3.8 (SD = 0.7), affective trust 3.5 (SD = 0.8), and behavioral trust 3.6 (SD = 0.9). Correlation analyses revealed positive relationships between AI experience, digital literacy, and all trust dimensions (p < 0.01), indicating that greater prior exposure and digital skill are associated with higher trust across dimensions.
Regression analyses indicated that prior AI experience significantly predicted cognitive trust (β = 0.34, p < 0.001) and behavioral trust (β = 0.21, p < 0.01). Digital literacy was positively associated with cognitive trust (β = 0.29, p < 0.001) and behavioral trust (β = 0.25, p < 0.01). STEM students exhibited higher cognitive trust (M = 4.0) than non-STEM students (M = 3.6; t(318) = 4.12, p < 0.001). Personality traits showed that openness predicted affective trust (β = 0.22, p < 0.01), whereas risk aversion negatively predicted both affective (β = -0.18, p < 0.05) and behavioral trust (β = -0.16, p < 0.05). These findings support hypotheses H1a–H1d, indicating that user attributes shape trust formation across multiple dimensions.
ANOVA results revealed significant differences in trust across task types:
Information retrieval tasks elicited the highest cognitive trust (M = 3.9), reflecting confidence in ChatGPT’s factual accuracy.
Creative ideation tasks generated higher affective trust (M = 3.7), consistent with the emotional appeal of generative suggestions.
Problem-solving tasks displayed intermediate trust levels (cognitive M = 3.7, affective M = 3.5), with greater variability reflecting task complexity.
Task complexity moderated the relationship between cognitive trust and behavioral trust: in high-complexity tasks, reliance decreased unless participants possessed higher digital literacy or prior AI experience (H2b supported).
Regression and mediation analyses confirmed that peer and faculty endorsement positively influenced both affective (β = 0.28, p < 0.001) and behavioral trust (β = 0.23, p < 0.01). Mediation analyses using SEM indicated that social cognition partially mediated the relationship between cognitive trust and behavioral trust (indirect effect = 0.12, p < 0.05), supporting H3b. Negative social narratives—such as awareness of AI errors or ethical concerns—reduced affective trust (β = -0.19, p < 0.05) and weakened the translation of cognitive trust into behavioral reliance (H3c confirmed).
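To clarify the logic behind the reported indirect effect, the sketch below estimates a product-of-coefficients mediation with a percentile bootstrap; the study itself estimated mediation within the SEM in AMOS, so this is only an illustrative approximation with hypothetical variable names.

```python
# Illustrative sketch of the indirect-effect (mediation) logic: social cognition
# as mediator between cognitive trust and behavioral trust. Not the authors' code.
import numpy as np
import statsmodels.formula.api as smf

def indirect_effect(df) -> float:
    # a-path: cognitive trust -> social cognition
    a = smf.ols("social_cognition ~ cognitive_trust", data=df).fit().params["cognitive_trust"]
    # b-path: social cognition -> behavioral trust, controlling for cognitive trust
    b = smf.ols("behavioral_trust ~ social_cognition + cognitive_trust",
                data=df).fit().params["social_cognition"]
    return a * b  # product-of-coefficients estimate

def bootstrap_ci(df, n_boot: int = 5000, seed: int = 1):
    """95% percentile bootstrap CI for the indirect effect."""
    rng = np.random.default_rng(seed)
    estimates = [
        indirect_effect(df.sample(frac=1.0, replace=True,
                                  random_state=int(rng.integers(0, 2**31 - 1))))
        for _ in range(n_boot)
    ]
    return np.percentile(estimates, [2.5, 97.5])
```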
A comprehensive SEM model integrating user attributes, task context, social cognition, and trust dimensions demonstrated good model fit (χ²/df = 2.01, CFI = 0.95, RMSEA = 0.048). Key pathways included:
User Attributes → Cognitive Trust → Behavioral Trust (significant, standardized path coefficient = 0.31, p < 0.001)
User Attributes → Affective Trust → Behavioral Trust (significant, coefficient = 0.27, p < 0.001)
Social Cognition exerted both direct effects on trust dimensions and indirect effects on behavioral reliance, confirming its mediating and moderating role.
Task Context moderated trust-behavior pathways, particularly in creative and problem-solving tasks.
These results support the integrated conceptual model proposed in the theoretical framework.
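For readers working outside AMOS, a structurally comparable path model could be specified as sketched below, assuming the open-source semopy package and respondent-level composite scores; the specification and variable names are illustrative and do not reproduce the study's full measurement model.

```python
# Illustrative sketch: a path model mirroring the key pathways reported above,
# specified with the semopy package (an assumption; the study used AMOS).
import pandas as pd
import semopy

data = pd.read_csv("trust_study.csv")  # hypothetical respondent-level composites

model_desc = """
cognitive_trust ~ ai_experience + digital_literacy
affective_trust ~ openness + risk_aversion
social_cognition ~ cognitive_trust + affective_trust
behavioral_trust ~ cognitive_trust + affective_trust + social_cognition
"""

model = semopy.Model(model_desc)
model.fit(data)                    # estimate path coefficients
print(model.inspect())             # parameter estimates and significance
print(semopy.calc_stats(model))    # fit indices (chi-square, CFI, RMSEA, ...)
```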
Semi-structured interviews with 30 students provided deeper insight into the mechanisms behind trust formation:
Cognitive Trust Drivers
Participants emphasized accuracy and relevance of outputs: “I trust ChatGPT when it summarizes articles correctly, but I double-check numbers and citations.”
Familiarity with AI algorithms enhanced evaluative confidence: “Knowing how AI predicts text makes me more confident in its reasoning.”
Affective Trust Drivers
Emotional comfort was tied to user-friendly interactions: “The conversational style makes me feel comfortable asking questions, even if I’m unsure.”
Positive experiences reinforced confidence and reduced anxiety during academic tasks.
Behavioral Trust Drivers
Reliance varied by task type: students used ChatGPT extensively for brainstorming but cautiously for graded assignments.
Social context influenced reliance: “If my classmates recommend it, I feel safer using it for essays.”
Role of Social Cognition
Peer endorsement and faculty guidance shaped trust calibration: students followed both positive and cautionary cues.
Observing errors or hearing negative stories reduced affective trust, sometimes prompting manual verification of all outputs.
Task-Specific Nuances
Creative tasks encouraged experimentation and emotional engagement.
Analytical tasks elicited critical evaluation, often blending AI outputs with independent reasoning.
By combining quantitative and qualitative findings, several overarching patterns emerge:
Multidimensional trust is dynamic: cognitive, affective, and behavioral trust interact but are influenced differently by user attributes, task contexts, and social cognition.
User attributes provide a foundational baseline: prior AI experience, digital literacy, and disciplinary background consistently predict trust formation.
Task context shapes trust expression: cognitive trust dominates in factual tasks, affective trust in creative tasks, and task complexity modulates behavioral reliance.
Social cognition amplifies or dampens trust: peer and faculty endorsement strengthens trust, whereas negative narratives reduce it, highlighting trust as a socially co-constructed phenomenon.
Behavioral trust reflects calibrated reliance: students demonstrate strategic use of ChatGPT, balancing confidence in AI with critical evaluation, particularly in high-stakes academic contexts.
These findings provide empirical support for the integrated conceptual model, demonstrating that trust in ChatGPT among university students is a product of interacting individual, contextual, and social factors, rather than a single-dimension judgment.
Discussion
The present study provides comprehensive insights into the factors shaping university students’ trust in ChatGPT, revealing the intricate interplay between user attributes, task context, and social cognition. First, the results confirm that user attributes, such as prior AI experience, digital literacy, academic background, and personality traits, are significant predictors of both cognitive and affective trust. Students with greater familiarity and confidence in digital tools are more likely to evaluate ChatGPT outputs critically, exhibit comfort in interacting with the system, and rely on it effectively for academic tasks. These findings align with prior research in human-computer interaction (McKnight et al., 2011; Ng, 2012), emphasizing that trust formation is partially grounded in individual capabilities and experiences.
Second, the study highlights the context-dependent nature of trust. Cognitive trust is highest in factual or information retrieval tasks, whereas affective trust is more salient in creative or ideation tasks. Task complexity moderates the translation of cognitive and affective trust into behavioral reliance, suggesting that students calibrate their reliance strategically, depending on perceived risk, ambiguity, and potential consequences. This nuanced understanding supports the growing literature on context-sensitive trust in AI systems (Hoff & Bashir, 2015) and underscores the need to consider task specificity in educational and professional applications of ChatGPT.
Third, social cognition emerges as a powerful influence on trust formation. Peer and faculty endorsements enhance both affective and behavioral trust, whereas exposure to negative narratives or observed AI errors diminishes affective trust and can disrupt reliance behaviors. These results are consistent with social cognitive theory (Bandura, 1986) and prior studies on social influence in technology adoption, confirming that trust is not solely an individual evaluation but is co-constructed within social environments.
Finally, the integration of quantitative and qualitative findings reveals that behavioral trust reflects strategic reliance rather than blind acceptance. Students selectively use ChatGPT outputs, combining AI assistance with independent reasoning and verification, demonstrating calibrated trust that balances confidence and caution. This insight challenges simplistic assumptions that AI users either fully trust or distrust systems, highlighting the complexity of trust in real-world educational contexts.
The study contributes to the literature on human-AI trust in several ways. First, it extends existing trust models to large language models in educational settings, emphasizing multidimensionality and context sensitivity. Cognitive, affective, and behavioral trust dimensions are shown to interact dynamically, influenced by user attributes, task contexts, and social factors, supporting a more integrated theoretical framework.
Second, the findings demonstrate the mediating and moderating roles of social cognition and task context, providing empirical support for conceptual models that incorporate social and contextual variables into trust formation. This contributes to the refinement of human-computer trust theory, emphasizing that trust is both socially and contextually situated rather than purely individualistic.
Third, the study underscores the importance of mixed-methods research in trust investigations. Quantitative analyses elucidate general patterns and statistical relationships, while qualitative insights reveal subjective reasoning, emotional responses, and task-specific strategies. This methodological integration enhances explanatory depth and offers a model for future research exploring trust in complex AI-mediated environments.
The findings carry important implications for educators, AI developers, and policy makers.
Educational Design: Understanding the role of task context and user attributes can inform instructional strategies. For instance, AI-assisted learning activities can be tailored to balance cognitive and affective trust, scaffolding student interactions based on prior experience and task complexity.
System Development: Developers can enhance ChatGPT’s usability and trustworthiness by improving transparency, providing confidence scores or rationales for outputs, and designing interfaces that adapt to user expertise. Emotional design elements—such as friendly conversational styles—may bolster affective trust, particularly in creative tasks.
Social and Institutional Interventions: Universities and educators can influence trust formation through guidance, endorsements, and peer-led workshops. Positive social reinforcement encourages calibrated reliance, while awareness of AI limitations prevents overreliance.
Ethical and Responsible AI Use: Understanding the dynamics of trust allows for responsible deployment of ChatGPT, ensuring students rely on outputs judiciously, verify information, and develop critical thinking skills alongside AI assistance.
Despite its contributions, this study has several limitations.
Sample Constraints: Participants were recruited from a single university, limiting generalizability across cultural or institutional contexts. Future studies should include more diverse populations to assess cross-cultural variations in trust formation.
Task Scope: While the study incorporated multiple task types, other academic or professional scenarios—such as collaborative group work or high-stakes assessments—were not examined. Trust dynamics may differ in these contexts.
Self-Report Bias: Survey measures and some behavioral indicators rely on self-reported data, which may be influenced by social desirability or recall biases. Combining system-logged interaction data could provide more objective behavioral metrics.
Temporal Considerations: Trust is dynamic and evolves with repeated interactions. The cross-sectional nature of this study does not capture longitudinal changes in trust over time. Longitudinal designs would enhance understanding of trust development and decay.
AI System Limitations: Findings are specific to ChatGPT and may not generalize to other AI systems with different architectures, interfaces, or content domains.
The study suggests several promising avenues for further research:
Longitudinal investigations to track trust evolution and adaptation over multiple semesters or repeated AI interactions.
Cross-cultural and interdisciplinary studies to explore how societal norms, academic cultures, and disciplinary differences influence trust.
Advanced behavioral metrics using AI usage logs, eye-tracking, or response time to complement self-report measures.
Design interventions aimed at enhancing trust calibration, including transparency features, feedback mechanisms, and guided scaffolding.
Conclusion and Future Research
This study systematically examined the factors constituting university students’ trust in ChatGPT, integrating user attributes, task contexts, and social cognition into a comprehensive framework. Three key insights emerge:
First, user attributes play a foundational role in trust formation. Prior AI experience, digital literacy, and academic background significantly influence cognitive and affective trust, while personality traits such as openness and risk aversion modulate affective and behavioral trust. Students with higher digital literacy and prior AI exposure not only evaluate outputs more critically but also engage more confidently and consistently with ChatGPT, demonstrating that individual capabilities shape both perception and reliance.
Second, task context strongly moderates trust dynamics. Cognitive trust dominates in factual or information-focused tasks, whereas affective trust becomes more salient in creative or ideation activities. Task complexity and ambiguity influence the translation of trust into behavioral reliance, revealing that students strategically calibrate their engagement with AI based on perceived risk and task demands. This emphasizes that trust is context-sensitive, rather than uniform across interactions.
Third, social cognition—including peer influence, faculty endorsement, and broader societal narratives—significantly impacts trust formation. Positive social reinforcement strengthens affective and behavioral trust, while negative narratives or observed errors diminish it. This finding underscores that trust is socially co-constructed, highlighting the interplay between individual judgment and collective perceptions in shaping reliance on AI.
Finally, the study confirms that behavioral trust reflects calibrated reliance rather than blind acceptance. Students selectively adopt ChatGPT outputs, combining AI assistance with independent verification, demonstrating a sophisticated understanding of AI’s capabilities and limitations. This balance between confidence and caution is critical for effective and responsible use of AI in educational contexts.
The findings provide actionable guidance for educators, AI developers, and policy makers:
Educational Strategies:
Design AI-assisted learning activities that balance cognitive and affective trust.
Scaffold interactions according to students’ prior experience and digital literacy levels.
Encourage critical evaluation and independent verification of AI outputs to foster calibrated trust.
AI System Development:
Enhance transparency and explainability of outputs to support cognitive trust.
Incorporate user-friendly, conversational interfaces to strengthen affective trust.
Provide adaptive features that respond to user expertise and task context, supporting strategic behavioral trust.
Social and Institutional Interventions:
Utilize peer and faculty endorsements to reinforce positive trust behaviors.
Educate students about AI limitations and ethical considerations, preventing overreliance or misuse.
Create communities of practice that promote responsible AI adoption while acknowledging uncertainties.
Responsible AI Integration:
Recognize that trust is dynamic and multidimensional, requiring ongoing monitoring and guidance.
Encourage AI literacy programs to develop students’ evaluative skills and resilience in high-stakes scenarios.
The study contributes to human-AI trust literature in multiple ways:
Theoretical Contribution: Integrates cognitive, affective, and behavioral trust dimensions with user attributes, task contexts, and social cognition, providing a holistic model for understanding AI trust in educational settings.
Methodological Contribution: Demonstrates the value of a mixed-methods approach, combining surveys, experimental tasks, and interviews to capture both measurable patterns and nuanced user experiences.
Practical Contribution: Offers actionable insights for AI design, pedagogical strategies, and institutional policy, fostering responsible and effective adoption of ChatGPT among university students.
Despite the study’s contributions, several avenues remain for further investigation:
Longitudinal Studies:
Explore how trust evolves over time with repeated interactions and increasing AI sophistication.
Examine whether students’ reliance strategies shift as they gain experience or encounter new task types.
Cross-Cultural and Interdisciplinary Research:
Investigate how cultural norms, disciplinary differences, and institutional policies shape trust formation.
Compare patterns across universities, countries, and educational systems to identify universal versus context-specific dynamics.
Advanced Behavioral Metrics:
Utilize system-logged interaction data, response times, and eye-tracking to capture behavioral trust objectively.
Complement self-report measures to refine understanding of reliance and decision-making strategies.
Design and Intervention Studies:
Test interface enhancements, transparency features, and adaptive guidance mechanisms to optimize trust calibration.
Explore the impact of peer-led workshops and AI literacy programs on affective and cognitive trust dimensions.
Ethical and Societal Considerations:
Investigate how perceptions of AI ethics, fairness, and bias influence trust and usage behaviors.
Develop guidelines for responsible integration of LLMs in academic and professional contexts.
In conclusion, this study illuminates the complex, multidimensional, and context-dependent nature of trust in ChatGPT among university students. Trust is shaped not only by individual attributes but also by task-specific demands and social influences, resulting in nuanced behavioral reliance that balances confidence and caution. By integrating cognitive, affective, and behavioral perspectives with empirical evidence, this research provides a robust foundation for both theoretical advancement and practical application.
As AI systems like ChatGPT continue to permeate educational, professional, and social domains, fostering calibrated, informed, and socially aware trust will be essential to maximizing their benefits while mitigating risks. Future research and practice should prioritize dynamic, context-sensitive, and ethically grounded strategies, ensuring that AI serves as a reliable, empowering, and responsible partner in human learning and decision-making.
References
Bandura, A. (1986). Social foundations of thought and action: A social cognitive theory. Englewood Cliffs, NJ: Prentice Hall.
Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77–101. https://doi.org/10.1191/1478088706qp063oa
Hoff, K. A., & Bashir, M. (2015). Trust in automation: Integrating empirical evidence on factors that influence trust. Human Factors, 57(3), 407–434. https://doi.org/10.1177/0018720814547570
McKnight, D. H., Carter, M., Thatcher, J. B., & Clay, P. F. (2011). Trust in a specific technology: An investigation of its components and measures. ACM Transactions on Management Information Systems, 2(2), 1–25. https://doi.org/10.1145/1985347.1985353
Ng, W. (2012). Can we teach digital natives digital literacy? Computers & Education, 59(3), 1065–1078. https://doi.org/10.1016/j.compedu.2012.04.016
Parasuraman, R., & Riley, V. (1997). Humans and automation: Use, misuse, disuse, abuse. Human Factors, 39(2), 230–253. https://doi.org/10.1518/001872097778543886
Parasuraman, R., Sheridan, T. B., & Wickens, C. D. (2008). Situation awareness, mental workload, and trust in automation: Viable, empirically supported cognitive engineering constructs. Journal of Cognitive Engineering and Decision Making, 2(2), 140–160. https://doi.org/10.1518/155534308X284417
van den Broek, E., & Schouteten, R. (2019). Determinants of trust in AI-based systems in educational settings. Computers & Education, 140, 103609. https://doi.org/10.1016/j.compedu.2019.103609
Xu, H., Teo, H. H., Tan, B. C. Y., & Agarwal, R. (2010). The role of push-pull technology in privacy calculus: The case of location-based services. Journal of Management Information Systems, 26(3), 135–174. https://doi.org/10.2753/MIS0742-1222260305
Zhang, B., & Dafoe, A. (2019). Artificial intelligence: American attitudes and trends. Center for the Governance of AI, Future of Humanity Institute, University of Oxford.