The societal integration of generative artificial intelligence (GenAI), particularly in education, has spurred dynamic public discourse. Much of this conversation is filtered through media narratives that shape perceptions, policies, and practices. This study investigates how U.S. news media construct narratives about ChatGPT in the context of higher education, employing Agenda Setting Theory, Latent Dirichlet Allocation (LDA) topic modeling, and sentiment analysis. By analyzing 198 articles published between November 2022 and October 2024, the study identifies six major themes and examines how each shapes public discourse on GenAI's efficacy. We find that while ChatGPT is largely framed positively in areas like skill development, pedagogy, and policy, the media express concern over job displacement and the integrity of college admissions. This paper contributes to understanding how media shape the societal role of GenAI in education and offers practical recommendations for stakeholders.
Generative AI has rapidly evolved from a niche research area to a pervasive tool impacting numerous industries. Nowhere is this impact more pronounced than in higher education. ChatGPT, launched by OpenAI in late 2022, emerged as the most visible symbol of GenAI’s influence on learning, teaching, and academic assessment. As students, faculty, and institutions navigate the implications of using such tools, media representations of ChatGPT have played a crucial role in shaping public opinion and policy decisions.
The media, through agenda-setting and framing, act as gatekeepers of public discourse, determining which issues are salient and how they are understood. Thus, analyzing media coverage provides valuable insight into societal acceptance, resistance, or ambivalence toward emerging technologies like GenAI.
This study investigates how U.S. news outlets constructed the narrative around ChatGPT in higher education, what sentiments underpinned these narratives, and how these constructions might influence broader acceptance and regulation of GenAI.
Agenda Setting Theory posits that the media do not tell people what to think, but rather what to think about. By emphasizing certain issues over others, the media influence the perceived importance of these topics in public discourse. In the context of AI, this theory provides a useful lens for analyzing how different aspects of GenAI adoption in education are prioritized, celebrated, or critiqued.
Second-level agenda setting, or framing, extends this idea further by examining how the media describe the issue—through tone, vocabulary, and associated imagery. This distinction is critical in understanding not only which GenAI topics are most discussed, but how they are emotionally charged and ideologically situated.
We collected 198 articles from major U.S. news sources including The New York Times, The Washington Post, CNN, NPR, Wired, and EdTech Magazine, among others. Articles were selected based on relevance to ChatGPT and higher education, published between November 2022 and October 2024. Sources included both national and education-focused outlets to ensure diverse perspectives.
We applied Latent Dirichlet Allocation (LDA) to extract major themes across the corpus. The LDA model was tuned using coherence scores to identify six dominant topics.
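To make the topic-extraction step concrete, the sketch below implements LDA via collapsed Gibbs sampling on a toy corpus. This is an illustrative, from-scratch version rather than the study's tuned pipeline (which would typically use a library such as gensim and select the number of topics by maximizing coherence scores); the hyperparameters, toy documents, and `lda_gibbs` helper are assumptions for demonstration only.

```python
import random
from collections import defaultdict

def lda_gibbs(docs, K, alpha=0.1, beta=0.01, iters=200, seed=0):
    """Collapsed Gibbs sampling for LDA over tokenized documents.

    Returns per-document topic counts (n_dk) and per-topic word
    counts (n_kw), from which topic proportions can be estimated.
    """
    random.seed(seed)
    vocab = {w for doc in docs for w in doc}
    V = len(vocab)
    n_dk = [[0] * K for _ in docs]               # doc-topic counts
    n_kw = [defaultdict(int) for _ in range(K)]  # topic-word counts
    n_k = [0] * K                                # tokens per topic
    z = []                                       # topic of each token
    # Random initialization of topic assignments.
    for d, doc in enumerate(docs):
        zd = []
        for w in doc:
            k = random.randrange(K)
            zd.append(k)
            n_dk[d][k] += 1
            n_kw[k][w] += 1
            n_k[k] += 1
        z.append(zd)
    # Gibbs sweeps: resample each token's topic conditioned on the rest.
    for _ in range(iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]
                n_dk[d][k] -= 1; n_kw[k][w] -= 1; n_k[k] -= 1
                weights = [
                    (n_dk[d][t] + alpha) * (n_kw[t][w] + beta) / (n_k[t] + V * beta)
                    for t in range(K)
                ]
                k = random.choices(range(K), weights=weights)[0]
                z[d][i] = k
                n_dk[d][k] += 1; n_kw[k][w] += 1; n_k[k] += 1
    return n_dk, n_kw

# Toy documents loosely echoing two of the study's themes.
docs = [
    ["chatgpt", "lesson", "plans", "quiz", "grading"],
    ["policy", "integrity", "syllabus", "assessment"],
    ["chatgpt", "policy", "assessment", "grading"],
]
doc_topics, topic_words = lda_gibbs(docs, K=2)
```

In practice, one would fit models for a range of K values and choose the K with the highest topic coherence, as the study did in arriving at six topics.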
Using a hybrid sentiment analysis approach (lexicon-based and machine learning), we assessed the polarity of each article. Sentiments were coded as positive, negative, or neutral, with contextual adjustments made to account for sarcasm and double meanings common in media texts.
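The lexicon-based half of such a pipeline can be sketched minimally as follows. This toy scorer with naive negation flipping is an assumption for illustration; the study's hybrid approach additionally layers a machine-learning classifier and contextual adjustments, and the lexicon entries and `score` helper here are invented examples, not the actual resources used.

```python
# Tiny illustrative polarity lexicon; a real system would use a
# published resource (e.g., VADER or SentiWordNet) instead.
LEXICON = {"innovative": 1.0, "helpful": 1.0, "benefit": 1.0,
           "threat": -1.0, "concern": -1.0, "plagiarism": -1.0}
NEGATORS = {"not", "no", "never"}

def score(text, neutral_band=0.0):
    """Sum lexicon values over tokens, flipping polarity after a negator,
    and map the total to a positive/negative/neutral label."""
    tokens = text.lower().split()
    total = 0.0
    for i, tok in enumerate(tokens):
        val = LEXICON.get(tok.strip(".,!?"), 0.0)
        if val and i > 0 and tokens[i - 1] in NEGATORS:
            val = -val  # "not helpful" counts as negative
        total += val
    if total > neutral_band:
        return "positive"
    if total < -neutral_band:
        return "negative"
    return "neutral"
```

Articles whose aggregate score falls inside the neutral band are labeled neutral; the machine-learning component then adjudicates cases, such as sarcasm, where surface lexicon cues mislead.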
The LDA model identified the following six dominant topics in media narratives:
Pedagogy was the most frequently discussed topic, representing 27% of the corpus. Articles here framed ChatGPT as a classroom assistant, helping instructors generate lesson plans, quiz questions, and grading rubrics. Educators who embraced the tool were often depicted as innovators.
Sample Narrative:
"Professors at top universities are experimenting with ChatGPT to co-design curricula, reflecting a pedagogical shift towards tech-enhanced instruction." — EdTech Magazine
Comprising 19% of the corpus, the second topic, institutional policy, focused on how institutions are rethinking academic integrity policies, revising syllabi, and redefining student assessment models in light of ChatGPT's abilities.
Sentiment: Generally positive but cautious, especially regarding plagiarism concerns.
This topic (17%) examined ChatGPT's role in fostering collaborative environments, including its use in group projects, brainstorming sessions, and interdisciplinary learning modules.
Narrative Arc: AI as a peer, rather than just a tool.
Making up 15% of the discourse, this topic focused on how ChatGPT aids students in acquiring writing, coding, and problem-solving skills. Several stories highlighted first-generation college students and those with learning disabilities benefiting from tailored AI assistance.
Sentiment: Strongly positive.
This topic accounted for 12% of the articles and carried a largely negative tone. Media expressed concern that students graduating into the workforce might find fewer opportunities in fields like content writing, tutoring, or customer service, due to ChatGPT's automation capabilities.
Media Framing: AI as a threat to human labor, especially at entry levels.
The remaining 10% revolved around ethical dilemmas, particularly regarding the use of ChatGPT in college application essays and standardized testing preparation.
Quote:
"If a machine can write a compelling personal statement, how do we assess authenticity?" — The Washington Post
Sentiment: Negative, with moral overtones.
Positive Articles: 62%
Neutral Articles: 23%
Negative Articles: 15%
The overall tone of media coverage was optimistic but not without caveats. Stories celebrating ChatGPT's educational potential outnumbered those warning against its misuse, yet the latter often spread more widely on social media platforms.
Media narratives construct a dual image of ChatGPT—as both a revolutionary educational tool and a source of disruption. The tension lies in its utility versus its unintended consequences. This ambivalence echoes broader debates about technological determinism versus human agency.
The overwhelmingly positive framing of ChatGPT suggests growing media support for AI integration in academia. However, the narratives also highlight the need for guardrails—ethical guidelines, transparent policies, and AI literacy among educators and students.
Several institutions were prompted to revise policies after media exposés highlighted misuse or ambiguity. This underscores the reciprocal relationship between media narratives and institutional decision-making.
Promote AI literacy across educational institutions.
Develop clear ethical guidelines for GenAI use in education.
Encourage transparent use of GenAI in the classroom.
Provide training on how to critically assess AI-generated content.
Enhance traceability features to ensure transparency.
Partner with academic institutions to co-design responsible usage protocols.
As generative AI continues to evolve, media narratives will remain a crucial force in shaping how such technologies are understood and implemented. The largely positive framing of ChatGPT in higher education reflects optimism about AI’s potential, tempered by justified concerns about ethics and labor disruption. By analyzing these narratives, this study offers a nuanced understanding of how media constructions influence societal acceptance and guides stakeholders in navigating this complex terrain.
[Include list of news articles, academic papers on Agenda Setting Theory, and technical resources on LDA and sentiment analysis used in the study.]