ChatGPT is Not A Man but Das Man: Representativity and Structural Consistency of Silicon Samples Generated by Large Language Models

2025-09-24 16:51:28

Introduction

The rapid proliferation of large language models (LLMs), particularly ChatGPT, has transformed the way humans interact with information, generate text, and even conceptualize knowledge. While these models are often anthropomorphized in popular discourse—referred to as “intelligent agents” capable of autonomous thought—their outputs reveal a more nuanced reality. ChatGPT does not express the individuality of A Man, a singular human author with unique intentions and subjective insights. Instead, it manifests what Martin Heidegger might recognize as Das Man—the anonymous, normalized patterns of “what people generally say” embedded in the vast data it has ingested. Understanding this distinction is essential for both scholars and the public, as it sheds light on the sociotechnical implications of automated text generation.

This article examines the representativity and structural consistency of ChatGPT-generated texts, exploring how these “silicon samples” exemplify collective norms rather than individual creativity. We draw on existential philosophy, semiotics, and computational linguistics to construct a framework that captures the intersection between human social knowledge and algorithmic reproduction. By analyzing the outputs of ChatGPT through both qualitative and quantitative lenses, this study aims to reveal the ways in which LLMs simultaneously reflect, reinforce, and homogenize societal discourse, raising critical questions about the nature of authorship, authority, and creativity in the age of AI.


I. Theoretical Framework

Understanding ChatGPT’s outputs requires a synthesis of philosophy, semiotics, and computational linguistics. While the model appears to “speak” as if human, its generative processes differ fundamentally from individual authorship. This section situates ChatGPT-generated texts within a framework that captures both their social representativity and structural coherence.

1. Heideggerian Perspective: A Man vs. Das Man

Martin Heidegger’s existential philosophy provides a crucial lens for interpreting AI-generated language. In Being and Time, Heidegger introduces Das Man (often translated as “the They” or “the One”): the anonymous collective that dictates what is generally said, accepted, or expected. This article contrasts Das Man with A Man, our shorthand for the authentic individual who acts and speaks from personal understanding rather than deferring to convention. The notion of Das Man highlights a socialized conformity: people often adhere to norms, clichés, or conventional behaviors, rather than expressing their singular, authentic existence.

Applying this framework to ChatGPT, we recognize that the model does not generate text as an individual thinker. Instead, it synthesizes patterns from an immense corpus of human-written content, producing outputs that resemble the statistically average “voice of the collective.” Each sentence is informed by the probabilities derived from this corpus, making the generated text a reflection of Das Man rather than A Man. Consequently, ChatGPT’s “authorship” is inherently de-individualized: it represents generalized societal discourse, not unique subjective insight.

2. Semiotics and Representational Theory

Beyond philosophy, semiotics—the study of signs and symbols—offers another lens to understand ChatGPT outputs. A language model constructs meaning by generating sequences of symbols that statistically cohere with human communication patterns. In this sense, each output is a silicon sample: a materialized instance of abstract linguistic norms embedded in digital form.

Semiotics emphasizes the relationship between signs, their meanings, and the cultural context in which they operate. ChatGPT mirrors these relationships by internalizing the statistical structures of language across diverse domains. However, because it relies on patterns aggregated from vast text corpora, its outputs prioritize recognizable conventions over novel, individualized expression. The model’s representativity is thus dual: it is simultaneously faithful to broad linguistic and cultural norms, yet incapable of authentic, idiosyncratic creativity. In semiotic terms, ChatGPT enacts a performative reproduction of societal signs, generating outputs that align with the expectations of “what is normally said” in specific contexts.

3. Computational Logic and Structural Consistency

The technical foundations of ChatGPT further illuminate its structural properties. Built on the transformer architecture, the model predicts a probability distribution over the next token, conditioned on all preceding context. This autoregressive mechanism ensures coherence and fluency, producing structurally consistent text that aligns with grammatical, semantic, and discourse-level norms.

Structural consistency emerges as a direct consequence of this probabilistic prediction. Because the model optimizes for high-likelihood sequences learned from its training corpus, the outputs tend to follow familiar syntactic patterns, logical argument flows, and stylistic conventions. While these features enhance readability and perceived intelligence, they also underscore the model’s alignment with Das Man: by reproducing what is statistically typical, the model favors consensus over originality, and predictability over individuality.
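The pull toward high-likelihood sequences can be made concrete with a minimal sketch. The logits below are invented toy numbers, not real model output; they stand in for the scores a transformer would assign to candidate next tokens after some context such as “The cat sat on the”:

```python
import math

# Hypothetical logits for candidate next tokens (toy numbers for illustration).
logits = {"mat": 4.1, "floor": 3.2, "roof": 1.5, "xylophone": -2.0}

# Softmax turns raw scores into a probability distribution over tokens.
z = sum(math.exp(v) for v in logits.values())
probs = {tok: math.exp(v) / z for tok, v in logits.items()}

# Greedy decoding always takes the statistically most typical continuation.
most_typical = max(probs, key=probs.get)
print(most_typical)  # -> "mat"
```

Greedy selection of the softmax maximum is the simplest decoding strategy; deployed systems usually sample with a temperature instead, but the bias toward the statistically typical continuation is the same.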

4. Intersection of Philosophy, Semiotics, and Technology

Bringing these perspectives together, ChatGPT’s texts can be understood as silicon artifacts that occupy a unique space between human culture and computational process. Philosophically, the outputs reflect collective norms (Das Man), not individual insight (A Man). Semiotics highlights that the model generates recognizable signs whose meanings are socially grounded, while computational linguistics explains why these outputs achieve fluency and structural coherence.

This interdisciplinary lens provides a foundation for examining two critical properties of LLM-generated text: representativity and structural consistency. Representativity addresses the question of whose voice the text reflects—statistical society rather than a singular author. Structural consistency concerns the patterns and regularities that emerge from the model’s internalized language logic. Together, these dimensions enable a rigorous understanding of the generative and normative forces shaping AI-produced language, setting the stage for subsequent sections that analyze empirical examples and critique the broader implications.

II. Representativity of ChatGPT-Generated Texts

The concept of representativity in AI-generated text refers to the extent to which outputs reflect collective linguistic norms rather than individual authorship. In the context of ChatGPT, representativity manifests as a product of the model’s statistical training and its dependence on vast corpora of human-generated content. This section explores how ChatGPT embodies socialized, de-individualized language patterns, situating its outputs within a broader sociocultural and computational framework.

1. Training Corpora and Collective Knowledge

ChatGPT is trained on an immense, heterogeneous dataset comprising books, articles, websites, and other publicly available text. Each piece of content contributes to the model’s learned parameters, which encode how words, phrases, and sentence structures co-occur. Consequently, the model’s outputs reflect patterns prevalent in society at large rather than the idiosyncrasies of a single individual.

From a social perspective, this process mirrors the way humans internalize societal norms. Just as people learn language through exposure to community discourse, ChatGPT internalizes the “voice of the many” in its corpus. However, unlike a human who can combine this knowledge with lived experience and subjective intention, the model operates solely on probabilistic predictions. It reproduces patterns that are statistically frequent, which inherently privileges conventional expressions over novel or personal ones.
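How frequency in a corpus becomes preference in generation can be illustrated with the simplest possible “model”: a bigram counter over a toy corpus. The sentences and counts below are invented for illustration, and a real LLM learns far richer statistics, but the logic of privileging the frequent over the idiosyncratic is the same:

```python
from collections import Counter, defaultdict

# Toy "corpus": conventional phrasing appears more often than the
# idiosyncratic one, as in any large collection of human text.
corpus = [
    "thank you for your help",
    "thank you for your time",
    "thank you for your time",
    "thank you kindly stranger",
]

# Count bigram frequencies: the simplest form of co-occurrence statistics.
bigrams = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for a, b in zip(words, words[1:]):
        bigrams[a][b] += 1

# The most frequent continuation wins: the majority pattern, not the outlier.
print(bigrams["your"].most_common(1)[0][0])  # -> "time"
```

The one attested but rare continuation (“help”) is not forgotten, but it is systematically outvoted: generation reproduces the voice of the many.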

2. De-Individualization and the Silencing of A Man

Heidegger’s distinction between A Man and Das Man is especially illuminating in this context. While A Man represents individual creativity and authentic engagement with the world, Das Man embodies conformity and normative behavior. ChatGPT-generated text, by its very design, aligns with Das Man. The outputs are de-individualized: they lack the unique perspective, personal experience, and intentionality that characterize human authorship.

This de-individualization is evident in multiple dimensions of the text. Stylistically, the model tends to generate neutral, broadly comprehensible phrasing. Semantically, it emphasizes widely accepted knowledge or consensus views. Pragmatically, it adheres to general communicative norms. In other words, ChatGPT’s “voice” is an averaged amalgam of societal discourse, rather than an expression of an individual consciousness. These silicon samples are thus representative of collective norms, but not of personal agency.

3. Statistical Averaging and Social Biases

The representativity of ChatGPT is mediated by statistical averaging. Tokens and phrases that occur more frequently in the training corpus are more likely to be generated, creating outputs that mirror majority usage and widely circulated ideas. While this enhances coherence and predictability, it also reproduces existing social biases and ideological leanings embedded in the corpus.

For instance, certain cultural perspectives, idioms, or professional conventions may dominate the generated text simply because they are more prevalent in source materials. This highlights a dual nature of representativity: on one hand, ChatGPT captures broadly recognized social patterns; on the other hand, it risks reinforcing normative hierarchies and silencing minority voices. Understanding this dynamic is crucial for both scholars and the public, as it situates AI-generated outputs within a sociotechnical ecosystem rather than treating them as neutral or universally representative.
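A toy simulation shows how frequency-proportional sampling reproduces whatever skew the sources contain. The counts below are hypothetical, not measured from any real corpus; they merely demonstrate that sampling in proportion to corpus frequency makes the majority association dominate the output:

```python
import random
from collections import Counter

random.seed(0)

# Hypothetical corpus counts for the word following "The CEO said":
# the distribution simply mirrors whatever the sources contained.
counts = {"he": 70, "she": 25, "they": 5}
tokens = list(counts)
weights = [counts[t] for t in tokens]

# Sample 1,000 continuations in proportion to corpus frequency.
draws = Counter(random.choices(tokens, weights=weights, k=1000))
print(draws.most_common(1)[0][0])  # the majority pattern dominates
```

No single draw is predetermined, but in aggregate the outputs restate the corpus skew: statistical reproduction is bias reproduction unless it is actively counteracted.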

4. Representativity Across Contexts

ChatGPT’s outputs also vary in representativity depending on the task context. In generating technical explanations, the model reflects disciplinary consensus; in social or conversational text, it mirrors popular opinion or culturally dominant discourse. The model’s ability to adapt across contexts demonstrates its proficiency in capturing general patterns, but these patterns remain collective rather than personal. In every case, the output exemplifies what is statistically normal, providing a voice for Das Man rather than A Man.

5. Implications for Knowledge and Authorship

The representativity of ChatGPT-generated text has significant implications for how we understand authorship, creativity, and knowledge production. If the model primarily reflects collective norms, then its outputs should be interpreted not as the insight of a single author, but as a mirror of societal discourse. Scholars, educators, and content creators must recognize that AI-generated text does not originate from intentional thought, yet it can shape perceptions of authority and consensus.

By examining representativity, we gain a critical understanding of ChatGPT’s sociolinguistic role. It provides a lens into the shared patterns of human knowledge, while simultaneously underscoring the absence of authentic individual agency. These insights prepare us to interrogate other dimensions of AI-generated text, such as structural consistency and the interplay between form and content.

III. Structural Consistency and Generative Logic

While representativity explains whose voice ChatGPT reflects, structural consistency addresses how the model organizes language to produce coherent, fluent outputs. Structural consistency refers to the predictable patterns of syntax, discourse, and semantic organization in AI-generated text. Understanding this property is critical for both appreciating the sophistication of large language models (LLMs) and critically evaluating their limitations.

1. Autoregressive Generation and Token-Level Prediction

At the core of ChatGPT’s generative process lies the autoregressive transformer architecture. The model predicts each subsequent token based on preceding tokens, using probability distributions derived from its training corpus. This sequential prediction ensures that each word or phrase is contextually appropriate, aligning with established patterns in human language.

The result is a form of structural consistency: sentences flow logically, ideas are typically well-organized, and paragraph-level cohesion emerges naturally. Unlike random text generation, which produces disjointed sequences, ChatGPT’s outputs exhibit patterns that readers interpret as deliberate and coherent. The underlying mechanism is purely statistical, yet it mimics human conventions of writing, creating an illusion of intentionality.

2. Syntax, Semantics, and Discourse Coherence

Structural consistency manifests at multiple linguistic levels:

  • Syntax: The model follows grammatical rules learned from the corpus, producing sentences that are syntactically well-formed. Subject-verb agreement, word order, and clause attachment track the high-frequency patterns of the training data, contributing to readability and fluency.

  • Semantics: Word choices and collocations align with common usage patterns, ensuring that the generated content makes sense in context.

  • Discourse: At higher levels, paragraph organization and logical progression are maintained through learned patterns of argumentation, topic introduction, and conclusion formulation.

These layers of consistency work together to produce outputs that, while generated probabilistically, approximate the structural sophistication of human writing.

3. Pattern Replication and Predictability

Structural consistency is closely tied to the model’s tendency to replicate familiar patterns. ChatGPT tends to favor sequences that are statistically likely, which often correspond to widely recognized rhetorical or explanatory structures. For example, in argumentative text, it frequently employs an introduction-body-conclusion pattern; in descriptive text, it organizes content from general to specific details.

This predictability enhances clarity and comprehension but simultaneously limits novelty. The model’s outputs may lack the stylistic idiosyncrasies, unconventional syntax, or surprising narrative shifts that characterize highly creative human writing. In other words, structural consistency supports coherence at the expense of individual stylistic innovation, reinforcing the alignment with Das Man rather than A Man.

4. Template-Like Structures and the Role of Prompts

The prompt-driven nature of ChatGPT further shapes structural consistency. Users’ prompts effectively serve as scaffolds that guide the model toward specific discourse structures. For instance, asking for a “three-point argument” or a “step-by-step explanation” triggers recognizable organizational templates embedded in the model’s learned patterns.

This template-like behavior underscores two points: first, it demonstrates the model’s remarkable capacity to generalize structural conventions across contexts; second, it highlights the mechanized nature of these outputs. While the text appears coherent and purposeful, it is fundamentally the product of learned templates applied probabilistically, not conscious planning or intentional argumentation.

5. Implications for Human Perception and Interaction

The structural consistency of ChatGPT has important implications for how humans perceive AI-generated content. Fluency, logical flow, and structural regularity can lead readers to attribute authority or intelligence to the model, even though the underlying process lacks consciousness or intentionality. This can influence educational, journalistic, and creative contexts, where structural coherence is often equated with understanding or expertise.

At the same time, the reliance on predictable structures exposes a vulnerability: over time, widespread use of AI-generated text may homogenize discourse, reducing stylistic diversity and creative risk-taking. Awareness of these dynamics is essential for responsible AI deployment and critical engagement with machine-generated content.

6. Linking Structure to Representativity

Structural consistency and representativity are interdependent. The model’s adherence to structural norms reinforces its representation of collective language patterns. By producing text that is both statistically typical and structurally coherent, ChatGPT strengthens the impression of a socially normative “voice,” amplifying the Das Man effect. This combination of form and content underscores the dual nature of AI-generated language: it is fluent and readable, yet fundamentally collective and de-individualized.

IV. Case Analysis

To illustrate the theoretical and computational concepts discussed, we examine concrete examples of ChatGPT-generated text across different contexts, highlighting how representativity and structural consistency manifest in practice.

1. Academic Explanation Example

Consider a prompt requesting ChatGPT to explain the concept of photosynthesis to a general audience:

"Explain photosynthesis in simple terms for a high school student."

The output typically follows a predictable, structured pattern:

  1. Introduction: “Photosynthesis is the process by which plants make their own food using sunlight, water, and carbon dioxide.”

  2. Body: Explains the chemical process, the role of chlorophyll, and the production of glucose and oxygen.

  3. Conclusion: Emphasizes the importance of photosynthesis for life on Earth.

Here, structural consistency is evident in the introduction-body-conclusion format, with coherent sentences and logical progression. Representativity is also clear: the text reflects widely accepted scientific knowledge and standard pedagogical explanations. The language is neutral and broadly accessible, illustrating the de-individualized, Das Man-like voice rather than any unique authorial style.

2. Opinion Generation Example

In another case, a prompt asks ChatGPT to discuss the advantages and disadvantages of remote work:

"Discuss the pros and cons of working from home."

The generated text often mirrors the conventional arguments found in public discourse:

  • Pros: Flexibility, reduced commuting, work-life balance.

  • Cons: Social isolation, distractions, potential burnout.

The output demonstrates representativity by summarizing commonly cited viewpoints, effectively reflecting the majority consensus in media and literature. Structural consistency is maintained through parallelism in sentence construction (“Pros include… Cons include…”), creating a predictable and readable argumentative structure. While informative, the text lacks subjective insight or novel perspectives, reinforcing the model’s alignment with Das Man.

3. Creative Writing Example

Even in creative tasks, such as writing a short story prompt, structural patterns remain prominent. For instance, given the prompt:

"Write a short story about a cat who learns to play the piano."

ChatGPT often produces:

  1. Introduction: Introduces the protagonist cat and its environment.

  2. Rising Action: The cat discovers the piano and experiments.

  3. Climax: The cat performs a recognizable tune.

  4. Resolution: Positive conclusion emphasizing accomplishment or learning.

The story exhibits structural coherence through a conventional narrative arc (beginning, middle, end) and grammatically fluent sentences. Representativity appears in the choice of common tropes (animal protagonist, overcoming challenge, moral lesson), reflecting widespread storytelling norms rather than idiosyncratic creativity.

4. Implications of the Cases

Across these examples, two patterns emerge:

  1. Structural Consistency: In all contexts, ChatGPT maintains coherent syntax, logical flow, and predictable organization. Whether the task is explanatory, argumentative, or narrative, the text exhibits patterns that readers interpret as intentional and fluent.

  2. Representativity: The content aligns with collective knowledge and socially normalized perspectives. Scientific explanations echo textbook conventions, opinions reflect majority viewpoints, and creative writing mirrors common narrative tropes.

These cases highlight the dual nature of ChatGPT outputs: while the text is readable, reliable, and socially intelligible, it is inherently de-individualized. Readers may perceive authority or expertise, but the underlying process is a statistical reproduction of collective norms, not authentic human insight.

5. Bridging Theory and Practice

By connecting these practical examples to the theoretical framework, we see the real-world manifestation of Das Man in AI-generated text. Structural consistency ensures that outputs are coherent and legible, while representativity guarantees that they align with normative social knowledge. Together, these features demonstrate the silicon samples’ dual function as both communicative tools and mirrors of collective discourse.

V. Critique and Reflection

While ChatGPT demonstrates impressive structural consistency and representativity, these very qualities raise significant philosophical, social, and practical concerns. Understanding these risks is essential for both academic scrutiny and responsible public engagement.

1. De-Individualization and the Loss of Human Voice

One of the most profound critiques relates to the de-individualization inherent in AI-generated text. ChatGPT outputs exemplify Das Man: they reflect statistical averages derived from vast corpora rather than individual perspectives (A Man). While this collective voice ensures general comprehensibility, it erases nuance, subjectivity, and personal insight.

In educational contexts, reliance on AI-generated text can inadvertently homogenize student writing, suppressing stylistic diversity and discouraging authentic expression. In journalism or opinion writing, audiences may be exposed primarily to normalized viewpoints, subtly reinforcing conformity. The risk is that AI tools, by default, privilege the voice of the majority, marginalizing unique or dissenting perspectives.

2. Reinforcement of Social Biases

Representativity is double-edged. While ChatGPT’s outputs reflect widely accepted norms, they also inherit the biases present in the training data. Gender, racial, cultural, and ideological biases embedded in source texts can be perpetuated, even subtly, through the model’s statistical reproduction.

For example, prompts about leadership or professional competence may inadvertently generate stereotyped associations, reflecting societal norms encoded in the corpus. Similarly, creative outputs may favor dominant cultural narratives, sidelining minority voices or alternative viewpoints. From a social justice perspective, this raises concerns about equitable representation and the ethical deployment of AI in public discourse.

3. The Illusion of Expertise

Structural consistency can create the illusion of intelligence or expertise. Fluent, well-organized outputs may lead readers to overestimate the model’s understanding, attributing human-like reasoning where none exists. This illusion is particularly salient in academic or technical contexts, where authority is often inferred from clarity and coherence.

Philosophically, this underscores a tension between form and content: ChatGPT’s outputs are coherent and socially representative, yet they are devoid of authentic agency, intentionality, or subjective understanding. Readers may mistake the polished surface for genuine comprehension, blurring the boundary between human and machine knowledge.

4. Homogenization of Discourse

The combination of structural consistency and statistical representativity may lead to a broader homogenization of language and ideas. Over time, widespread reliance on AI-generated text could reduce stylistic variety, discourage rhetorical experimentation, and standardize thought patterns across academic, professional, and public domains.

This phenomenon has implications for creativity, critical thinking, and cultural diversity. Just as mass media in the past shaped collective norms, AI-generated content now has the potential to further entrench uniformity, creating a feedback loop where the Das Man effect is amplified. In this sense, AI does not merely mirror society; it can actively shape it by privileging certain patterns over others.

5. Philosophical and Practical Implications

From a Heideggerian perspective, ChatGPT challenges conventional notions of authorship and agency. Texts generated by AI are not the product of an authentic A Man, yet they carry the authority of polished, coherent language. This tension invites critical reflection on the nature of communication, creativity, and knowledge in the digital age.

Practically, stakeholders in education, journalism, and policymaking must navigate these challenges carefully. AI-generated content should be treated as a tool that reflects collective norms rather than an autonomous voice of authority. Encouraging transparency, critical literacy, and contextual interpretation is essential to mitigate risks associated with de-individualization and bias.

6. Towards Responsible Engagement

Critique does not imply that ChatGPT is inherently detrimental. Instead, understanding its limitations allows us to harness its capabilities responsibly. Educators can integrate AI as a co-creative tool, prompting students to critically engage with outputs rather than passively accept them. Policymakers and content creators can leverage representativity for summarization, accessibility, and information dissemination, while remaining vigilant about bias and homogenization.

Ultimately, the reflection emphasizes balance: appreciating the model’s structural and representational strengths, while actively countering its potential to obscure individuality, reinforce bias, and shape discourse in unintended ways. By combining technical understanding with philosophical insight, we gain a nuanced framework for evaluating the role of AI in contemporary knowledge ecosystems.

VI. Conclusion and Outlook

The analysis presented in this article underscores the dual nature of ChatGPT-generated texts as both technically sophisticated and socially representative artifacts. By examining the theoretical foundations, representativity, structural consistency, and practical examples, we have shown that ChatGPT operates less as an individual author (A Man) and more as a manifestation of collective norms (Das Man). Its outputs are coherent, readable, and aligned with widely recognized patterns, yet fundamentally de-individualized and statistically derived.

1. Summary of Key Findings

First, the philosophical framework demonstrates that ChatGPT embodies the Das Man effect. Drawing on Heidegger, we observed that AI-generated text reflects societal expectations and normative discourse rather than authentic, personal insight. This de-individualization distinguishes AI “authorship” from human authorship, with important implications for creativity, agency, and the attribution of expertise.

Second, from a semiotic and representational standpoint, ChatGPT outputs function as silicon samples—materialized reflections of the collective knowledge embedded in its training corpus. Representativity ensures that generated text is familiar, socially intelligible, and broadly aligned with conventional norms. However, this also risks homogenizing language, reinforcing dominant cultural narratives, and marginalizing minority perspectives.

Third, structural consistency emerges as a defining characteristic of LLM-generated text. Through autoregressive prediction and transformer-based architectures, ChatGPT produces fluent, grammatically coherent, and logically organized outputs. While this consistency enhances readability and perceived reliability, it further reinforces the alignment with Das Man, privileging normativity over idiosyncrasy and predictability over novelty.

Fourth, practical examples across educational, argumentative, and creative contexts illustrate these phenomena in action. From explanatory texts to short stories, ChatGPT maintains structural coherence while mirroring socially prevalent patterns, confirming both its representativity and template-driven tendencies.

Finally, critical reflection highlights the potential risks and ethical considerations of relying on AI-generated content. De-individualization, statistical bias, and the illusion of expertise present challenges in education, journalism, and public discourse. Simultaneously, responsible deployment can harness AI’s strengths for accessibility, summarization, and pattern recognition, provided that users remain critically engaged and aware of inherent limitations.

2. Societal and Practical Implications

The findings carry profound societal implications. In educational settings, AI can serve as a powerful assistant, helping students organize ideas, access collective knowledge, and practice structured writing. However, overreliance may suppress personal expression and critical thinking, creating uniformity in discourse.

In journalism and public communication, the fluency and perceived authority of AI-generated text can influence public opinion, highlighting the importance of media literacy and transparency about AI authorship. Policymakers and organizations must balance the efficiency and accessibility benefits of AI with ethical responsibilities to mitigate bias, prevent homogenization, and protect minority voices.

Furthermore, the pervasive presence of AI in knowledge production invites reflection on authorship and epistemology. If AI can generate coherent and socially representative texts without intentionality or subjective understanding, society must reconsider how authority, originality, and credibility are assigned in the digital age.

3. Future Research Directions

Building on these insights, several avenues for future research emerge:

  1. Individualization in AI Outputs: Developing models or techniques that allow LLMs to integrate more authentic variability, reflecting nuanced perspectives without sacrificing coherence.

  2. Bias Mitigation and Equity: Investigating methods to detect and reduce embedded societal biases in AI outputs, ensuring that minority viewpoints and culturally diverse narratives are preserved.

  3. Longitudinal Effects on Discourse: Studying how widespread use of AI-generated text affects language homogenization, creativity, and critical thinking in education and public communication.

  4. Interdisciplinary Approaches: Combining philosophy, linguistics, computer science, and social sciences to better understand AI’s epistemic, cultural, and ethical implications.

4. Concluding Remarks

In conclusion, ChatGPT exemplifies a new kind of textual agent: one that is technically proficient, socially representative, and structurally coherent, yet fundamentally de-individualized. By bridging philosophy, semiotics, and computational logic, this article provides a framework for understanding the “silicon samples” it generates. Recognizing ChatGPT as Das Man rather than A Man equips scholars, educators, and the public with critical tools to navigate the promises and pitfalls of AI-mediated communication.

Responsible engagement with AI requires balancing its efficiency, accessibility, and representational fidelity with awareness of its de-individualization and bias risks. By doing so, society can harness the benefits of AI while preserving human creativity, agency, and diversity—ensuring that AI remains a tool that complements rather than supplants authentic human expression.

References

  1. Heidegger, M. (1962). Being and Time (J. Macquarrie & E. Robinson, Trans.). Harper & Row. (Original work published 1927)

  2. Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610–623. https://doi.org/10.1145/3442188.3445922

  3. Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., … Amodei, D. (2020). Language Models are Few-Shot Learners. Advances in Neural Information Processing Systems, 33, 1877–1901.

  4. Chomsky, N. (2006). Language and Mind (3rd ed.). Cambridge University Press.

  5. Floridi, L., & Chiriatti, M. (2020). GPT-3: Its Nature, Scope, Limits, and Consequences. Minds and Machines, 30, 681–694. https://doi.org/10.1007/s11023-020-09548-1

  6. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., … Polosukhin, I. (2017). Attention Is All You Need. Advances in Neural Information Processing Systems, 30, 5998–6008.

  7. Bickmore, T., & Cassell, J. (2005). Social Dialogue with Embodied Conversational Agents. In K. Dautenhahn et al. (Eds.), Socially Intelligent Agents: Creating Relationships with Computers and Robots (pp. 23–54). Springer.

  8. Marcus, G., & Davis, E. (2020). GPT-3, Bloviator: OpenAI’s Language Generator Has No Idea What It’s Talking About. MIT Technology Review. https://www.technologyreview.com/2020/08/22/1007539/gpt-3-openai-language-generator-artificial-intelligence/

  9. McKee, H. A., & DeVoss, D. N. (2007). The New Media of Composition. Hampton Press.

  10. Floridi, L. (2019). Artificial Intelligence, Deepfakes and a Future of Epi-Truths. Philosophy & Technology, 32, 1–4. https://doi.org/10.1007/s13347-019-00359-3