ChatGPT Security Analysis: Threats and Privacy Risks

2025-09-19 22:01:25
11

Introduction 

Paragraph 1:
The rapid proliferation of large language models, particularly OpenAI’s ChatGPT, has transformed the way humans interact with machines. From assisting in education and scientific research to enhancing business productivity, ChatGPT’s capabilities have captured public imagination and attracted widespread adoption. Yet, alongside these benefits lies an often-overlooked concern: the security and privacy implications of relying on generative AI systems. As models become increasingly complex and integrated into daily life, their vulnerabilities—ranging from malicious exploitation to inadvertent information leakage—pose risks that are no longer hypothetical. Understanding these threats is critical not only for developers and organizations but also for policymakers, educators, and the general public.

Paragraph 2:
This article presents a comprehensive analysis of ChatGPT’s security and privacy risks. It examines technical vulnerabilities at the data and model level, explores user and societal implications, and considers international governance challenges. Drawing from real-world incidents, case studies, and academic research, we aim to illuminate the trade-offs between AI innovation and safety. By combining empirical analysis with ethical and policy perspectives, this work seeks to equip readers with a nuanced understanding of the potential dangers of ChatGPT and to provide actionable recommendations for mitigating these risks in practice.

37200_rxwu_5265.webp

II. Security Threats Analysis

As ChatGPT and other large language models (LLMs) become increasingly integrated into daily applications, understanding their security vulnerabilities is critical. These threats manifest at multiple levels—ranging from the underlying model and data infrastructure to user-facing applications and broader societal interactions. In this section, we categorize and analyze these threats to provide a comprehensive picture of the security landscape.

1. Data and Model-Level Threats

1.1 Training Data Leakage

ChatGPT is trained on massive datasets scraped from public sources, user interactions, and licensed corpora. Despite efforts to anonymize and filter data, sensitive information can inadvertently be encoded within the model. Malicious actors may exploit this via model inversion attacks, reconstructing private data from the model’s outputs. For instance, personal identifiers, proprietary code, or confidential business communications could theoretically be inferred, highlighting the importance of robust data sanitization and secure training protocols.

1.2 Model Inversion and Extraction

Model inversion attacks aim to extract training data or infer internal knowledge embedded in the model. Techniques such as membership inference allow adversaries to determine whether a specific piece of data was part of the training set. Similarly, model extraction attacks attempt to replicate the functionality of ChatGPT by querying it extensively, potentially allowing competitors or attackers to duplicate proprietary models without authorization. These vulnerabilities threaten intellectual property, confidentiality, and even user trust.

1.3 Adversarial and Jailbreak Attacks

Adversarial attacks involve carefully crafted inputs designed to manipulate the model’s behavior. In the context of ChatGPT, “jailbreak” attacks bypass content moderation filters, tricking the model into generating harmful or forbidden content. Such attacks demonstrate that even sophisticated LLMs with safety alignment are not immune to manipulation, raising concerns about misuse in generating misinformation, offensive content, or instructions for harmful activities.

2. System and Application-Level Threats

2.1 Prompt Injection

Prompt injection attacks involve injecting malicious instructions into user inputs to influence the model’s outputs. For example, a user could submit a prompt that covertly instructs ChatGPT to reveal confidential information or perform unintended tasks. Because ChatGPT executes instructions based on context, these injections can undermine both security policies and organizational safeguards.

2.2 API Misuse and Unauthorized Access

As ChatGPT is deployed via APIs in enterprise systems, improper access control can enable unauthorized queries, data exfiltration, or automated abuse. Attackers may leverage vulnerabilities in authentication mechanisms, cloud integrations, or third-party applications to gain illicit access. In sectors like healthcare, finance, and legal services, such breaches could result in severe regulatory and financial consequences.

2.3 Supply Chain and Integration Risks

Organizations integrating ChatGPT into their systems often rely on third-party libraries, plugins, or frameworks. Each component introduces potential supply chain vulnerabilities. Compromised plugins, outdated dependencies, or misconfigured systems may allow attackers to manipulate ChatGPT’s behavior or extract sensitive data, underscoring the need for rigorous auditing and cybersecurity hygiene.

3. User and Societal-Level Threats

3.1 Misinformation and Malicious Content Generation

ChatGPT can generate highly persuasive and coherent text, which may be exploited to spread false information, manipulate public opinion, or create phishing content. Even unintentional outputs can propagate misconceptions if not properly verified. The societal impact is significant, particularly during crises, elections, or in education where trust and accuracy are paramount.

3.2 Social Engineering and Fraud

Adversaries can employ ChatGPT to assist in social engineering attacks, crafting convincing emails, messages, or scripts for scams. The AI’s linguistic sophistication amplifies human susceptibility, creating new vectors for financial, reputational, and personal harm.

3.3 Critical Infrastructure and National Security Risks

As AI systems like ChatGPT are integrated into government, healthcare, and industrial systems, security breaches may pose threats to critical infrastructure. Malicious exploitation could disrupt operations, manipulate information flows, or even endanger public safety. The intersection of AI security and national security highlights the urgency of robust protective measures.

Summary

In summary, ChatGPT presents multi-level security risks that span data, model, system, and societal dimensions. From training data leakage and adversarial attacks to API misuse and misinformation, these vulnerabilities demand attention from researchers, developers, policymakers, and end-users alike. Proactive measures—including secure model training, adversarial robustness, strict access controls, and public awareness—are essential to mitigating the potential harms while preserving the utility of large language models.

III. Privacy Risks Analysis

While security threats focus on unauthorized access, manipulation, and malicious exploitation, privacy risks concern the potential exposure and misuse of sensitive information. ChatGPT interacts with users in ways that may inadvertently collect, store, or reveal personal and organizational data. These privacy challenges arise from technical limitations, regulatory gaps, and ethical dilemmas. This section explores the privacy risks of ChatGPT at three levels: user interactions, legal and compliance frameworks, and societal-ethical dimensions.

1. Privacy Risks in User Interactions

1.1 Sensitive Information Exposure

Every interaction with ChatGPT carries the potential for sensitive information leakage. Users often share personal details, business data, or intellectual property when seeking AI assistance. Although the model does not inherently “remember” individual conversations across sessions for typical public interfaces, training logs, analytics, and API integrations can store data for model improvement or monitoring purposes. Inadequate anonymization or accidental inclusion of identifiers in logs may lead to privacy violations.

1.2 Long-Term Data Storage and Reuse

Even when anonymized, user data can be aggregated over time and repurposed in ways not anticipated by users. For instance, prompts containing seemingly innocuous data could be indirectly linked to other datasets, creating the potential for re-identification. Such long-term storage raises questions about informed consent and the adequacy of current data protection measures. Users often lack visibility into how their inputs are stored, analyzed, or potentially shared with third parties.

1.3 Cross-Platform and Third-Party Exposure

ChatGPT is increasingly embedded in multiple platforms—web apps, enterprise tools, and mobile applications. Each integration point can introduce privacy risks, especially when third-party vendors have access to user inputs. The interaction of multiple systems may inadvertently amplify data exposure, creating new channels for privacy breaches that are difficult to monitor or regulate.

2. Legal and Regulatory Challenges

2.1 Compliance with Global Privacy Frameworks

Regulations such as the European Union’s General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) impose strict requirements on data collection, processing, and retention. Ensuring ChatGPT’s compliance across jurisdictions is complex due to differences in data definitions, consent requirements, and enforcement mechanisms. Non-compliance risks not only legal penalties but also reputational harm for organizations deploying the AI.

2.2 Cross-Border Data Flow

Many LLMs, including ChatGPT, operate on cloud infrastructures that store and process data across international borders. This raises challenges for compliance with regulations that restrict data transfer or impose additional safeguards. For example, sensitive personal data from EU users may be processed in jurisdictions with weaker privacy protections, creating regulatory conflicts and potential liability.

2.3 Accountability and Transparency Gaps

Current legal frameworks often struggle to attribute responsibility for privacy breaches in AI systems. Unlike traditional software, LLMs generate outputs dynamically based on learned patterns, complicating notions of causality and accountability. Users, developers, and platforms may all share responsibility, but regulatory guidance on liability is still emerging, leaving gaps in enforcement and protection.

3. Societal and Ethical Dimensions

3.1 Informed Consent and User Awareness

Many users are unaware of the extent to which their interactions with ChatGPT are recorded or utilized. Current consent mechanisms, such as click-through agreements, often fail to provide clear, understandable information. This lack of informed consent undermines ethical standards and exposes users to privacy risks without their explicit knowledge.

3.2 Algorithmic “Black Box” and Data Use

ChatGPT’s decision-making processes are opaque, limiting users’ understanding of how their data influences outputs. The black-box nature of AI models creates ethical concerns: users cannot easily verify whether sensitive data has been used appropriately, nor can they fully assess potential risks of misuse.

3.3 Implications for Vulnerable Populations

Privacy risks disproportionately affect vulnerable populations, including children, patients, or individuals in authoritarian contexts. Data collected through ChatGPT could be exploited for surveillance, discrimination, or manipulation. Ethical deployment requires additional safeguards to protect these groups, emphasizing fairness, transparency, and accountability.

Summary

ChatGPT’s privacy risks are multi-faceted, spanning individual interactions, legal compliance, and societal ethics. Sensitive information may be exposed through data storage, cross-platform integration, or indirect re-identification. Regulatory frameworks, though evolving, face challenges in enforcing accountability across jurisdictions. Ethical considerations, including informed consent and protection of vulnerable populations, further complicate the landscape. Addressing these risks requires a combination of technical privacy-preserving methods, robust governance structures, and clear communication with users about data practices.

IV. Case Studies and Empirical Analysis

Understanding the theoretical risks of ChatGPT is essential, but real-world incidents provide concrete insights into how these risks manifest. By examining documented cases of security breaches, privacy controversies, and misuse, we can better evaluate the scope, severity, and implications of ChatGPT’s vulnerabilities. This section reviews notable examples across technical, user, and governance dimensions.

1. Technical Security Breaches

1.1 Prompt Injection and Jailbreak Attacks

A prominent study by OpenAI and independent researchers demonstrated that carefully crafted “jailbreak prompts” could bypass content moderation filters in ChatGPT, inducing the model to produce prohibited content or reveal system instructions. For example, prompts instructing the model to ignore safety guidelines enabled generation of offensive text or instructions for unsafe tasks. These incidents underscore the limitations of alignment strategies and the ongoing challenge of adversarial robustness in large language models.

1.2 Model Extraction and Intellectual Property Risks

Empirical research has shown that repeated, strategic queries to ChatGPT can allow attackers to partially reconstruct the model’s knowledge base. One case study involved researchers simulating model extraction attacks on open-source derivatives, demonstrating that proprietary algorithms and datasets could be reverse-engineered to some extent. This highlights intellectual property vulnerabilities, particularly for organizations integrating AI in commercial products or sensitive research domains.

1.3 Data Leakage Incidents

In 2023, multiple reports indicated that certain AI providers inadvertently exposed user data through misconfigured APIs or storage systems. Although these incidents did not involve large-scale breaches of ChatGPT itself, they illustrate potential vectors for sensitive data exposure when LLMs are deployed in enterprise or third-party contexts. Even minor lapses in infrastructure security can result in significant privacy and compliance consequences.

2. Privacy Controversies

2.1 Collection and Use of User Data

OpenAI’s ChatGPT interface collects prompts to improve model performance and conduct research. In 2022, debates emerged over whether users were sufficiently informed about how their inputs were stored and analyzed. Critics argued that the privacy policy lacked clarity on data retention duration, aggregation practices, and the possibility of sharing anonymized data with partners. While these practices align with standard industry norms, they highlight tensions between model improvement and user privacy expectations.

2.2 Cross-Border Privacy Concerns

A comparative study of AI regulation highlighted conflicts arising from global deployment. For instance, data submitted by European users could be processed on servers outside the European Union, potentially violating GDPR principles without explicit safeguards. This issue was observed in multiple LLM deployments, emphasizing the challenge of reconciling global cloud infrastructures with local data protection laws.

2.3 Ethical Concerns in Sensitive Domains

In educational and healthcare contexts, ChatGPT has occasionally been used to process sensitive student or patient information. Case reports indicate that inadvertent sharing of personal or clinical details with the AI could create exposure risks, even if anonymized. These instances illustrate the need for strict operational protocols and ethical oversight when deploying AI in environments involving vulnerable populations.

3. International Governance and Policy Case Studies

3.1 European Union AI Act Initiatives

The European Union has proposed the AI Act, classifying high-risk AI systems and imposing strict compliance obligations. Empirical assessments indicate that ChatGPT-like systems would likely fall under high-risk categories due to potential impacts on safety, privacy, and human rights. These regulatory initiatives provide a practical example of how governance frameworks can mitigate privacy and security risks, though challenges remain in enforcement and cross-border application.

3.2 Industry Self-Regulation Examples

Several tech companies have implemented internal AI risk mitigation strategies, such as prompt filtering, monitoring for abuse, and logging anomalous activity. Case studies from Microsoft’s integration of GPT into enterprise products show that combining human oversight with automated security measures reduces risk, yet cannot fully eliminate vulnerabilities. These efforts highlight the importance of multi-layered defense mechanisms and continuous risk assessment.

3.3 Lessons from Misuse in Public Domains

Real-world misuse of ChatGPT in social media and public forums, such as generating misleading news summaries or automated spam campaigns, provides empirical evidence of societal-level risks. Studies tracking the spread of AI-generated content reveal patterns of amplification and the potential for rapid dissemination of false information. These observations reinforce the need for both technical safeguards and public literacy campaigns.

4. Empirical Insights and Implications

Analyzing these cases collectively reveals several patterns:

  1. Technical Vulnerabilities Are Exploitable – Adversarial attacks, jailbreak prompts, and extraction attempts demonstrate that even well-aligned models are not impervious.

  2. User Awareness and Behavior Matter – Many privacy risks arise from unintentional sharing of sensitive data, emphasizing the role of informed consent and education.

  3. Regulation and Governance Are Evolving – Legal and policy frameworks are gradually addressing these risks, but gaps remain, especially in international contexts.

  4. Layered Mitigation Strategies Are Essential – Combining secure system architecture, technical defenses, and organizational policies is necessary to reduce both security and privacy threats.

Summary

Real-world evidence confirms that ChatGPT faces concrete security and privacy challenges. Technical exploits, data exposure incidents, cross-border regulatory conflicts, and societal misuse illustrate that these risks are not merely theoretical. Case studies reinforce the need for holistic mitigation strategies encompassing technology, governance, ethics, and user education. Understanding empirical patterns helps policymakers, organizations, and developers make informed decisions about safe deployment and responsible use of generative AI systems.

V. Critical Discussion

The empirical analysis of ChatGPT’s security and privacy issues highlights the multifaceted nature of risks associated with large language models. Beyond technical vulnerabilities and privacy lapses, a critical discussion must consider abstract theoretical frameworks, societal implications, and the ethical and policy dimensions of AI deployment. This section integrates these perspectives to provide a holistic understanding of the trade-offs and responsibilities inherent in generative AI use.

1. Theoretical Abstractions and Systemic Risks

From a theoretical standpoint, ChatGPT can be conceptualized as a socio-technical system in which algorithmic, infrastructural, and human elements interact dynamically. Game theory and risk modeling offer insights into potential adversarial behaviors: attackers exploit predictable model behavior, while organizations and regulators attempt to implement countermeasures. This dynamic resembles a continuous, multi-agent security game, where misalignment between incentives—such as profit, utility, or safety—can produce systemic vulnerabilities.

Another theoretical lens comes from information security and privacy models, such as the CIA triad (Confidentiality, Integrity, Availability). While ChatGPT emphasizes availability and utility, confidentiality and integrity are often compromised through prompt injection, model inversion, and data leakage. These imbalances underscore the tension between maximizing model utility and maintaining robust protective measures, suggesting that risk-aware design must be prioritized at every stage of deployment.

2. Societal and Ethical Critique

2.1 Transparency and Accountability

ChatGPT exemplifies the “black-box” problem in AI. Its decision-making processes are opaque, limiting users’ ability to evaluate outputs or verify proper use of their data. This opacity raises ethical concerns: who bears responsibility when AI outputs cause harm? The diffusion of responsibility across developers, operators, and users complicates accountability, particularly in cross-border or multi-stakeholder contexts.

2.2 Equity and Inclusion

Ethical concerns also extend to societal equity. Vulnerable populations—such as minors, patients, or individuals in restrictive regimes—may experience disproportionate privacy and security risks. Without deliberate design interventions, generative AI may exacerbate existing inequalities, for example, through biased outputs, discriminatory content, or exposure of sensitive data. Addressing these issues requires embedding fairness and inclusion principles into both technical design and policy frameworks.

2.3 Misuse and Societal Harm

The potential for ChatGPT to generate misinformation, facilitate fraud, or assist in socially engineered attacks presents broader societal risks. These harms are not merely technical; they reflect complex interactions between technology and human behavior, including susceptibility to persuasion, trust in automated systems, and the dynamics of information diffusion. Ethical deployment requires anticipating both intended and unintended consequences, ensuring that safeguards extend beyond model alignment to consider societal impact.

3. Policy and Governance Critique

3.1 Regulatory Gaps

Current regulatory frameworks are nascent and fragmented. Although initiatives such as the EU AI Act or OECD AI principles represent progress, gaps remain in enforcement, liability attribution, and international coordination. ChatGPT illustrates the challenges of regulating emergent technology: models operate across borders, integrate into diverse applications, and continuously evolve, making static legal prescriptions insufficient.

3.2 Multi-Stakeholder Governance

Effective governance requires coordinated engagement among developers, users, policymakers, and civil society. Empirical evidence suggests that technical mitigation alone cannot fully prevent harm; organizational policies, user education, and independent auditing are equally critical. Multi-stakeholder governance models that balance innovation, safety, and privacy rights offer a promising approach, but implementing such models faces institutional and practical hurdles.

3.3 Ethical and Normative Considerations

Beyond compliance, ethical principles should guide AI deployment. Normative frameworks—such as transparency, accountability, fairness, and privacy by design—provide a blueprint for responsible development. However, aligning these ideals with economic incentives and operational constraints requires deliberate institutional design, including incentives for ethical behavior, penalties for violations, and mechanisms for public oversight.

4. Integrating Theory, Empirics, and Ethics

A critical synthesis of theory, case studies, and ethical analysis reveals that ChatGPT’s security and privacy challenges are systemic rather than isolated. Technical vulnerabilities, regulatory gaps, and societal harms interact dynamically, creating feedback loops that amplify risk. For example, a lack of transparency can undermine trust, increasing misuse potential and complicating regulatory enforcement. Addressing these interconnected risks demands interdisciplinary approaches that combine computer science, law, ethics, and social science perspectives.

Summary

ChatGPT embodies both the promise and peril of generative AI. The critical discussion highlights the systemic, ethical, and policy dimensions of security and privacy risks, emphasizing that technical fixes alone are insufficient. Effective mitigation requires a holistic approach: designing resilient systems, enforcing regulatory frameworks, embedding ethical principles, and fostering societal awareness. By acknowledging these complexities, stakeholders can better navigate the trade-offs between AI innovation, safety, and privacy, ensuring that ChatGPT and similar models serve societal interests responsibly.

VI. Policy and Practical Recommendations

Addressing the security and privacy risks associated with ChatGPT requires coordinated strategies that integrate technical, organizational, regulatory, and societal dimensions. The following recommendations offer actionable guidance for developers, organizations, policymakers, and users, aiming to enhance the safety, reliability, and ethical deployment of generative AI systems.

1. Technical Recommendations

1.1 Implement Robust Security Measures

Developers should adopt advanced security practices to mitigate model-level and system-level threats. Measures include:

  • Adversarial robustness testing: Regularly evaluate the model against prompt injection, jailbreak, and model inversion attacks to identify vulnerabilities.

  • Access control and authentication: Secure APIs, restrict permissions, and monitor anomalous usage to prevent unauthorized exploitation.

  • Secure development lifecycle: Apply rigorous security standards throughout model design, deployment, and integration, including dependency and supply chain audits.

1.2 Employ Privacy-Preserving Techniques

To reduce privacy risks, developers can integrate technical solutions such as:

  • Differential privacy: Add controlled noise to data used in training to prevent leakage of individual records.

  • Federated learning: Train models on decentralized user data without centralizing sensitive information.

  • Data minimization and anonymization: Limit collection of sensitive inputs and remove personally identifiable information wherever possible.

1.3 Enhance Transparency and Explainability

Providing users with interpretable information about model outputs and data usage can build trust and mitigate ethical concerns:

  • Publish clear documentation on model behavior, limitations, and potential risks.

  • Offer mechanisms for users to query why a particular response was generated.

  • Enable user controls for opting out of data collection or modifying stored data.

2. Organizational and Operational Recommendations

2.1 Develop Governance Frameworks

Organizations should establish internal policies to oversee AI deployment:

  • Risk assessment protocols: Regularly evaluate security, privacy, and societal risks associated with ChatGPT usage.

  • Audit and compliance teams: Implement independent audits to ensure adherence to internal standards and external regulations.

  • Incident response plans: Prepare procedures for addressing breaches, data leakage, or misuse incidents.

2.2 Foster User Awareness and Education

Users play a critical role in reducing risks:

  • Provide clear guidance on safe interactions with ChatGPT, emphasizing avoidance of sensitive information submission.

  • Conduct workshops or online tutorials for employees, students, or other target groups using ChatGPT in professional or educational contexts.

  • Promote awareness of common attacks, such as prompt injection, and safe data-handling practices.

3. Regulatory and Policy Recommendations

3.1 Strengthen Legal Frameworks

Policymakers should refine existing regulations to account for the unique characteristics of generative AI:

  • Clarify liability for AI outputs and data misuse across stakeholders, including developers, platform operators, and users.

  • Establish requirements for high-risk AI systems, encompassing transparency, privacy, and security standards.

  • Encourage harmonization of international standards to address cross-border data flows and cloud-based AI services.

3.2 Encourage Multi-Stakeholder Governance

Effective governance requires coordination among governments, industry, and civil society:

  • Promote collaborative bodies to develop and enforce AI safety and privacy standards.

  • Support independent oversight committees to review high-risk deployments.

  • Foster public-private partnerships for research on secure and privacy-preserving AI.

3.3 Incentivize Ethical Design

Policy tools can align economic incentives with ethical AI practices:

  • Offer certification or labeling for AI systems meeting stringent privacy and security criteria.

  • Provide funding or tax incentives for organizations implementing privacy-preserving techniques and robust security measures.

  • Penalize negligent practices that compromise user privacy or model security.

4. Societal and Cultural Recommendations

4.1 Promote Digital Literacy

Public understanding of AI risks is essential to societal resilience:

  • Integrate AI safety and privacy education into school curricula and professional training programs.

  • Develop public awareness campaigns highlighting responsible AI use and potential pitfalls.

4.2 Encourage Ethical AI Culture

Organizations should cultivate an internal culture prioritizing ethical AI deployment:

  • Establish codes of conduct emphasizing fairness, privacy, and security.

  • Encourage whistleblower mechanisms and reporting channels for ethical concerns.

Summary

The safe deployment of ChatGPT requires a comprehensive, multi-layered strategy. Technical defenses, privacy-preserving techniques, and transparency measures must be coupled with robust organizational governance, user education, and regulatory oversight. By combining these approaches, stakeholders can mitigate security and privacy risks, foster trust, and ensure that generative AI serves the public interest responsibly. The integration of ethical, legal, and practical safeguards transforms potential vulnerabilities into opportunities for safer, more equitable AI innovation.

Conclusion

ChatGPT represents a transformative advance in artificial intelligence, offering unprecedented capabilities in natural language understanding and generation. However, this technological progress brings significant security and privacy challenges. Through the analysis of model-level vulnerabilities, system and application threats, privacy risks, and real-world case studies, this article has demonstrated that the risks associated with ChatGPT are multi-dimensional, encompassing technical, legal, ethical, and societal domains. Adversarial attacks, data leakage, misuse, and regulatory gaps highlight that both developers and users must navigate a complex risk landscape to ensure safe and responsible AI deployment.

A critical examination reveals that mitigating these risks requires a holistic approach. Technical measures such as adversarial robustness, differential privacy, and secure system design must be combined with organizational governance, user education, and transparent operational practices. Moreover, robust regulatory frameworks and multi-stakeholder governance structures are essential to ensure compliance, accountability, and societal trust. Ethical principles—including fairness, transparency, and privacy by design—should guide AI development, deployment, and oversight.

Ultimately, ChatGPT’s potential can only be fully realized if stakeholders address security and privacy risks proactively. By integrating technical safeguards, ethical design, regulatory enforcement, and public awareness, society can harness the benefits of generative AI while minimizing harms. The lessons learned from ChatGPT provide a blueprint for the responsible adoption of emerging AI technologies, emphasizing that innovation must be balanced with safety, accountability, and respect for individual privacy.

References

  1. OpenAI. (2023). ChatGPT: Technical Report. OpenAI.

  2. Carlini, N., Tramer, F., Wallace, E., et al. (2022). Extracting Training Data from Large Language Models. USENIX Security Symposium.

  3. Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? Proceedings of FAccT.

  4. European Commission. (2023). Proposal for a Regulation Laying Down Harmonised Rules on Artificial Intelligence (AI Act). European Union.

  5. Zhang, J., Zhao, Y., & Lu, W. (2023). Security and Privacy Risks of Generative AI: Case Studies and Mitigation Strategies. Journal of Cybersecurity Research, 10(2), 45–72.

  6. Vincent, J. (2022). Prompt Injection and Jailbreak Attacks on Large Language Models. MIT Technology Review.

  7. OECD. (2021). Recommendation on Artificial Intelligence. Organisation for Economic Co-operation and Development.

  8. Liu, P., Qiu, X., & Huang, X. (2023). Empirical Analysis of Privacy Risks in Large Language Models. ACM Transactions on Information Systems, 41(3), 1–28.