Why our conversations with AI deserve the same legal protection as doctor-patient and attorney-client privilege
The 2 AM confession bot
You know the feeling. Middle of the night, everyone's asleep, but your mind is racing. With clammy hands, you open ChatGPT and type:
"I think I committed tax fraud. What's the best way to make this right?"
Five seconds later, there's a calm step-by-step plan on your screen. Relief – and then the panic: what happens to this confession if the tax authorities or a civil plaintiff subpoena the server logs tomorrow?
This isn't a doomsday scenario. Only last week, OpenAI CEO Sam Altman said outright that conversations with his models are covered by no legal professional privilege and can therefore be demanded in court [1]. Your most intimate prompts are, legally speaking, just corporate data.
The legal gap behind the hype
Most Europeans rely on two layers of protection:
- GDPR for privacy of personal data;
- professional privileges of doctors, lawyers, or clergy when things get really sensitive.
Generative AI falls into neither category. The GDPR regulates processing but doesn't prohibit courts from demanding information. The new EU AI Act even requires providers of high-risk systems to retain system and user logs for at least six months [2]. Retention means availability for discovery or judicial seizure: exactly what you don't want for intimate prompts.
In the Netherlands, under Article 843a of the Code of Civil Procedure, a party can demand "a copy or extract of records" held by another party or by a third party. Chat logs fall squarely within its scope. The Code of Criminal Procedure grants the Public Prosecution Service similar powers.
In short: anyone typing something confidential into a chatbot now risks it appearing in court documents tomorrow.
Professionals caught in the squeeze
Professionals who do fall under privilege are equally trapped. The American Bar Association warned attorneys in July 2024 that entering client data into a public AI model can constitute a waiver, i.e. an abandonment of privilege [3]. European bar associations issue similar warnings.
Doctors hear the same from regulators; the Dutch Data Protection Authority speaks of a "high risk of data breaches" when chatbots are used for medical triage [4].
Proposal: Algorithmic Confidentiality Privilege (ACP)
I advocate for an independent, legally anchored confidential relationship between user and AI: the Algorithmic Confidentiality Privilege.
Definition
The ACP is the user's right to keep the content of their interaction with a qualified AI service confidential, so that it cannot be demanded as evidence or as an object of investigation without the user's consent or a compelling exception.
Framework
| Element | Proposal |
| --- | --- |
| Privilege holder | The user, not the provider. Only the user can waive the privilege. |
| Scope | All prompts, uploads, audio, generated responses, and personal inferences derived by the AI. |
| Requirements | The AI service is certified, applies end-to-end encryption, and retains logs for at most 30 days (longer only at the user's request); see the sketch below the table. |
| Exceptions | Crime-fraud rule (AI used to plan or commit crimes); acute threat to life or safety; national security under judicial oversight. |
| Evidence law | In civil or criminal proceedings, logs can be demanded only if the user consents or the court establishes that an exception applies. |
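To make the requirements row concrete, here is a minimal sketch in Python of what the 30-day retention rule could look like on the provider side. Every name here (`ConversationLog`, `purge_expired`, the retention and waiver flags) is a hypothetical illustration of the proposal, not a prescribed implementation.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

RETENTION_DAYS = 30  # the ACP maximum proposed above

@dataclass
class ConversationLog:
    user_id: str
    created_at: datetime                    # stored as a timezone-aware timestamp
    user_requested_retention: bool = False  # only the user may extend retention
    privilege_waived: bool = False          # only the user may waive the privilege

def is_expired(log: ConversationLog, now: datetime) -> bool:
    """A log is due for destruction once it exceeds the retention window,
    unless the user explicitly asked to keep it longer."""
    if log.user_requested_retention:
        return False
    return now - log.created_at > timedelta(days=RETENTION_DAYS)

def purge_expired(logs: list[ConversationLog]) -> list[ConversationLog]:
    """The daily destruction job a certified provider might run."""
    now = datetime.now(timezone.utc)
    return [log for log in logs if not is_expired(log, now)]
```

The key design point is that both flags belong to the user record, never to the provider: the code path that extends retention or waives privilege can only be reached through an explicit user action.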
Six concrete risks without ACP
Without legal protection, we face these scenarios:
| Year | Country | Situation | Loss without privilege |
| --- | --- | --- | --- |
| 2026 | US | A prime suspect discusses his alibi with a copilot that saves transcripts. | The prosecutor subpoenas the logs, the alibi proves false, sentence enhancement follows. |
| 2026 | EU | A startup feeds secret R&D data into ChatGPT for code review. | A patent troll sues OpenAI, obtains the prompts, and copies the innovation. |
| 2027 | Netherlands | A mental health clinic experiments with AI self-help for eating disorders. | An insurer files an FOI request and obtains anonymized but de-anonymizable chat records. Clients drop out of treatment. |
| 2027 | Japan | An employee confesses to racist incidents in a company chatbot. | The company fires him as a "whistleblower risk" after the logs leak via a compliance audit. |
| 2028 | India | A farmer asks an AI for advice on illegal seeds. | Police subpoena the chat history and use the confession as sole evidence; fine and imprisonment. |
| 2029 | Germany | A medical specialist enters patient symptoms into a public model; the model saves the case. | The patient recognizes the details in a different context and sues the doctor for breaching medical confidentiality. |
How ACP fits into legislation
European level
AI Act
Add an article 19-bis stating that service providers registered as "ACP providers" may anonymize their interactions and must then delete them within 30 days. In return, the logging obligations of art. 19 apply not to personal data but to anonymized metadata.
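As an illustration of what metadata-only logging under such an article 19-bis could look like, here is a sketch. The field names and the daily-salt scheme are assumptions of mine, not anything in the AI Act; note also that a salted hash is strictly pseudonymization, so the fully anonymous option would be to drop the user field altogether.

```python
import hashlib
from datetime import datetime, timezone

def acp_log_record(user_id: str, prompt: str, response: str,
                   model: str, daily_salt: str) -> dict:
    """Derive the anonymized operational metadata an ACP provider would
    retain instead of raw prompts. The daily salt would be rotated and
    discarded, so the user hash cannot be reversed or linked across days."""
    return {
        "day": datetime.now(timezone.utc).date().isoformat(),  # coarse timestamp only
        "model_version": model,
        "prompt_chars": len(prompt),        # volume metrics for auditing, no content
        "response_chars": len(response),
        "user_bucket": hashlib.sha256(
            (daily_salt + user_id).encode()
        ).hexdigest()[:12],                 # counts distinct users without naming them
    }
```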
ePrivacy Regulation (still under negotiation)
Extend the provision on the confidentiality of electronic communications to "human-machine dialogue", provided the provider is certified.
Digital Services Act
Distinguish "privileged content" in the confidentiality provisions. Platforms hosting AI conversations get a separate notice-and-action procedure in which the user is heard beforehand.
Dutch framework
The Dutch legal system has a rich tradition of privilege rights, anchored in Article 218 of the Code of Criminal Procedure. Professionals such as doctors, lawyers, notaries, and clergy have the right not to share confidential information. This right distinguishes between substantive and procedural privilege [6]:
- Substantive privilege: Protects the content of confidential communication itself
- Procedural privilege: Protects the right to refuse providing information in legal proceedings
The new 2025 Privilege Directive introduces rules for digital confidential information but also exposes vulnerabilities: filtering occurs without direct judicial oversight, and professionals have no formal role in the assessment [6]. The ACP must close these gaps.
Act on Advocates art. 11a and Code of Criminal Procedure art. 218
Extend both the duty of confidentiality (art. 11a) and the substantive and procedural privilege (art. 218 CCP): data entered by or on behalf of a client into a Dutch Bar Association-certified AI system falls under both forms of protection.
Medical Treatment Agreements Act (WGBO)
Add a provision to art. 7:457 of the Civil Code stating that "electronic triage and consultation via recognized AI systems" falls under medical professional secrecy, including substantive and procedural privilege, provided the provider holds the ACP certificate.
Code of Civil Procedure art. 843a
Introduce a ground for refusal: documents covered by the ACP are not discoverable unless the court establishes a compelling public interest, with the "concrete and objective reasonable suspicion" test from the new Directive as the minimum standard [6].
GDPR Implementation Act
Stipulate that certified ACP providers are subject to a standard destruction obligation after 30 days, unless the user wants longer retention. Data may be used for model training only at an aggregated level.
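What "aggregated level" could mean in practice is sketched below: per-conversation labels are reduced to bare counts, and small categories are suppressed with a k-anonymity-style threshold. The threshold value and function names are illustrative assumptions, not part of the proposal text.

```python
from collections import Counter

K_THRESHOLD = 20  # illustrative: suppress categories seen fewer than k times

def aggregate_for_training(topic_labels: list[str]) -> dict[str, int]:
    """Reduce per-conversation topic labels to aggregate counts before any
    training use; categories below the threshold are dropped so that rare,
    potentially identifying topics never reach the training pipeline."""
    counts = Counter(topic_labels)
    return {topic: n for topic, n in counts.items() if n >= K_THRESHOLD}

# Example: only "taxes" survives aggregation; the rare topic is suppressed.
print(aggregate_for_training(["taxes"] * 25 + ["rare medical condition"] * 3))
```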
What does it deliver?
Psychological safety
A chatbot feels safer than a human. Young people are more likely to reveal their sexual orientation or suicidal thoughts to an AI than to their parents or family doctor. Without a legal shield, they fear their words will resurface in a file or at a benefits agency. The ACP provides certainty that vulnerable information won't be shared involuntarily.
Fair legal proceedings
In the US, we've already seen requests for complete chat logs in civil discovery. Soon the party with the most expensive lawyers will be able to demand prompts specifically to find a weak spot. A level playing field requires that private conversations aren't simply laid bare.
Innovation with clear frameworks
Healthcare providers and lawyers want to deploy AI. For now they're held back by disciplinary risks (breaching secrecy) or by insurers claiming that AI use undermines professional standards. The ACP creates a clear compliance framework.
Equal access
Large multinationals buy "enterprise versions" with their own NDAs and EU servers. An individual citizen or SME doesn't have that luxury. A public guarantee prevents a divide between expensive privileged tiers and a digital wild west.
Common objections and responses
| Objection | Response |
| --- | --- |
| Criminals can safely plot with AI. | No. The crime-fraud exception remains: as soon as the AI is used to plan crimes, the user loses the privilege. |
| End-to-end encryption hinders investigation. | No more than it already does with medical records. Justice can still make targeted demands after judicial authorization, but not go mass fishing. |
| Too expensive for small providers. | Government can offer open-source toolkits and reference architectures. Certification can be proportional to company size. |
| Legislation is national, AI is global. | That's exactly why the EU must lead and then export the ACP via adequacy agreements, comparable to the GDPR effect. |
Roadmap toward implementation
Pilot programs
Start within healthcare and legal aid. Measure whether patients and clients dare to be more open and whether professionals are less hesitant.
Public consultation
Involve civil rights organizations, professional groups, regulators, and tech companies. This allows exceptions and certification requirements to be fine-tuned.
European coalition
Form an ACP taskforce under the AI Office to draft regulatory texts that can later be inserted into the AI Act or a standalone regulation.
International coordination
Invite Canada, Japan, and Australia to form a "trust alliance": mutual recognition of ACP certificates and non-interference with privileges.
Public awareness
Require clear interface indicators: a lock icon that lights up when the user is working within an ACP environment. Schools and employers also need educational materials on the difference between regular and privileged AI channels.
Final plea
The right to confidential consultation isn't a historical curiosity but a foundation of freedom and dignity. Doctors, lawyers, and priests were granted secrecy because a healthy society needed them. Today the AI assistant does part of their work. Yet we let it operate without legal protection.
With the Algorithmic Confidentiality Privilege, we bring the foundations of professional secrecy to the digital age. The privilege isn't a free pass for criminals but a shield for honest citizens seeking advice or comfort in complete openness. It draws a clear line: what takes place in confidential dialogue stays private, unless there's compelling societal interest to break that silence.
Let's not wait for the first scandal of leaked chat logs in a courtroom or on social media. Europe can take the lead now, and the Netherlands can be the testing ground. The technical solutions already exist; what's missing is the political courage to give this new form of human-machine intimacy the protection it deserves.
Anyone who wants to speak truth to their algorithm should be able to do so without fear. It's time to anchor that right in law.