
In a widely discussed podcast episode featuring comedian Theo Von and OpenAI CEO Sam Altman, Altman made a striking point: people are increasingly confiding in ChatGPT, sharing personal stories and emotional struggles, and even seeking legal or psychiatric guidance. He warned that these conversations are not covered by any form of legal privilege, unlike speaking to a lawyer or doctor, and could potentially be used in legal proceedings if requested by authorities.
We have also noticed among those around us that people increasingly use ChatGPT as a companion, seeking psychiatric comfort, informal legal counsel, or daily advice. This blurring of the line between “tool” and “trusted confidant” makes Altman’s warning highly relevant for Saudi Arabia: What legal status do these AI conversations hold here, and could they be used as evidence in court?
I. Current Saudi Data Protection & AI Governance Framework
Saudi Arabia has made sweeping moves to regulate data and AI through the Personal Data Protection Law (PDPL) (Royal Decree No. M/19 of 2021) and the Saudi Data & Artificial Intelligence Authority (SDAIA). The PDPL governs how personal data is collected, processed, and shared, imposing obligations on data controllers and giving individuals rights to consent, access, and erasure.
Complementing the PDPL, the National Strategy for Data & AI and SDAIA’s ethical AI principles promote transparency, accountability, and responsible use. But none of these frameworks address “legal privilege” in the evidentiary sense; they regulate data privacy, not whether an AI conversation is shielded from use in litigation or prosecution.
II. The Privilege Gap
Traditional legal privilege in Saudi Arabia is narrow and profession-specific. Lawyer–client communications are treated as confidential under the Law of Advocacy; doctor–patient communications are similarly protected under health regulations. AI chats fit neither category. A conversation with ChatGPT is not the same as seeking advice from a licensed lawyer or physician, and there is no statutory or doctrinal privilege that would prevent disclosure of AI chat content if requested by a court or regulator. The “trust” users feel toward AI is, legally speaking, misplaced.
III. Can ChatGPT Conversations Be Used as Evidence in Saudi Courts?
The short answer is: potentially, depending on the circumstances.
Under the Saudi Law of Evidence (Royal Decree No. M/43 of 2021), which came into force in July 2022, “electronic data, digital records, and communications” are broadly recognized as admissible if certain conditions are met. This category covers emails, text messages, and app chats and, by extension, could encompass AI-generated chat logs. However, admissibility is not automatic.
Key considerations:
- Authenticity & Integrity: Any party wishing to submit a ChatGPT transcript would need to demonstrate that the record is genuine, complete, and untampered with, which may require technical evidence, provider authentication, or forensic support.
- Context of Use: Courts would weigh the relevance and probative value of such conversations. For instance, chats could be examined for intent, admissions, or patterns of behaviour, but their weight as evidence might be debated.
- Judicial Discretion: Saudi judges have wide latitude under the Law of Evidence to accept or reject materials depending on credibility and the surrounding facts.
Crucially, there is still no recognized “AI privilege” in Saudi law. If regulators such as the Public Prosecution, ZATCA, or the CMA formally request AI chat records, providers could be compelled to disclose them. The PDPL ensures that personal data is processed lawfully and securely, but it does not shield those records from lawful court or investigative demands.
Both the U.S. and the EU are grappling with the same issue Saudi Arabia faces: AI chats are protected as “data” but not privileged as “confidential communications.” No jurisdiction yet grants AI conversations the same legal shield as a discussion with a doctor or lawyer.
IV. Policy and Legal Reform Considerations
Sam Altman’s comments spotlight a legal vacuum: users often treat AI like a “trusted advisor,” yet the law sees it as an unprotected channel. Saudi policymakers may consider:
- Clarifying rules for “AI privilege,” particularly where AI is used in legal, medical, or religious professional contexts.
- Enhancing PDPL guidance to require explicit user warnings that AI chats are not privileged.
- Establishing procedural protections for certain categories of AI-stored data (e.g., enterprise-licensed legal tools) to avoid indiscriminate disclosure.
V. Conclusion
Sam Altman’s warning may have been global, but it resonates sharply in Saudi Arabia. The Kingdom has cutting-edge data laws and an ambitious AI strategy, but no legal privilege yet attaches to AI conversations. Under Saudi law, a ChatGPT chat is an electronic record, and if its authenticity is proven, it can be admitted as evidence in court.
In the meantime, the safest stance is simple: treat every AI conversation as if it might one day be read aloud in court.