Trust Portal

Start your security review
View & download sensitive information
ControlK

Welcome to our Trust Portal for OpenAI's ChatGPT services, including ChatGPT Enterprise and ChatGPT Edu. Here, you can access our comprehensive compliance documentation, find answers to frequently asked questions related to security and privacy, and explore our robust security practices. We believe in maintaining transparency and building trust with our customers, and this portal is designed to provide you with the information and assurance you need to feel confident in our ability to protect your data.

Our products are covered in our SOC 2 Type 2 report and have been evaluated by an independent third-party auditor to confirm that our controls align with industry standards for security, confidentiality, privacy and availability. Our products are also ISO 27001, 27017, 27018, and 27701 certified. Request access to our SOC 2 Report below to learn more about our security controls and compliance activities.

OpenAI invites security researchers, ethical hackers, and technology enthusiasts to report security issues via our Bug Bounty Program. The program offers safe harbor for good faith security testing and cash rewards for vulnerabilities based on their severity and impact. Refer to our blog to read more about the program and visit Bugcrowd to participate.

Morgan Stanley-company-logoMorgan Stanley
PwC-company-logoPwC
Robinhood-company-logoRobinhood
Square-company-logoSquare
Zendesk-company-logoZendesk
Amgen-company-logoAmgen
Bain & Company-company-logoBain & Company
Datadog-company-logoDatadog
JetBlue Airways-company-logoJetBlue Airways
University of Oxford-company-logoUniversity of Oxford
Moderna-company-logoModerna

Documents

REPORTSData Flow Diagram
Model Evaluations, Fairness & Bias
AI Security
Model Pretraining
View more
Status Monitoring
Azure
BC/DR
View more
Trust Portal Updates

OpenAI's ISO/IEC 27001 Certificate is now available at trust.openai.com

Copy link
Compliance

We are excited to announce that OpenAI has received an ISO/IEC 27001 Certificate, available for public viewing at trust.openai.com under "Documents."

This certificate documents OpenAI's operation an Information Security Management System that conforms to the requirements of ISO/IEC 27001:2022 for OpenAI’s API, ChatGPT Enterprise, and ChatGPT Edu services.

Control implementation also conforms to additional control sets of ISO/IEC 27017:2015 and ISO/IEC 27018:2019 and extends to include the PIMS requirements, control implementation guidance, and additional control set of ISO/IEC 27701:2019.

The 2025 SOC2 Report for OpenAI's ChatGPT Business Products and API is now available to customers at trust.openai.com

Compliance

OpenAI's most recent SOC2 Report covers the period of January 1, 2025 to June 30, 2025 and is now available for viewing on the ChatGPT Business Products and API Trust Portal pages.

We are proud to share that this report covered controls relevant to the Security, Availability, Confidentiality, and Privacy Trust Services Criteria for the API Platform, ChatGPT Enterprise, ChatGPT Edu, and ChatGPT Team.

Customers with active trust.openai.com accounts can access the latest report under "Documents."

Read our latest LLM safety and security publication: "Detecting misbehavior in frontier reasoning models"

General

Frontier reasoning models exploit loopholes when given the chance. We show we can detect exploits using an LLM to monitor their chains-of-thought. Penalizing their “bad thoughts” doesn’t stop the majority of misbehavior—it makes them hide their intent.

Read more on our blog!

OpenAI Publishes the o3-mini, Deep Research, and GPT 4.5 System Cards

General

System Cards for the recently released o3-mini, Deep Research, and GPT 4.5 models are now accessible to the public. System Cards detail our safety work on recently released models and are updated on our Security Portal for our customers’ reference.

OpenAI Publishes the GPT-4o and OpenAI o1 System Cards

Compliance

The GPT-4o and OpenAI o1 System Cards detail our safety work on recently released models and are now available on our Trust Portal for our customers’ reference.


The GPT-4o System Card outlines the safety work carried out prior to releasing GPT-4o including external red teaming, frontier risk evaluations according to our Preparedness Framework, and an overview of the mitigations we built in to address key risk areas.



The OpenAI o1 System Card outlines the safety work carried out prior to releasing OpenAI o1-preview and o1-mini, including external red teaming and frontier risk evaluations according to our Preparedness Framework.

If you think you may have discovered a vulnerability, please send us a note.
Report issue
Built onSafeBase by Drata Logo