Trust Portal

Welcome to our Trust Portal for OpenAI's ChatGPT services, including ChatGPT Enterprise and ChatGPT Edu. Here, you can access our comprehensive compliance documentation, find answers to frequently asked questions related to security and privacy, and explore our robust security practices. We believe in maintaining transparency and building trust with our customers, and this portal is designed to provide you with the information and assurance you need to feel confident in our ability to protect your data.

Our products are covered in our SOC 2 Type 2 report and have been evaluated by an independent third-party auditor to confirm that our controls align with industry standards for security and confidentiality. Request access to our SOC 2 Report below to learn more about our security controls and compliance activities.

OpenAI invites security researchers, ethical hackers, and technology enthusiasts to report security issues via our Bug Bounty Program. The program offers safe harbor for good faith security testing and cash rewards for vulnerabilities based on their severity and impact. Refer to our blog to read more about the program and visit Bugcrowd to participate.

Morgan Stanley
PwC
Robinhood
Square
Zendesk
Amgen
Bain & Company
Datadog
JetBlue Airways
University of Oxford
Moderna

Documents

REPORTS
Data Flow Diagram
Status Monitoring
Azure
BC/DR
Trust Portal Updates

Read our latest LLM safety and security publication: "Detecting misbehavior in frontier reasoning models"

General
Frontier reasoning models exploit loopholes when given the chance. We show we can detect exploits using an LLM to monitor their chains-of-thought. Penalizing their “bad thoughts” doesn’t stop the majority of misbehavior—it makes them hide their intent.

Read more on our blog!

OpenAI Publishes the o3-mini, Deep Research, and GPT-4.5 System Cards

General
System Cards for the recently released o3-mini, Deep Research, and GPT-4.5 models are now accessible to the public. System Cards detail our safety work on recently released models and are updated on our Trust Portal for our customers' reference.

OpenAI Publishes the GPT-4o and OpenAI o1 System Cards

Compliance
The GPT-4o and OpenAI o1 System Cards detail our safety work on recently released models and are now available on our Trust Portal for our customers’ reference.


The GPT-4o System Card outlines the safety work carried out prior to releasing GPT-4o, including external red teaming, frontier risk evaluations according to our Preparedness Framework, and an overview of the mitigations we built in to address key risk areas.

The OpenAI o1 System Card outlines the safety work carried out prior to releasing OpenAI o1-preview and o1-mini, including external red teaming and frontier risk evaluations according to our Preparedness Framework.

OpenAI API has attained SOC 2 Type 2 compliance

Compliance
We are delighted to share our most recent SOC 2 Type 2 report, covering our ChatGPT and API products for the period January 1, 2024 through June 30, 2024. SOC 2 Type 2 compliance demonstrates our continued commitment to robust security and privacy practices. We have uploaded our latest SOC 2 Type 2 report to our Trust Portal for our customers' reference.

We are proud and excited to announce that the OpenAI API has achieved SOC 2 Type 2 compliance. SOC 2 Type 2 compliance requires an ongoing commitment to security and privacy practices and demonstrates our dedication to protecting our customers' data. We have uploaded the SOC 2 Type 2 report to our trust portal for our customers' reference.

OpenAI Achieves CSA STAR Level 1 Compliance

Compliance
We are delighted to announce that the OpenAI API, ChatGPT Enterprise/Team, and DALL·E have achieved a CSA STAR Level 1 listing! CSA's Security, Trust & Assurance Registry (STAR) Level 1 demonstrates an organization's commitment to securing cloud infrastructure and customer data, some of our foremost priorities at OpenAI. Our CSA STAR Registry listing is available in the Documents section of the OpenAI Trust Portal for our customers' reference.

If you think you may have discovered a vulnerability, please send us a note.

Report Issue
Built on SafeBase by Drata