Welcome to our Trust Portal for OpenAI's ChatGPT services, including ChatGPT Enterprise and ChatGPT Edu. Here, you can access our comprehensive compliance documentation, find answers to frequently asked questions related to security and privacy, and explore our robust security practices. We believe in maintaining transparency and building trust with our customers, and this portal is designed to provide you with the information and assurance you need to feel confident in our ability to protect your data.
Our products are covered in our SOC 2 Type 2 report and have been evaluated by an independent third-party auditor to confirm that our controls align with industry standards for security and confidentiality. Request access to our SOC 2 Report below to learn more about our security controls and compliance activities.
OpenAI invites security researchers, ethical hackers, and technology enthusiasts to report security issues via our Bug Bounty Program. The program offers safe harbor for good faith security testing and cash rewards for vulnerabilities based on their severity and impact. Refer to our blog to read more about the program and visit Bugcrowd to participate.
Documents
Read our latest LLM safety and security publication: "Detecting misbehavior in frontier reasoning models"
Frontier reasoning models exploit loopholes when given the chance. We show that we can detect these exploits by using an LLM to monitor their chains-of-thought. Penalizing their “bad thoughts” does not stop most misbehavior; instead, it causes models to hide their intent.
Read more on our blog!
System Cards for the recently released o3-mini, Deep Research, and GPT-4.5 models are now publicly accessible. System Cards detail our safety work on recently released models and are updated on our Trust Portal for our customers’ reference.
The GPT-4o and OpenAI o1 System Cards detail our safety work on recently released models and are now available on our Trust Portal for our customers’ reference.
The GPT-4o System Card outlines the safety work carried out prior to releasing GPT-4o, including external red teaming, frontier risk evaluations conducted according to our Preparedness Framework, and an overview of the mitigations we built in to address key risk areas.
The OpenAI o1 System Card outlines the safety work carried out prior to releasing OpenAI o1-preview and o1-mini, including external red teaming and frontier risk evaluations according to our Preparedness Framework.
We are delighted to share our most recent SOC 2 Type 2 report, covering our ChatGPT and API products for the period from January 1, 2024, to June 30, 2024. SOC 2 Type 2 compliance demonstrates our continued commitment to robust security and privacy practices. We have uploaded our latest SOC 2 Type 2 report to our Trust Portal for our customers' reference.
We are proud and excited to announce that the OpenAI API has achieved SOC 2 Type 2 compliance. SOC 2 Type 2 compliance requires an ongoing commitment to security and privacy practices and demonstrates our dedication to protecting our customers' data. We have uploaded the SOC 2 Type 2 report to our Trust Portal for our customers' reference.
We are delighted to announce that the OpenAI API, ChatGPT Enterprise/Teams and DALL·E have achieved a CSA STAR Level 1 listing! CSA’s Security, Trust & Assurance Registry Level 1 demonstrates an organization’s commitment to securing cloud infrastructure and customer data, some of our foremost priorities at OpenAI. Our CSA STAR Registry listing is available in the Documents section of the OpenAI Trust Portal for our customers’ reference.
If you think you may have discovered a vulnerability, please send us a note.