OpenAI, the artificial intelligence research company, has introduced a new archiving feature for its generative AI offering, ChatGPT. The update lets users archive conversations, decluttering the sidebar without permanently deleting any chats. The feature is currently available on the web and iOS versions of ChatGPT, and OpenAI says support for the Android version will roll out in the near future.
A Closer Look at ChatGPT Archiving
In a recent announcement, OpenAI shared details of the newly added archiving feature in ChatGPT. According to the company’s post on X (formerly Twitter), users can now archive their chats, removing them from the sidebar without deleting the conversation itself. Archived chats remain accessible through the Settings menu. The feature, initially launched for the web and iOS versions, is set to extend to Android soon.
OpenAI’s Commitment to Mitigating ‘Catastrophic Risks’
Beyond the enhancement of ChatGPT, OpenAI has taken a significant step toward addressing potential risks associated with powerful language models. The company recently unveiled a comprehensive 27-page document titled ‘Preparedness Framework.’ This framework outlines OpenAI’s systematic processes for monitoring, assessing, predicting, and safeguarding against ‘catastrophic risks’ that may arise from advanced language models.
Defining Catastrophic Risks in the Preparedness Framework
In its Preparedness Framework, OpenAI defines catastrophic risks as potential threats that could result in extensive economic damage or severe harm to many individuals, including loss of life. The framework explicitly includes existential risks in this definition, emphasizing that catastrophic outcomes encompass a range of possibilities beyond economic harm.
Focus Areas in Assessing Frontier AI Models
The preparedness team at OpenAI will assess the upcoming Frontier AI models in four key categories:
Cybersecurity: Evaluating the models’ resilience against potential cyber threats.
CBRN Threats: Addressing chemical, biological, radiological, and nuclear threats to ensure safety.
Persuasion: Examining the potential influence and persuasion capabilities of the language models.
Model Autonomy: Assessing the autonomy levels of the models to prevent unintended consequences.
OpenAI emphasizes that only models with a risk score of medium or lower across these categories will be allowed to progress in development.
Strengthening Safety Measures with Advisory Oversight
In addition to the framework, OpenAI is establishing a cross-functional safety advisory group. This group will review all reports related to safety concerns and present its findings to the executive team and the board of directors. While the executive team holds decision-making authority, OpenAI notes that the board has the power to overrule those decisions if necessary.
Though OpenAI hasn’t explicitly named the decision-makers, it is reasonable to assume that key figures such as CEO Sam Altman, President Greg Brockman, and CTO Mira Murati will play pivotal roles in this decision-making process. As OpenAI continues to advance in AI development, these measures underscore the company’s commitment to responsible and safe AI deployment.