OpenAI has introduced new policies for ChatGPT to improve user safety and reduce misuse. As reported by Nexta, the update bans AI-based facial or personal recognition without consent and blocks any activity that could lead to academic dishonesty.
The company says the goal is to protect users and prevent people from relying on the system for things it was not designed to handle.

AI’s new role as a learning tool
Under the new rules, ChatGPT will focus on explaining concepts, outlining general methods, and guiding users to qualified professionals. It will no longer mention specific medicines or dosages, prepare legal documents, or suggest investment actions such as buying or selling shares.
Attempts to get around these restrictions, even by asking hypothetical questions, are now stopped by safety filters. These changes follow public concern about people using AI chatbots instead of experts in medicine, law, and finance.
ChatGPT conversations are not legally protected like those between doctors and patients or lawyers and clients. This means they could be accessed or used in court.
More safety features for users in distress
OpenAI has also added features to support users dealing with mental health challenges. The system is now better at recognizing signs of distress related to psychosis, mania, self-harm, or suicidal thoughts and will direct users to professional help rather than attempting to handle such cases itself.
Nexta noted that ChatGPT will no longer name medications, give dosages, create lawsuit drafts, or offer investment tips.
Why these limits matter
ChatGPT is still valuable for learning, explaining ideas, and summarizing information. But it cannot replace professional judgment. It cannot read emotions, show empathy, or guarantee safety.
Anyone facing a crisis should contact trained professionals, for example by calling 988 in the United States, instead of turning to an AI chatbot. The same applies to financial and legal issues: while ChatGPT can define terms and explain general concepts, it cannot understand your personal situation or regional laws.
Relying on AI to prepare legal or financial documents can lead to errors that may cost money or cause legal problems. ChatGPT also cannot respond to emergencies, detect hazards, or provide live updates. Its online information may be incomplete or outdated.
Users are strongly advised not to share personal or sensitive information such as financial details, medical records, or private contracts, as there is no guarantee that this data will stay secure.
