CHATGPT is about to get a whole lot naughtier.
The hugely popular smart chatbot is getting ready to reverse its ban on AI erotica.
Sam Altman, the boss of owner OpenAI, said it was time to start treating “adult users like adults”.
The saucy shift comes as the biggest name in AI fights to stay relevant amid a surge of rivals.
Over the summer, a rival AI unleashed the ability to generate virtual girlfriends that users can have sexual conversations with.
Now OpenAI says it’s able to “safely relax the restrictions” meaning users can engage in erotic chats with the bot.
“We made ChatGPT pretty restrictive to make sure we were being careful with mental health issues,” Altman explained.
“We realize this made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right.
“Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
Among the changes coming in a few weeks is the ability to make ChatGPT respond in a very human-like way if desired.
Then the more adult shake-up will follow in December.
“As we roll out age-gating more fully and as part of our ‘treat adult users like adults’ principle, we will allow even more, like erotica for verified adults,” Altman added.
But the features will be restricted to accounts owned by adults.
A rep for the firm told TechCrunch that it’ll use an age-prediction system to ensure that adult accounts are being used by over-18s.
Back in April, the publication found that users registered as minors were able to generate graphic erotica.
It was identified as a bug that was later fixed by OpenAI.

What are the arguments against AI?
Artificial intelligence is a highly contested issue, and it seems everyone has a stance on it. Here are some common arguments against it:
Loss of jobs – Some industry experts argue that AI will create new niches in the job market, and as some roles are eliminated, others will appear. However, many artists and writers argue the issue is an ethical one, as generative AI tools are being trained on their work and wouldn’t function otherwise.
Ethics – When AI is trained on a dataset, much of the content is taken from the internet. This is almost always, if not exclusively, done without notifying the people whose work is being taken.
Privacy – Content from personal social media accounts may be fed to language models to train them. Concerns have cropped up as Meta unveils its AI assistants across platforms like Facebook and Instagram. There have been legal challenges to this: in 2016, the EU adopted the General Data Protection Regulation to protect personal data, and similar laws are in the works in the United States.
Misinformation – As AI tools pull information from the internet, they may take things out of context or suffer hallucinations that produce nonsensical answers. Tools like Copilot on Bing and Google’s generative AI in search are always at risk of getting things wrong. Some critics argue this could have lethal effects – such as an AI giving out the wrong health information.