OpenAI is making its flagship conversational AI available to anyone, even people who haven’t bothered to create an account. It won’t be quite the same experience, however – and of course all your chats will still go into the company’s training data unless you opt out.
Starting today in several markets and gradually rolling out to the rest of the world, visiting chat.openai.com will no longer require logging in – though you still can if you want. Instead, you’ll be dropped straight into a chat with ChatGPT, which uses the same model as the logged-in experience.
You can chat as much as you want, but keep in mind you don’t get the same set of features as people with accounts. You won’t be able to save or share chats, use custom instructions, or take advantage of other things that generally need to be tied to a persistent account.
That said, you still have the option of opting out of having your chats used for training purposes (which, as you might suspect, undercuts the whole reason the company is doing this). Just click the little question mark in the lower right corner, then click “Settings” and turn the feature off there. OpenAI offers a helpful GIF showing the steps.
More importantly, this ultra-free version of ChatGPT will have a “slightly more restrictive content policy.” What does that mean? I asked, and received a long-winded but largely meaningless response from a spokesperson:
The logged-out experience will benefit from the existing safety mitigations that are already built into the model, such as refusing to generate harmful content. In addition to these existing mitigations, we are also implementing additional safeguards specifically designed to address other forms of content that may be inappropriate for a logged-out experience.
We have considered the potential for misuse of the logged-out service, informed by our understanding of GPT-3.5’s capabilities and the risk assessments we have completed.
So… I really have no idea what exactly these stricter rules are. We’ll no doubt find out soon, as an avalanche of randos descends on the site to kick the tires on this new offering. “We know additional iteration may be needed and we welcome feedback,” the spokesperson said. And they will receive it – in abundance!
At this point I also asked whether they had any plans for dealing with attempts to misuse or weaponize the model at an unprecedented scale. Inference is still costly, and even the lighter-weight GPT-3.5 model requires power and server space. People will hammer it for all it’s worth.
They had a long-winded response to this concern as well:
We have also carefully considered how to detect and stop abuse of the logged-out experience. The teams responsible for detecting, preventing, and responding to abuse have been involved in the design and implementation of this experience, and they will continue to inform its design moving forward.
Notice the lack of anything resembling specific information. They probably have no better idea than anyone else what uses people will put this thing to, and will have to be reactive rather than proactive.
It’s unclear which areas or groups will be the first to get access to the ultra-free ChatGPT, but it starts today, so check back regularly to see if you’re among the lucky ones.