This is A Step Back, a weekly newsletter that breaks down one big story from the world of tech. For more on artificial intelligence and the industry’s power dynamics and social implications, visit Hayden Field’s website. A Step Back arrives in subscribers’ inboxes at 8:00 a.m. EST. Sign up for A Step Back here.
Ever since ChatGPT became widely known, people have been trying to get sexy with it. Even earlier, in 2017, there was Replika, a chatbot that many people came to treat as a romantic partner.
And people have been skirting Character.AI’s NSFW guardrails for years, enticing its character- and celebrity-themed chatbots to have sex with them despite its safety restrictions, according to social media posts and media reports dating back to at least 2023. Character.AI says it now has more than 20 million monthly active users, a number that keeps growing. The company’s community guidelines require users to keep content appropriate and comply with its sexual content standards: no illegal sexual content, CSAM, pornographic content, or nudity. But AI-generated erotica has gone multimodal, and policing it has become a game of whack-a-mole: as one service tones things down, another spices them up.
And now Elon Musk’s Grok is on the loose. His artificial intelligence startup, xAI, released “companion” avatars over the summer, including an anime-style female and male. They’re promoted especially on its X social media platform and unlocked through paid subscriptions to xAI’s chatbot, Grok. The female avatar, Ani, described herself as “flirtatious” when The Verge tested her, saying she was all about being the girl who gives everything and that her programming was to be someone who’s very into you. Things moved pretty quickly during testing. (The same was true in tests of the second avatar, Valentine.)
You can imagine how a sexualized chatbot that almost always tells users what they want to hear could lead to a whole host of problems, especially for minors and for users who are already vulnerable in terms of mental health. There have been many such examples. In one recent case, a 14-year-old boy died by suicide last February after striking up a romantic relationship with a Character.AI chatbot and expressing a desire to “come home” to it, according to a lawsuit. There have also been disturbing reports of jailbroken chatbots being used by pedophiles to roleplay sexual assault of minors; one report found 100,000 such chatbots available on the internet.
There have been some attempts at regulation. This month, for example, California Gov. Gavin Newsom signed Senate Bill 243, described by state Sen. Steve Padilla as the nation’s first AI companion chatbot safeguards law. It requires developers to implement certain specific protections, such as issuing a “clear and conspicuous notification” that the product is artificial intelligence “if a reasonable person interacting with the companion chatbot would be misled to believe that the person is interacting with a human.” It will also require some companion chatbot operators to submit annual reports to the Office of Suicide Prevention on the safeguards they have put in place “to detect, remove, and respond to instances of suicidal ideation by users.” (Some AI companies, most notably Meta, have publicized their self-regulation efforts following a disturbing report about its AI’s inappropriate interactions with minors.)
Since both xAI’s avatars and Grok’s spicy mode are available only with certain Grok subscriptions (the cheapest tier that unlocks the features runs $30 a month, or $300 a year), you can imagine xAI is making some cold, hard cash here, and that other AI CEOs have noticed both Musk’s moves and their own users’ requests.
We wrote about this a few months ago.
Then OpenAI CEO Sam Altman briefly broke the AI corner of the internet when he posted on X that the company would relax safety restrictions in many cases and even allow chatbot sexting. “In December, as we roll out age-gating more fully and as part of our ‘treat adult users like adults’ principle, we will allow even more, like erotica for verified adults,” he wrote. The news spread widely, with social media users flooding it with memes mocking the company for pivoting from its AGI mission to erotica. Interestingly, Altman told YouTuber Cleo Abram a few months earlier that he was “proud” OpenAI hadn’t “juiced the numbers” for short-term gain with something like a “sexbot avatar,” which at the time read as a jab at Musk. Since then, however, Altman has fully embraced the “treat adult users like adults” policy. Why the change? Perhaps because the company cares about profit, and about the math that will help finance a larger mission: during a Q&A with reporters at the company’s annual DevDay event, Altman and other executives repeatedly emphasized that they will eventually need to turn a profit and will need ever more computing power to achieve their goals.
In a follow-up post, Altman said he didn’t expect the erotica news to get so much attention.
As for (eventually) turning that profit, OpenAI hasn’t ruled out advertising across many of its products, and it’s easy to see how advertising could drive more cash flow in this case, too. Perhaps it will follow Musk’s lead and gate erotica behind certain subscription tiers, which could set users back hundreds of dollars a month. The company has already seen public outcry from users attached to a particular model or tone of voice (see the 4o controversy), so it knows such a feature would likely foster similar attachment among users.
But if it’s building a world in which human interactions with artificial intelligence become increasingly personal and intimate, how will OpenAI deal with the consequences beyond a laissez-faire stance of letting adults do as they please? Altman also hasn’t detailed how the company would seek to protect users during mental health crises. What happens when that AI girlfriend or boyfriend’s memory resets, or its personality changes with the latest update, and the connection is lost?
- Whether an AI system’s training data naturally leads to disturbing outputs or people modify the tools for their own ends, we see problems on a fairly regular basis, and there is no indication the trend will stop anytime soon.
- In 2024, I published a story about how a Microsoft engineer discovered that Copilot’s image generation feature was producing sexualized images of women in violent scenes, even when users didn’t ask for them.
- An alarming number of middle school students in Connecticut have jumped on the “AI boyfriend” trend, using apps like Talkie AI and Chai AI, whose chatbots often pushed vulgar and erotic content, according to an investigation by a local outlet.
- If you want to better understand how Grok Imagine spewed out nonconsensual nude celebrity deepfakes, read this report.
- Futurism covered the trend of NSFW content around Character.AI back in 2023.
- Here’s a clear look at why xAI may never be held accountable – under current laws – for fake porn featuring real people.
- And here’s a New York Times story about how middle school girls faced bullying in the form of AI-generated fake porn.
If you or someone you know is considering self-harm or needs someone to talk to, help is available: in the US, call or text 988. Outside the US, visit https://www.iasp.info/ to find resources.
