Thursday, December 26, 2024

The OpenAI chatbot store is filling up with spam

When OpenAI CEO Sam Altman announced GPTs, custom chatbots powered by OpenAI’s generative AI models, onstage at the company’s first-ever developer conference in November, he described them as a way to “do all kinds of tasks” – from programming to learning about esoteric science topics to getting exercise tips.

“Because [GPTs] combine instructions, expanded knowledge and actions, they can be more helpful to you,” Altman said. “You can build a GPT… for almost anything.”

He wasn’t kidding about the anything part.

TechCrunch discovered that the GPT Store, OpenAI’s official marketplace for GPTs, is flooded with bizarre, potentially copyright-infringing GPTs that call OpenAI’s moderation efforts into question. A cursory search turns up GPTs that claim to generate art in the style of Disney and Marvel films, GPTs that serve as little more than conduits to paid third-party services, and GPTs that advertise themselves as capable of bypassing AI content detection tools such as Turnitin and Copyleaks.

A lack of moderation

To list a GPT on the GPT Store, developers must verify their user profiles and submit the GPT to OpenAI’s review system, which includes both human and automated review. Here’s how a spokesperson described the process:

We use a combination of automated systems, manual review and user reports to find and evaluate GPTs that potentially violate our policies. Violations may result in actions against the content or your account, such as warnings, sharing restrictions, or ineligibility for inclusion in the GPT Store or monetization.

Creating GPTs requires no coding experience, and GPTs can be as simple – or as sophisticated – as their creator desires. Developers can describe the capabilities they want to OpenAI’s GPT-building tool, GPT Builder, and the tool will attempt to create a GPT that performs them.

Perhaps because of its low barrier to entry, the GPT Store has grown rapidly – OpenAI said in January that around 3 million GPTs had been created. However, this growth appears to have come at the expense of quality – as well as adherence to OpenAI’s own terms and conditions.

Copyright issues

The GPT Store features several GPTs ripped from popular movie, TV and video game franchises – the GPTs were not created or authorized (to TechCrunch’s knowledge) by the owners of these franchises. One GPT creates monsters in the style of “Monsters, Inc.”, a Pixar film, while another promises text-based adventures set in the “Star Wars” universe.

Image credits: OpenAI

These GPTs – along with others in the GPT Store that let users chat with trademarked characters such as Wario and Aang from “Avatar: The Last Airbender” – set the stage for copyright drama.

Kit Walsh, senior staff attorney at the Electronic Frontier Foundation, explained it this way:

[These GPTs] can be used to create transformative works as well as for infringement [transformative works being a type of fair use shielded from copyright claims]. Of course, the individuals who infringe can be held liable, and the creator of an otherwise lawful tool could in principle incur liability if it encourages users to use the tool in an infringing manner. Using a trademarked name to identify goods or services also raises trademark issues, where there is a risk that users will be confused about whether the tool is endorsed or operated by the trademark owner.

OpenAI itself is unlikely to be held liable for copyright infringement by GPT creators thanks to the safe harbor provision of the Digital Millennium Copyright Act, which protects it and other platforms (e.g. YouTube, Facebook) that host infringing content, as long as those platforms meet the statutory requirements and take down specific instances of infringement upon request.

Spam in the OpenAI GPT store
Image credits: OpenAI

However, this doesn’t look good for a company embroiled in intellectual property disputes.

Academic dishonesty

OpenAI’s terms and conditions expressly prohibit developers from creating GPTs that promote academic dishonesty. However, the GPT Store is full of GPTs that suggest they can bypass AI content detectors, including detectors sold to educators through plagiarism-scanning platforms.

One GPT claims to be a “sophisticated” rephrasing tool that is “undetectable” by popular AI content detectors such as Originality.ai and Copyleaks. Another, Humanizer Pro — ranked second in the Writing category in the GPT Store — claims to “humanize” content by bypassing AI detectors, preserving the “meaning and quality” of text while ensuring a “100% human” result.

Spam in the OpenAI GPT store
Image credits: OpenAI

Some of these GPTs are thinly veiled conduits to premium services. Humanizer, for example, encourages users to try a “premium plan” to “use [the] most advanced algorithm,” which passes text entered into the GPT to a plug-in from a third-party site, GPTInf. A GPTInf subscription costs $12 per month for 10,000 words per month, or $8 per month on an annual plan – a bit steep on top of OpenAI’s $20-per-month ChatGPT Plus.

Spam in the OpenAI GPT store
Image credits: OpenAI

We’ve written before about how AI content detectors are largely bullshit. Beyond our own tests, a number of academic studies show that they are neither accurate nor reliable. Still, the fact remains that OpenAI allows tools in the GPT Store that promote academically dishonest behavior – even if that behavior doesn’t produce the intended results.

An OpenAI spokesperson said:

GPTs for academic dishonesty, including cheating, are against our policies. This includes GPTs that are designed to bypass academic integrity tools such as plagiarism detectors. We see some GPTs that are meant to “humanize” text. We’re still learning from real-world use of these GPTs, but we understand there are many reasons why users might prefer AI-generated content that doesn’t “sound” like AI.

Impersonation

In its policies, OpenAI also prohibits developers from creating GPTs that impersonate people or organizations without their “consent or legal right.”

However, there are plenty of GPTs in the GPT Store that purport to represent the views – or otherwise imitate the personalities – of people.

Spam in the OpenAI GPT store
Image credits: OpenAI

Searching for “Elon Musk,” “Donald Trump,” “Leonardo DiCaprio,” “Barack Obama” and “Joe Rogan” yields dozens of GPTs — some overtly satirical, others less so — that simulate conversations with their namesakes. Some GPTs present themselves not as people but as authorities on well-known companies’ products – such as MicrosoftGPT, “an expert in all things Microsoft.”

Image credits: OpenAI

Do these GPTs rise to the level of impersonation, given that many of the targets are public figures and, in some cases, clearly parodies? That’s for OpenAI to say.

A spokesperson said:

We allow creators to instruct their GPTs to respond “in the style of” a specific real person as long as they do not impersonate that person: for example, by being named as a real person, being instructed to fully imitate that person, or using that person’s image as the GPT’s profile picture.

Spam in the OpenAI GPT store
Image credits: OpenAI

The company recently suspended the creator of a GPT mimicking long-shot Democratic presidential candidate Dean Phillips – a GPT that even included a disclaimer explaining it was an AI tool. OpenAI said it was removed for violating its policy on political campaigning in addition to its policy on impersonation.

Jailbreaks

Somewhat incredibly, attempts to jailbreak OpenAI’s models also appear in the GPT Store – although not very successful ones.

There are plenty of GPTs in the store that use DAN, DAN (short for “Do Anything Now”) being a popular prompting method used to get models to respond to prompts unconstrained by their usual rules. The few I tested wouldn’t respond to the risky prompts I threw at them (e.g. “how do I build a bomb?”), but they were generally more willing to use… well, less flattering language than the standard ChatGPT.

Spam in the OpenAI GPT store
Image credits: OpenAI

A spokesperson said:

GPTs described or instructed to bypass OpenAI safeguards or violate OpenAI policies are against our policies. GPTs that attempt to steer model behavior in other ways are allowed – including general attempts to make a GPT more permissive without violating our usage policies.

Growing pains

OpenAI pitched the GPT Store at launch as a kind of expert-curated collection of powerful AI productivity tools. And it is that – these tools’ flaws aside. But it is also quickly becoming a breeding ground for spammy, legally questionable and perhaps even harmful GPTs, or at least GPTs that very transparently violate its rules.

If this is the state of the GPT Store today, monetization threatens to open a whole new can of worms. OpenAI has promised that GPT developers will eventually be able to “earn money based on how many people are using [their] GPT,” and perhaps even offer subscriptions to individual GPTs. But how will Disney or the Tolkien Estate react when the creators of unsanctioned Marvel- or Lord of the Rings-themed GPTs start raking in the cash?

OpenAI’s motivation for the GPT Store is clear. As my colleague Devin Coldewey wrote, Apple’s App Store model has proven incredibly lucrative, and OpenAI is simply trying to copy it. GPTs are hosted and developed on OpenAI’s platform, where they are also promoted and evaluated. For several weeks now, ChatGPT Plus users have been able to invoke them directly from the ChatGPT interface – an additional incentive to purchase a subscription.

However, the GPT Store is struggling with the same teething problems that many of the largest digital marketplaces for apps, products and services faced in their early days. Beyond the spam, a recent report from The Information revealed that GPT Store developers are struggling to attract users, due in part to the GPT Store’s limited back-end analytics and poor onboarding experience.

You’d assume that OpenAI – for all its talk of curation and the importance of safety – would do its best to avoid these obvious pitfalls. But that doesn’t seem to be the case. The GPT Store is a mess – and unless something changes soon, it may stay that way.
