Thursday, April 3, 2025

An exposed AI image generator database reveals what people really used it for


In addition to CSAM, Fowler says, the database contained AI-generated pornographic images of adults as well as what appeared to be face-swap images. Among the files, he observed what looked like photographs of real people, which were likely used to create "explicit nude or sexual AI-generated images," he says. "So they were taking real pictures of people and swapping their faces on there," he claims of some of the generated images.

When it was live, GenNomis allowed explicit adult AI imagery. Many of the images featured on its homepage and in an AI "models" section were sexualized depictions of women: some were "photorealistic," while others were fully AI-generated or rendered in animated styles. The site also included an "NSFW" gallery and a "marketplace" where users could share images and potentially sell albums of AI-generated photos. The site's tagline said people could "generate unrestricted" images and videos; a previous version of the site from 2024 said "uncensored images" could be created.

GenNomis's user policies stated that only "respectful content" is allowed, and that "explicit violence" and hate speech are prohibited. "Child pornography and any other illegal activities are strictly prohibited on GenNomis," its community guidelines read, adding that accounts posting prohibited content would be terminated. (Researchers, victim advocates, journalists, tech companies, and others have largely phased out the phrase "child pornography" in favor of CSAM over the past decade.)

It is unclear to what extent GenNomis used any tools or moderation systems to prevent or prohibit the creation of AI-generated CSAM. Some users posted to its "community" page last year complaining that they could not generate images of people having sex and that their prompts were blocked even for nonsexual "dark humor." Another account posted on the community page that the "NSFW" content should be addressed, as it "might be looked at by the feds."

"If I was able to see those images with nothing more than the URL, that shows me they're not taking all the necessary steps to block that content," Fowler says of the database.

Henry Ajder, a deepfake expert and founder of the consultancy Latent Space Advisory, says that even if the company did not permit the creation of harmful and illegal content, the website's branding, with its references to "unrestricted" image creation and its "NSFW" section, signaled a clear association with intimate content without safety measures.

Ajder says he is surprised that the English-language website was linked to a South Korean entity. Last year the country was gripped by a nonconsensual deepfake "emergency" that targeted girls, before it adopted measures to combat the wave of deepfake abuse. Ajder says more pressure should be put on all parts of the ecosystem that allow nonconsensual imagery to be generated using AI. "The more of this that we see, the more it forces the question onto legislators, onto tech platforms, onto hosting companies, onto payment providers—all of the people who, in one form or another, knowingly or otherwise, mostly unknowingly, are facilitating and enabling this to happen," he says.

Fowler says the database also exposed files that appeared to include AI prompts. No user data, such as logins or usernames, was included in the exposed data, the researcher says. Screenshots of prompts show the use of words such as "petite" and "girl," and references to sexual acts between family members. The prompts also described sexual acts between celebrities.

"It seems to me that the technology has raced ahead of any of the guidelines or controls," Fowler says. "From a legal perspective, we all know that explicit images of children are illegal, but that did not stop the technology from being able to generate those images."

As generative AI systems have made it vastly easier to create and modify images over the past two years, there has been an explosion of AI-generated CSAM. "Webpages containing AI-generated child sexual abuse material have more than quadrupled since 2023, and the photorealism of this horrific content has also leapt in sophistication," says Derek Ray-Hill, interim CEO of the Internet Watch Foundation (IWF), a UK-based nonprofit that tackles online CSAM.

The IWF has documented how criminals are increasingly creating AI-generated CSAM and developing the methods they use to produce it. "It's currently just too easy for criminals to use AI to generate and distribute sexually explicit content of children at scale and at speed," says Ray-Hill.
