There's a somewhat concerning viral trend: people are using ChatGPT to figure out the location shown in photos.
This week, OpenAI released its latest AI models, o3 and o4-mini, both of which can "reason" through uploaded images. In practice, the models can crop, rotate, and zoom in on photos, even blurry and distorted ones, to analyze them thoroughly.
These image-analysis capabilities, combined with the models' ability to search the web, make for a potent location-finding tool. Users on X quickly discovered that o3, in particular, is quite good at deducing cities, landmarks, and even restaurants and bars from subtle visual clues.
Wow, nailed it, and not even a tree in sight. pic.twitter.com/bvcoe1fq0z
– SWAX (@SWAX) April 17, 2025
In many cases, the models don't appear to be drawing on "memories" from previous ChatGPT conversations, or on EXIF data, the metadata attached to photos that can reveal details such as where a photo was taken.
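For context on what EXIF data can expose, here is a minimal sketch of how GPS coordinates stored in a photo's EXIF tags (as degrees/minutes/seconds rationals, per the EXIF specification) convert to decimal latitude and longitude. The tag names follow the EXIF standard; the sample values below are invented for illustration, not taken from any real photo.

```python
from fractions import Fraction

def dms_to_decimal(dms, ref):
    """Convert an EXIF (degrees, minutes, seconds) triple to decimal degrees."""
    degrees, minutes, seconds = (Fraction(x) for x in dms)
    value = degrees + minutes / 60 + seconds / 3600
    # South latitudes and west longitudes are negative by convention.
    return float(-value if ref in ("S", "W") else value)

# Hypothetical EXIF GPS block; rationals are stored as (numerator, denominator).
gps = {
    "GPSLatitude": ((40, 1), (42, 1), (4284, 100)),
    "GPSLatitudeRef": "N",
    "GPSLongitude": ((73, 1), (57, 1), (3096, 100)),
    "GPSLongitudeRef": "W",
}

lat = dms_to_decimal([Fraction(n, d) for n, d in gps["GPSLatitude"]], gps["GPSLatitudeRef"])
lon = dms_to_decimal([Fraction(n, d) for n, d in gps["GPSLongitude"]], gps["GPSLongitudeRef"])
print(round(lat, 4), round(lon, 4))  # prints: 40.7119 -73.9586
```

This is why stripping EXIF metadata before sharing a photo has long been standard privacy advice; the point of the new trend is that the models can pinpoint locations even when no such metadata is present.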
X is filled with examples of users giving ChatGPT restaurant menus, neighborhood snapshots, building facades, and self-portraits, and instructing o3 to imagine it's playing "GeoGuessr," an online game that challenges players to guess locations from Google Street View images.
This is a fun ChatGPT o3 feature. GeoGuessr! pic.twitter.com/hrcmixs8yd
– Jason Barnes (@vyrotek) April 17, 2025
This is an obvious potential privacy issue. There's nothing preventing a bad actor from screenshotting, say, a person's Instagram Story and using ChatGPT to try to doxx them.
o3 is insane
I asked my friend to send me a random photo
They sent me a random photo they took in a library
o3 figured it out in 20 seconds and it's correct pic.twitter.com/0k8dxifkoy
— Yumi (@izyuuumi) April 17, 2025
Of course, ChatGPT could be used this way before the launch of o3 and o4-mini. TechCrunch ran a number of photos through o3 and an older model without image-reasoning capabilities, GPT-4o, to compare the models' location-guessing abilities. Surprisingly, GPT-4o arrived at the same, correct answer as o3 more often than not, and took less time doing it.
There was at least one instance during our brief testing in which o3 found a place GPT-4o couldn't. Given a photo of a purple, mounted rhino head in a dimly lit bar, o3 correctly answered that it was from a Williamsburg speakeasy, not, as GPT-4o guessed, a U.K. pub.
That's not to say o3 is flawless in this regard. Several of our tests failed: o3 got stuck in a loop, unable to arrive at an answer it was reasonably confident in, or volunteered a wrong location. Users on X also noted that o3's location deductions can be pretty far off.
But the trend illustrates some of the emerging risks posed by more capable, so-called reasoning models. There appear to be few safeguards in place to prevent this sort of "reverse location lookup" in ChatGPT, and OpenAI, the company behind ChatGPT, doesn't address the issue in its safety report for o3 and o4-mini.
We've reached out to OpenAI for comment. We'll update this piece if we hear back.