ChatGPT helps to determine location from a photo

There is a somewhat disturbing new trend going viral: people are using ChatGPT to figure out the location shown in photos.

This week, OpenAI released its newest artificial intelligence models, o3 and o4-mini, each of which can "reason" over uploaded images in a new way. In practice, the models can crop, rotate, and zoom in on photos, even blurry or distorted ones, to analyze them more thoroughly.

These image analysis capabilities, combined with the models' ability to search the web, make for a powerful location-finding tool. Users on X quickly discovered that o3, in particular, is quite good at identifying cities, landmarks, and even restaurants and bars from subtle visual cues.
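To make the mechanism concrete, here is a minimal sketch of sending a photo to a vision-capable model through OpenAI's Python SDK and asking it to guess the location; the model name ("o3"), file name, and prompt are illustrative assumptions rather than a documented recipe.

# Hypothetical sketch: ask a vision-capable OpenAI model to guess where a photo was taken.
import base64
from openai import OpenAI  # official OpenAI Python SDK

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("street_scene.jpg", "rb") as f:  # hypothetical file name
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="o3",  # assumption: a vision-capable reasoning model exposed under this name
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Based only on visual cues, where might this photo have been taken?"},
            {"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
        ],
    }],
)

print(response.choices[0].message.content)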

In many cases, the models don’t rely on “memories” of past conversations in ChatGPT or EXIF data, the metadata attached to photos that reveals details such as where the photo was taken.
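For context on what that metadata contains, below is a minimal sketch (assuming the Pillow imaging library and a hypothetical photo.jpg) of checking an image for embedded GPS tags and saving a copy with the metadata stripped; the unsettling part of this trend is that the models can often infer a location even when no such tags are present.

# Sketch using Pillow (PIL): inspect a photo's EXIF GPS tags and save a metadata-free copy.
# File names are hypothetical.
from PIL import Image

img = Image.open("photo.jpg")
exif = img.getexif()

# 0x8825 is the standard EXIF pointer to the GPS IFD.
gps = exif.get_ifd(0x8825)
if gps:
    # Tags 1-4 hold the latitude/longitude reference letters and degree-minute-second values.
    lat_ref, lat = gps.get(1), gps.get(2)
    lon_ref, lon = gps.get(3), gps.get(4)
    print("Embedded GPS data found:", lat_ref, lat, lon_ref, lon)
else:
    print("No GPS tags in EXIF; any location guess must come from visual cues.")

# Re-encode the pixels only, which drops EXIF (and any GPS tags) from the copy.
rgb = img.convert("RGB")            # drop alpha/palette so the copy saves as JPEG
clean = Image.new("RGB", rgb.size)
clean.putdata(list(rgb.getdata()))  # pixel data only; metadata is not carried over
clean.save("photo_no_metadata.jpg")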

X is filled with examples of users feeding ChatGPT restaurant menus, neighborhood snapshots, building facades, and selfies, and instructing o3 to pretend it is playing "GeoGuessr," an online game that challenges players to guess locations from Google Street View images.

This is an obvious potential privacy issue. There is nothing stopping an attacker from taking a screenshot of, say, someone's Instagram page and using ChatGPT to try to doxx them.

Of course, this could have been done before the launch of o3 and o4-mini. TechCrunch ran several photos through o3 and an older model without image reasoning capabilities, GPT-4o, to compare their location-finding skills. Surprisingly, GPT-4o often arrived at the same correct answer as o3, and in less time.

During our brief testing, there was at least one instance in which o3 found a location that GPT-4o could not. Shown an image of a purple, mounted rhino head in a dimly lit bar, o3 correctly identified it as a speakeasy in Williamsburg, not the British pub GPT-4o guessed.

That is not to say o3 is flawless in this regard. A few of our tests failed: o3 got stuck in a loop, could not arrive at an answer it was reasonably confident in, or suggested the wrong location. Users on X also noted that o3's location inferences can be quite wrong.

But this trend illustrates some of the risks posed by the emergence of more powerful, so-called "reasoning" AI models. There appear to be few safeguards in ChatGPT to prevent this kind of "reverse location search," and OpenAI, the company behind ChatGPT, does not address the issue in its safety report for o3 and o4-mini.
