It’s easy enough to stalk kitty cats or track fugitives to, say, the jungles of Guatemala if you have photo EXIF data.
After all, EXIF data reveals, among other things, GPS latitude and longitude coordinates of where a photo was taken.
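(As an aside for the curious: EXIF stores GPS coordinates as degrees, minutes and seconds plus a hemisphere reference, so tools have to convert them to the decimal degrees that mapping services expect. Here’s a minimal sketch of that conversion – the `dms_to_decimal` helper and the sample coordinates are illustrative, not something from Google’s research.)

```python
def dms_to_decimal(dms, ref):
    """Convert EXIF-style (degrees, minutes, seconds) plus a
    hemisphere reference ('N'/'S'/'E'/'W') to decimal degrees.
    Southern and western hemispheres become negative values."""
    degrees, minutes, seconds = dms
    decimal = degrees + minutes / 60 + seconds / 3600
    if ref in ('S', 'W'):
        decimal = -decimal
    return decimal

# Hypothetical coordinates, roughly in the Guatemalan jungle
lat = dms_to_decimal((16, 53, 0.0), 'N')
lon = dms_to_decimal((89, 53, 0.0), 'W')
print(round(lat, 4), round(lon, 4))  # → 16.8833 -89.8833
```

Feed a pair like that into any mapping service and the photo’s location pops right out – which is exactly why privacy-conscious sites strip EXIF data on upload.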
But really. EXIF data? Bah!
Enter Google.
It don’t need no stinkin’ EXIF data.
Tobias Weyand, a computer vision specialist at Google, along with two other researchers, has trained a deep-learning machine to work out the location of almost any photo just by looking at its pixels.
To be fair, the learning machine did get trained, initially, on EXIF data.
Make that a huge amount of EXIF data: after all, imagine how many images Google can wrap its tentacles around.
It trained its system on 126 million of them.
The result is a new machine that significantly outperforms humans at determining the location of images – even images captured indoors, without geolocation giveaways or hints such as palm fronds, street signs, billboards in the local language, or Niagara Falls misting away in the background.