Use Google “search by image” to track your pictures (or to spot fake Facebook profiles)

Two months ago, Google released an update to its image search engine: you can now search by providing an image as input.

I can see several use cases for this feature; a powerful one is letting creators track their pictures.

Track your pictures
Let’s take an example: I created this wallpaper three years ago:

A wallpaper using characters from Big Buck Bunny (Creative Commons)

I released it under a Creative Commons Attribution license and put it on Flickr, so people were welcome to use it.

I was pleased to see that it was used on hundreds of webpages (including WikiHow), even in languages I don’t understand like Polish or Finnish, and most of them give attribution.

Similarly, a clipart of mine (released into the public domain) was used for various purposes: creating custom plates, making Alice in Wonderland fan art, selling calligraphy or cupcakes, and promoting a church.
Many of my Wikipedia image contributions can also be found across the internet (for example, to write about dinosaurs or to portray Christophe Salengro).

Copyright infringement

Let’s take a well-known copyrighted image, such as this portrait:

Google similar image for "Afghan Girl", copyright Steve McCurry

Well, seeing all the Google Image results, National Geographic could easily annoy the websites’ owners by asking them to remove it. Would that be crazy? Well, consider that this kind of removal is systematic on YouTube and, as Eric Schmidt said at the eG8 summit in Paris, could be automated and applied to the whole internet (they have the technology: it’s called Google Search by image).

For sure, this new little feature can be very useful for photographers and artists who want to protect their creations by tracking who is using them illegally. Visually searching the entire web has never been possible before; now it’s as easy as a Google search.

But let’s finish this article on a lighter note:

Facebook fakes

Friends of mine were arguing with a girl on Facebook and suspected she was an impostor.

A suspicious Facebook profile

A quick Google search by image showed us that this picture was called “Portrait of happy young lady smiling” and was sold on sites like Fotolia. A few minutes after we brought up this proof, he/she deleted his/her account. I love technology.

re-Captcha, nano jobs and GWAP

Probably the cleverest idea I have ever heard of.

This is not new, but it keeps astonishing people when I tell them about it: did you know that captchas help scan books? And they are doing it very, very well.

A captcha seen on the Facebook registration page

You all know about captchas: images containing words that you are forced to type, to make sure you are a human and not a robot, when performing various actions on the Internet (creating an account, writing a comment…), in order to fight spam. Thousands of people decode them every day, and each of them makes a small mental effort to read the words.

A captcha from re-Captcha

And this is where the guys from the re-Captcha project had a brilliant idea: what if these thousands of mental efforts could be used to actually do something useful? Like helping to scan books?

Today, many organizations are scanning old books and transforming them into a digital format. The OCR software that converts the scanned images into digital text sometimes fails to do its job correctly (however complex the software may be). re-Captcha uses the human brain to decode the words the computer is not sure about:

  • When you look at a re-Captcha, you see two words.
  • One of those two words is known by the server. The other is a word the computer knows it didn’t manage to read properly.
  • Whether you are human is judged on the known word; whatever you type for the other word is used to decode it.
  • If a certain number of users type the same thing for a given unknown word, it is most likely the right transcription (see the sketch below).
  • Using this technique, the system can digitize texts with higher precision than most other OCR systems.
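
To make the trick concrete, here is a minimal sketch in Python, with made-up identifiers and thresholds (the real re-Captcha service is of course far more elaborate): one word is checked against a known answer, and the guesses for the other word are counted until enough users agree.

```python
# Toy model of the re-Captcha idea: one "control" word verifies the user,
# the other word collects guesses for text the OCR failed to read.
from collections import Counter

# The server only knows the answer for the control word.
CONTROL_ANSWERS = {"captcha-42": "orthography"}

# Guesses submitted by many users for each unknown (OCR-failed) word.
guesses = {"scan-17": Counter()}

def submit(captcha_id, unknown_id, typed_control, typed_unknown):
    """Accept the user if the control word matches; record the other guess."""
    if typed_control.lower() != CONTROL_ANSWERS[captcha_id]:
        return False  # probably a bot (or a typo): reject
    guesses[unknown_id][typed_unknown.lower()] += 1
    return True

def decoded(unknown_id, min_votes=3):
    """Once enough users agree on a guess, treat it as the transcription."""
    if not guesses[unknown_id]:
        return None
    word, votes = guesses[unknown_id].most_common(1)[0]
    return word if votes >= min_votes else None

# Three different users type the same thing for the unknown word:
for _ in range(3):
    submit("captcha-42", "scan-17", "orthography", "dinosaur")
print(decoded("scan-17"))  # -> "dinosaur"
```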

The company behind re-Captcha, its data and its market share were acquired by Google in 2009 (what else would you have imagined?).

To me this system is brilliant: it solves a problem by dividing it into tasks so simple that they can be performed by people who don’t even notice they are working. (And what’s even nicer is that it helps fight spam and digitize books, two great causes.)

nano jobs
I don’t know if there is an existing term for this, but I call them nano jobs.

Let’s take another example of a nano job: in 2006, a professor released a fun tool where you play with another random player: an image is displayed, and your goal is to find words describing it that your remote partner also finds. Of course, you quickly realize that the game only exists to help label an image base: unlike a human, a machine still has trouble understanding what an image represents (image recognition). The “find common words to improve your score” part is just an incentive to gather a lot of data. Google did the same to help label its own image base.

Playing the ESP game with a random player, finding common labels for a given image.
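
As a toy illustration of the game mechanic (nothing here is taken from the actual implementation), the agreement rule boils down to keeping only the words both players typed independently for the same image:

```python
# Toy version of the ESP game's agreement rule: a label only counts when
# both players, who cannot communicate, propose the same word.

def agreed_labels(player_a, player_b):
    """Words typed by both players become candidate labels for the image."""
    return {w.lower() for w in player_a} & {w.lower() for w in player_b}

# Two strangers describe the same photo; only the overlap is trusted.
labels = agreed_labels(["dog", "beach", "sunset"], ["sunset", "sea", "dog"])
print(labels)  # -> {'dog', 'sunset'}
```

Because the two players cannot talk to each other, a word they both type is very likely a true description of the image, which is exactly the data an image search engine needs.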

This leads us to another important point about nano jobs: game mechanics.

You cannot force people into doing small tasks; they have to do them willingly. In the case of re-Captcha, people understand the need to fight spam, so they accept the task. In the case of the ESP game, they want to get the best score, or maybe just to have fun with a random web user (this reminds me of Chatroulette).

These games are called games with a purpose (GWAP). Imagine the workforce that the millions of people farming like zombies on FarmVille represent (unfortunately, FarmVille’s business model is more about selling your data and silly virtual goods than about making you do nano jobs). So when we hear about Google investing in social game companies, I think nano jobs are part of their motivation (not the only part, of course).

My conclusion
To conclude, I think this decentralized and effortless way of solving problems is extremely powerful. Once again, divide and conquer seems to be the strategy to adopt, even for problems that don’t seem scalable.

Some more examples

  • gwap.com seems to specialize in Games With A Purpose. You can play to help tag images and music, find synonyms, trace images, detect emotions in images, judge image similarity, and label videos.
  • GOOG-411: Google opened a voice search phone service to provide search results over the phone. It seems the goal was for Google to gather a lot of voice recordings to improve its speech recognition engine (i.e. to provide data for machine learning).
  • In a similar way, Picasa performs face recognition, but it’s not perfect and you have to help it tag your family and friends in your pictures. The more you help it, the more accurate it becomes later, and on a larger scale, the more training data Google gathers (see the sketch after this list).
  • Google, once again, provides a free Translator Toolkit to help with sentence-by-sentence translation. The tool is free, but what you may not realize is that, I bet, we are feeding Google with translation data every time we use it.
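
A tiny, hypothetical sketch of that loop (the file names and labels are invented): every confirmation or correction a user makes becomes a labeled example for the next training run.

```python
# Toy model of "users correct the machine, the machine learns":
# each user action is stored as ground truth for future training.

training_set = []  # (raw input, human-provided label) pairs

def record_correction(raw_input, machine_guess, user_label):
    """Keep whatever the user confirmed or fixed as a labeled example."""
    training_set.append((raw_input, user_label))
    if user_label != machine_guess:
        print(f"model guessed {machine_guess!r}, user says {user_label!r}")

record_correction("face_0042.jpg", "Alice", "Bob")    # user fixes a wrong face tag
record_correction("call_0137.wav", "pizza", "pizza")  # user confirms a transcript
print(len(training_set))  # -> 2 labeled examples gathered
```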

On a related note, Amazon provides an online service called Amazon Mechanical Turk. It connects nano-job providers with a large user base doing small tasks for money. I hear many companies use this platform to get Human Intelligence Tasks performed.

Every team needs StatusNet

I’m sure you know about Twitter. If you use it, you read and share links, ideas and statuses… with everyone.
Now imagine the same tool, but restricted to your team. This is StatusNet. (And it’s actually even much more than this.)

Private timeline

At Beansight, we’ve been using StatusNet since the beginning. Examples: You’re starting to work on something? Take three seconds to post it on your StatusNet. You spotted an interesting link? Share it with your team on StatusNet. You’ll be late for the meeting? StatusNet. You’ve got a problem? StatusNet.
If it is used well, it’s like sharing the same mind. You are aware of what the others are doing or thinking, and you can reply to them in real time. Isn’t that awesome when you work in an agile environment?

A snapshot of the Beansight timeline

Did I mention it costs nothing? You can start using it privately with your team for free at status.net/cloud (iPhone, Android and desktop apps included).

Of course, we keep using e-mail for structured threads.

I know there are other tools, and they may be better at this (Wedoist, Yammer…). So here comes the second part of this post:

Open

To be more precise, StatusNet is an open source microblogging platform. This means that you can deploy it on your own servers without having to rely on a particular service provider. It is branded under your name. And you own your data. Trust me, this is something large companies are looking for (a startup founder thinks it’s OK to use Google Apps for work; now try explaining that to a big company).

StatusNet can be federated, which means that public nodes can talk to each other; together they form a global, distributed network. Moreover, it uses an open protocol. You know, the same way it works for e-mail, where you can talk to someone who is not necessarily using the same provider as you. We tend to forget how important it is for innovation to keep the pipes open.

Public microblogging: Twitter and distributed StatusNet

I think open protocols matter. Can’t you see a problem in the previous image? Haven’t you learned that relying on a single resource is dangerous? Apparently many don’t see the issue and even try to build a business on top of this closed API.

You can follow me (@steren) on identi.ca, a very popular public instance of StatusNet.

One last note: Google Buzz also promotes open protocols. Unfortunately, they are different from StatusNet’s. I hope these two systems will become compatible in the near future. Edit: Evan (StatusNet’s creator) tells us in the comments that you can already follow a Buzz user with StatusNet and that they are working together to support interoperability.