Photographic memory

The Narrative clip is a wearable camera that automatically takes a picture every 30 seconds. A year and a half ago, I decided to back this ambitious Kickstarter project. I will try to explain why I did so and share my experience of using the «clip» for a few weeks.

My Black Narrative Clip

Forget it to better remember

The clip doesn’t have any buttons. You don’t «switch it on». It is designed to be worn and to automatically take pictures every 30 seconds. To stop it, just put it in your pocket or face down on a table. This simple interaction design, together with its weight and size, makes it something you don’t have to think about. There is a clever “double tap” feature to take a picture and star it, but this is not the main use case. You just wear it, and forget about it.

An experiment

I’ve chosen to give it a try, mostly as an experiment. I am aware that having the ability to record anything I see may have an impact on my personal life, and that’s exactly what I want to experience.

I have always considered my personal pictures (today stored privately online) to be the most precious things on my computer. I like taking a few pictures of events, and I have been storing them and looking after them for the last 15 years. Sometimes, thanks to these images, I visually go back into the past and remember a moment in more detail than I could on my own.

Human memory

As Wikipedia describes it, our memory doesn’t store every moment of our life with the same level of detail. While our sensory memory lets us recall exact details of something we just saw, we don’t keep them and will most likely forget them within minutes (for example, the color of the shirt of the person sitting next to you, or the exact amount you just paid at a restaurant…). Our short-term memory allows us to explicitly remember a piece of information for a short period of time, but we will most likely forget it later. We do not remember everything; we explicitly and implicitly store scenes of our life in our long-term memory. The fact that we remember one scene better than another has, I think, many causes: attention, frequency, emotional intensity…

Photographic memory

The Narrative clip is a different kind of memory: it is lossless. It stores the same level of detail for every picture, every moment. This is, for me, a new kind of memory. Is it useful? Probably not much, but this kind of device may change the way we are able to remember the past. It is like having an eternal eidetic visual memory, like the Enki Bilal character in Le Sommeil du monstre whose quest is to remember the first days after his own birth.

I think being able to go back in time is priceless. Reliving a day you experienced a long time ago is not possible today, but we may see it tomorrow, and Narrative goes in this direction.

Technically, the pictures taken by the clip are not ideal: the camera’s field of view (70°) is really small compared to the human eye (or a GoPro), and the image sensor could really be improved for dim light. This often leads to noisy pictures that don’t capture the whole scene and have strange framing. While these are negative points that could definitely be improved in future versions, they are also a way to be surprised later by what was extracted from a moment. And the real value is in the series of pictures that, together, recreate a moment. Most of the time, I don’t focus on one particular image; I prefer reliving a moment in time-lapse mode with the Narrative application.

Wearing it

After a few weeks of wearing the clip whenever I felt comfortable doing so, I have more answers to my initial questions: How will people react? Will people, or I, behave differently? When will I wear it?

First of all, most people who are not directly interacting with me simply don’t notice it. I think that’s because it’s very discreet, and nobody knows (yet) that this kind of device exists. People I talk with directly do notice, or I present the device to them quite early when we meet. Most of them are curious, and once I have explained the principle, they don’t mind if I keep wearing it, and they very often ask me to send them a “best-of” of the event. At first I wore it all the time except in professional situations; I still do, but I realise now that I sometimes forget it when performing routine tasks.

A lot of data

A picture from the Narrative clip measures 2048×1536 pixels and weighs around 250 KB. I gather around 750 MB of data per day, which is automatically uploaded and, after analysis, stored encrypted on Narrative’s servers.
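A quick back-of-envelope sketch of where this volume comes from (the 16 hours of daily wear is my own assumption, not a figure from Narrative):

```python
# Rough estimate of the data the clip produces per day.
# Assumptions: worn ~16 hours a day, one ~250 KB picture every 30 seconds.
SECONDS_PER_PICTURE = 30
HOURS_WORN = 16
KB_PER_PICTURE = 250

pictures_per_day = HOURS_WORN * 3600 // SECONDS_PER_PICTURE
mb_per_day = pictures_per_day * KB_PER_PICTURE / 1024

print(pictures_per_day)   # 1920 pictures
print(round(mb_per_day))  # ~469 MB
```

That the figure I actually observe is closer to 750 MB suggests my average picture is heavier than 250 KB, or that metadata and sensor logs are uploaded alongside the images.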

What’s next

What’s next is that, by storing so much information, we will need powerful personal search engines. Not only searching by date in an efficient UI, but also by location, by action (sitting, walking, driving…), by people (face recognition), and by image elements (food, grass…). The Narrative apps are far from this, and I hope they will improve their mobile apps and offer a great desktop web portal over time. But maybe it’s not their job, and they should just open a clean API to let other people and companies organise the content.
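As a toy sketch of what querying such a personal photo index could look like (the records, field names and `search` helper here are entirely hypothetical, not Narrative’s actual API):

```python
from datetime import date

# Hypothetical photo index: each picture annotated with metadata
# that a personal search engine would extract automatically.
photos = [
    {"date": date(2014, 5, 1), "place": "Paris", "action": "walking",
     "people": ["Alice"], "objects": ["grass"]},
    {"date": date(2014, 5, 1), "place": "Paris", "action": "sitting",
     "people": [], "objects": ["food"]},
    {"date": date(2014, 5, 2), "place": "Lyon", "action": "driving",
     "people": ["Bob"], "objects": []},
]

def search(photos, **criteria):
    """Return pictures matching every criterion (list fields match on membership)."""
    def matches(photo, key, value):
        field = photo[key]
        return value in field if isinstance(field, list) else field == value
    return [p for p in photos
            if all(matches(p, k, v) for k, v in criteria.items())]

print(len(search(photos, place="Paris", action="walking")))  # 1
```

The interesting part is not the filtering itself but producing the annotations: actions, faces and objects would have to come from image analysis, which is exactly where the Narrative apps fall short today.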

This powerful search is a vision, but we are not that far from it when we see the power of Google+ personal photo search, for example.

Not only pictures?

This idea goes into the broader trend of lifelogging. I already passively record my position with Google Location History and the music I listen to, and I log a lot of things on Foursquare and Evernote… whose tagline is, by the way, «Remember Everything». That says a lot about their vision: it could have been “The best app to take notes”, but no, they think bigger than that. They want to help you remember, and the first step in this direction is to help you take notes.

Except for a few notes, check-ins and pictures, gathering all this data is pretty useless today. But I am confident it may have more value in the future. It will be a raw resource that other services will tap into to generate real customised, personal value. And this value will certainly be more than editing a movie of your life after your death, like Robin Williams does in The Final Cut.

Salvador Dalí’s tiger, quite inspired by an ad for Ringling Bros


Left: Ad for Ringling Bros. and Barnum & Bailey, Charles Livingston Bull 1915

Right: Dream Caused by the Flight of a Bee Around a Pomegranate a Second Before Awakening, Salvador Dalí 1944

A poster I found in my attic reminded me of one of my favorite pieces by Salvador Dalí. I think we can agree that one was inspired by the other.

re-Captcha, nanojobs and GWAP

Probably the most clever idea I have ever heard of.

This is not new, but it keeps astonishing people when I tell them about it: did you know that captchas help scan books? And they are doing it very, very well.

A captcha seen on the Facebook registration page

You all know about captchas: images containing words that you are forced to type to prove you are a human and not a robot when performing various actions on the Internet (creating an account, writing a comment…), in order to fight spam. Thousands of people decode them every day, and every one of them makes a small mental effort to read the words.

a captcha from re-Captcha

And this is where the people behind the re-Captcha project had a brilliant idea: what if these thousands of mental efforts could be used to actually do something useful? Like helping to scan books?

Today, many organizations are scanning old books and transforming them into a digital format. The OCR software that transforms the scanned image into digital text may sometimes fail to do its job correctly (however complex the software may be). re-Captcha uses the human brain to decode the words the computer is not sure about:

  • What you see when you look at a re-Captcha is two words.
  • Of those two words, one is known by the server. The other is a word the computer knows it didn’t manage to read properly.
  • Whether you are a human is judged on the first word, and the answer you enter for the second word is used to decode it.
  • We can then imagine that if a certain number of users type the same answer for a given unknown word, it is most likely the right transcription.
  • Using this technique, the system is able to digitize texts with higher precision than most other OCR systems.
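The consensus step at the heart of this can be sketched as a simple majority vote (the threshold and data shapes here are my own assumptions, not reCAPTCHA’s actual implementation):

```python
from collections import Counter

def decode_word(user_answers, threshold=3):
    """Accept a transcription once enough independent users agree on it.

    user_answers: the strings typed by different users for one unknown word.
    Returns the agreed reading, or None if no reading has enough votes yet.
    """
    counts = Counter(answer.strip().lower() for answer in user_answers)
    if not counts:
        return None
    word, votes = counts.most_common(1)[0]
    return word if votes >= threshold else None

print(decode_word(["morning", "morning", "mourning", "Morning"]))  # "morning"
print(decode_word(["cat", "cot"]))  # None: no consensus yet
```

The word keeps circulating in captchas until enough independent readings agree, which is how the system beats a single OCR pass.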

The company behind re-Captcha, its data and its market share were acquired by Google in 2009 (what else would you have expected?).

To me this system is brilliant: it solves a problem by dividing it into tasks so simple that they can be executed by people who don’t even notice they are working. (And what’s nicer about this one is that it helps fight spam and digitize books, two great causes.)

nano jobs
I don’t know if there is another term, but I call this nano jobs.

Let us take another example of a nano job: in 2006 a professor released a fun tool where you could play with a random other player: an image was displayed, and your goal was to find words describing this image in common with the remote player. Of course, you quickly realize that this was only done to help label the image base: today, unlike a human, a machine has difficulty understanding what an image represents (image recognition). The “find common words to improve your score” is just an incentive to gather a lot of data. Google did the same to help label its image base.
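The matching rule of the ESP game can be sketched like this (a toy version: the real game also used “taboo” words to force new labels, and matched in real time):

```python
def esp_round(labels_a, labels_b):
    """Two players type labels independently for the same image;
    the round is won on the first label both players have entered."""
    entered_b = {label.lower() for label in labels_b}
    for label in labels_a:
        if label.lower() in entered_b:
            return label.lower()  # agreed label: a good tag for the image
    return None

# Player A and player B labelling the same picture of a dog on a beach:
print(esp_round(["animal", "dog", "beach"], ["sand", "dog", "pet"]))  # "dog"
```

A label only counts when two players who cannot communicate agree on it, which is exactly what makes the collected tags trustworthy.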

Playing the ESP game with a random player, finding common labels for a given image.

This leads us to another important point in nano jobs: game mechanics.

You cannot force people to do small tasks; they have to do them of their own accord. In the case of re-Captcha, they understand the need to fight spam, so they accept the task. In the case of the ESP game, they want to get the best score, or maybe to have fun with a random web user (this reminds me of Chatroulette).

These games are called games with a purpose (GWAP). Imagine the workforce that the millions of people farming like zombies on FarmVille represent (unfortunately, FarmVille’s business model is more about selling your data and stupid virtual stuff than making you do nano jobs). So when we hear about Google investing in social game companies, I think nano jobs are part of their motivation (not the only one, of course).

My conclusion
To conclude, I think this decentralized and effortless way of solving problems is extremely powerful. Once again, divide and conquer seems to be the strategy to adopt, even for problems that don’t seem scalable.

Some more examples

  • seems to specialize in Games With A Purpose. You can play to help tag images and music, find synonyms, trace images, detect emotions in images, judge image similarity and label videos.
  • GOOG-411: Google opened a voice search phone service to provide search results by phone. It seems that the goal was for Google to gather a lot of voice recordings to improve their speech recognition engine (to provide data for machine learning).
  • In a similar way, Picasa performs face recognition, but it’s not perfect, and you have to help it tag your family and friends in your pictures. The more you help it, the more accurate it will be later, and on a larger scale, the more training data Google gathers.
  • Google, once again, provides a free Translator Toolkit to help with sentence-by-sentence translation. This tool is free, but I bet we are feeding Google with translation data by using it.

On another level, Amazon provides an online service called Amazon Mechanical Turk. It links nano-job providers with a widespread user base doing small tasks for money. I have heard that many companies use this platform to have Human Intelligence Tasks performed.

Future of movies

Let’s imagine what movies could look like in the coming years. I’m not going to talk about 3D without glasses or mega-super-HD, but about other new ideas realistically achievable with today’s technology.

Dawn of the Dead
We often say actors will be replaced by digital characters. I can assure you that today doing so costs a lot of money, and it will take time before these costs fall below the fees of a not-so-famous actor. They won’t replace real actors; they will replace dead ones!
We have seen movies with non-human main characters (think of Jar Jar Binks or Gollum). But today, technology is developed enough to create virtual human main characters who do not fall into the “uncanny valley”.
But what about real actors who cannot act anymore? Or what about dead movie legends?
That’s why we may see some new epic Star Wars episodes with actors from the original trilogy. As an example, we recently saw Arnold Schwarzenegger acting in “Terminator Salvation”. He was entirely digital.
It raises some issues: can I decide to use Marilyn Monroe as a main character? Who owns her image today? Is this really ethical? Can we credit the dead actor under their real name?

I bet Marilyn will be back in theatres

Dynamic content (i.e. better product placement)
Pixar constantly raises the quality of its movies. Recently I noticed in “Up” that some shots were internationalized: for example, in the French version, French words were written on a crate. So I assume Pixar re-rendered this movie for every locale.
Of course, this is something that can only be done using computer imagery. And what if this content wasn’t static? What if it could be adjusted to the audience, or over time, dynamically?
From an artistic point of view that’s great: imagine a newspaper showing headlines from today’s events; it would improve the spectator’s immersion.
But this kind of innovation is rarely driven by art. On the contrary, advertisers will be able to do some localized product placement or advertising.
I think two current factors will soon lead to streamed, localized content in movies:

  • screens will soon all be connected to the Internet, so the content delivered can easily be adjusted by broadcasters
  • 3D image rendering is now cheap for broadcasters (in terms of the computing power needed).
The best product placement I have in mind: “The Fifth Element”


It’s you, in the movie.
Then we could go deeper and imagine customizing the movie itself: take two or three pictures of your head, fill in some morphological details, and enjoy a movie where you are the main character.
This can easily be done with computer-generated movies. But I think it’s also feasible to replace a face in a traditional movie with technologies involving advanced face detection, realistic face reconstruction and complex compositing to blend the two. Example (watch until the end, please):

This is not new: content customization can already be seen in video games.
Cinema may blend with video games, and the story may not be linear. At some point you might choose the next part of the script, for example choosing to save someone’s life or to let him die. (Remember that awesome Tipp-Ex YouTube advertisement?)
This is something I experimented with in 2003 simply using Adobe Flash (in this project), and it is starting to be seen online now (using YouTube annotations or more complex systems).

All these ideas lead us to think about the definition of cinema. Is it still the artist’s vision if we can change elements of the movie?

Note: This article was drafted in November 2010. I found today that George Lucas announced in December 2010 that he was planning to use dead actors in a movie. I guess that’s why I really need to use Beansight.

UPDATE 2011-01-24: TechCrunch spotted a German company doing custom product placement in video footage: Impossible Software – Is This What The Future Of Video Advertising Looks Like?

Native applications are doomed

I take this for granted, but I have realized many people are not aware of this change: native applications are an endangered species.

By native application, I mean any application intended to run on the user’s computer that does not use web technologies. And by computer I don’t mean only the good old desktop computer. We have seen a proliferation of devices that can now be considered computers: laptops of all sizes, smartphones, tablets, navigation systems…

Our relationship to these computers has changed over the past decade: we use more than one computer every day, and they are all connected to the same network, the Internet.

Here are some points that lead me to conclude that native applications cannot survive in this environment:

1 Diversification of operating systems

I don’t think I’m mistaken if I say that all these computers will never share the same OS. Today, developers of mobile applications face the same problem as on the desktop: which OS to support? Focusing development on a specific OS simply reduces the number of targeted users, while supporting a large number of operating systems has a real cost and duplicates effort.

Choosing to build an application with standardized web technologies assures you that anyone accessing the Internet with a recent browser will be able to run it. Browsers are the new operating systems, and they almost all share the same API.

2 Web technologies gain power

Advanced functionality for web applications is coming with the HTML5 specifications. These specifications are pushed forward by an industry that needs them and doesn’t want to rely on external proprietary solutions (such as Adobe’s Flash).

  • An online application does not necessarily require a constant Internet connection. Most of the application’s operations are done client-side (using JavaScript). Structured data is exchanged with the server but can also be kept in a local data store, providing an offline experience.
  • Advanced graphics and animations are entirely possible, and real-time 3D could even be used with the upcoming WebGL technology.
  • Video and sound are as simple to use as images.
  • Browsers also expose new types of information, such as geolocation.

In the end, web technologies have the potential to be used for everyday tasks. We may wonder whether heavier professional software can also be created with web technologies. Personally, I have no doubt that it could.

Of course, we assume that users of online applications will use a modern, standards-compliant browser. I think the attractiveness of these apps is strong enough to make users change their browser for something up to date.

3 Many computers implies simpler administration

If you multiply the number of machines the user has to take care of, it is logical that configuring them should require less time than today. However easy it may be, installing software should be considered too much.

With web-based applications, first access requires only a single step: launching the application by following its URL. And there is no need to upgrade later: a new version of the application is deployed instantaneously. Fixes can be shipped on the fly, and there is no longer any need to wait for the user base to upgrade by themselves.

4 Freedom

Application stores have gained popularity recently. They may pose a threat if they are the only way the user can install applications: in that case, distribution is controlled by a single totalitarian company that decides what can be distributed and what cannot.

The story of Google Voice on the iPhone illustrates this issue, and the power of modern web-based applications: Google’s application wasn’t accepted in the App Store, even after negotiation, so Google simply came back a few months later with an online mobile version of Voice fully powered by HTML5 technologies. Google does not hesitate to claim that, however successful they seem to be, app stores are not the future (consider that they also have their own store with the Android Market).

However, app stores brought the idea of giant directories from which users can choose applications and discover new ones. This is what I think they will evolve into in the future: directories proposing links to web applications.

In the end, it may just be sad for the fortunate developers who succeeded in making money by selling stupid applications that people would never have bought if they were online.


What was still a supposition for me four years ago is now a certainty: computers will only be access points to the web. Everything we need to use will be online, be it data or applications, and it’s coming faster than I would have imagined.