Blippar is an augmented-reality and image-recognition app that lets mobile users unlock content about real-world objects around them with a simple scan, or "blipp," as the company likes to call it. The app has more than 13 million users, according to Blippar, and the developer has ambitious designs to offer consumers a whole lot more. Last month, the four-year-old UK-based company acquired Binocular, the Austin-based developer of the virtual try-on app Glasses.com, and plans to turn Blippar into a fully realized interactive platform with visual search. Campaign US spoke with Omaid Hiwaizi, president of global marketing at Blippar, about the future of the app and what brands and consumers can expect from its soon-to-be-released next iteration.
What will the user experience be in the upcoming version of Blippar?
If you're curious about something — any object in the world around you — you can point at it using your phone, and Blippar has the technology to identify what it is. It will explain that object to you, it will give you information about what it is and effectively satisfy the curiosity you're expressing in that moment.
Can’t that information be found using a typical search engine?
The world is much bigger than the web. What if you don’t know what question to ask or you can’t describe what’s in front of you? Often it is quite hard. If you're seeing pink flowers, what do you type in? "Pink flowers"? It's hugely important for a platform like Blippar to help in those kinds of moments when people don’t quite have the vocabulary to know what question to ask.
What kind of information does Blippar provide?
In the current app, the bulk of the interaction that's possible is based on branded images or pages added to a database. Blippar recognizes those and triggers lots of content. That might be augmented reality or different kinds of content bubbles based on that particular image. In the future, Blippar will also be able to recognize everyday objects in the world, so it might recognize a dog, a chair or a building. It's moving from a fairly large world of things that you can interact with to an almost infinite world of objects that could be interacted with.
How will brands be able to use that capability to reach audiences?
The default content will be curated by experts, and then we'll have a system by which the content is optimized based on user interaction. That's the future monetization model, where advertisers will pay on a media basis, per interaction, for placing their content amongst or attached to objects in the world. A brand [will] be able to target people based on the context of their expressing curiosity about the world around them. So, for instance, Google allows advertisers and brands to target people based on keywords, and we'll allow brands to target people based on objects and their inferred curiosity.
How are people using the current app?
One of the very interesting statistics about Blippar as it is now is that when people blipp a branded product and interact with it, the data tells us they blipp six or seven more times in the same session. They blipp other objects around them because they find the initial blipp so interesting and want to try it on other things. So we're very confident this innate curiosity is already being unlocked by Blippar as it stands, but as we enhance the app, each of those additional blipps will deliver content that fuels people on and helps them have a great experience.
So if two people blipp the same object, will they see the same thing?
No, not at all. What the app will be able to do is understand people's preferences over time: the style of content they're interested in receiving, their preferred sources of data and so on. So people won't get the same experience, because it will understand their preferences.
It's a very ambitious project ...
Yes, but it works. The demos of the next stage work brilliantly. That's why I'm quite confident about it.
Do you envision opening up the content creation to the public?
Yes, that's quite possible. We may end up with a model not dissimilar to that of Wikipedia. It's about how people connect with the world around them, and that includes other people, but it starts with the objects around them. I think it's an interesting question how a platform like Blippar would affect people's relationships, but certainly we think that there are individuals out there in society who would enjoy and value enriching a platform like Blippar and be driven by the idea that they can be helpful to others.
So people could create their own blipps.
Yes, in that instance, it’s being able to tag an object to tell the platform what the object is, and the user would suggest content relevant to that object in that particular context. So it's a tagging process.
For those of us who remember the unfulfilled promise of the discontinued image-recognition app Google Goggles, how is Blippar overcoming the technological challenges?
Google Goggles was only ever a product about matching similar images, not about trying to understand them. A piece of the puzzle that we have adopted, and which we are very, very advanced with, is technology called "computer vision." That's the branch of artificial intelligence relating to seeing. So it's not just being able to recognize images as similar to others, but also to understand the different objects in your field of view and how they're interacting, and then to infer meaning from that, to understand the context. That's what's called "semantic segmentation": properly understanding what a scene means based on its component parts. That kind of technology was never in Google Goggles, which was simply doing image recognition based on images in its database.
When will the new version of Blippar be released?
It's months … a few months. It's not very long; we're super excited.