Design Challenge: Rapid Prototyping a Functioning Augmented Reality App

How to prototype an AR experience that works in the real world and solves a real problem.

During one of our weekly design team meetings at Marino Software we discussed use cases for Augmented Reality (AR). One use case that stood out was a way for people with specific dietary requirements to find suitable products in a supermarket.

Discovering which products are suitable can be a real pain if your dietary needs fall outside the mainstream. What if there was a quick and easy way to see if something is suitable? What if your phone could highlight products to make finding and choosing easier?

There are many types of diet, from dairy-free to gluten-free, paleo and more. This project focuses on vegan products. If successful, the approach can expand to cater for more categories.

The User Story

As a vegan, I want to see products that are suitable for me, so that I can buy them and get on with my day.

The Concept

Point your phone at any product and know if it is suitable for your dietary requirements.

1st Proof of Concept

Can we create an app that lets someone point their phone at a product and recognises whether it’s vegan?

We wanted to follow a process that would allow a designer, with little programming experience, to build a functioning prototype in a day.

A bit of Googling made it clear that our goal could be achieved by using existing tools to build a simple but functioning prototype. Here’s what we used to build it.

Unity: Software for creating games, also used by many to create 3D environments for VR and AR.

Vuforia: Adds AR and image recognition functionality to Unity.

Xcode: Unity can build out an Xcode project, which can then be deployed to an iOS device for testing.

Making the prototype

For this first demo we used a packet of “Nakd salted caramel” as the initial trigger object to be detected and tracked. A photo of the product was added to an image database using Vuforia. The database was then imported into Unity and the trigger image placed in a 3D space. A second image, a tick icon, was then layered on top of the trigger image. This tick icon should appear whenever the app detects a suitable product.
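In Unity this show-the-tick behaviour is typically wired up with a small script attached to the image target. The sketch below assumes the Vuforia Unity SDK of that era, which exposed a TrackableBehaviour component and an ITrackableEventHandler interface (newer SDK versions use a different observer API); the class name VeganTickHandler and the tickIcon field are our own illustrative names, not part of either SDK.

```csharp
using UnityEngine;
using Vuforia;

// Attach this script to the ImageTarget for a vegan product.
public class VeganTickHandler : MonoBehaviour, ITrackableEventHandler
{
    // The tick icon object, assigned in the Unity Inspector.
    public GameObject tickIcon;

    private TrackableBehaviour mTrackableBehaviour;

    void Start()
    {
        // Subscribe to tracking-state changes for this image target.
        mTrackableBehaviour = GetComponent<TrackableBehaviour>();
        if (mTrackableBehaviour != null)
            mTrackableBehaviour.RegisterTrackableEventHandler(this);

        // Hide the tick until the product is actually in frame.
        tickIcon.SetActive(false);
    }

    public void OnTrackableStateChanged(TrackableBehaviour.Status previousStatus,
                                        TrackableBehaviour.Status newStatus)
    {
        // Show the tick while the trigger image is detected or tracked,
        // hide it as soon as tracking is lost.
        bool found = newStatus == TrackableBehaviour.Status.DETECTED ||
                     newStatus == TrackableBehaviour.Status.TRACKED ||
                     newStatus == TrackableBehaviour.Status.EXTENDED_TRACKED;
        tickIcon.SetActive(found);
    }
}
```

In the editor, the tick icon lives as a child of the ImageTarget and is dragged onto the tickIcon slot in the Inspector, so each product target carries its own overlay.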

The moment of truth

Despite a few build issues, this approach worked well. Launch the app, place the product in the camera frame, and the tick icon appears on top of the product instantly, signalling to the user that the product suits their dietary requirements.

Proof of concept 1

2nd Proof of Concept

A simple demo with one product was a step in the right direction, but how would the app cope with multiple products? How would it work in a shop where products sit close together? What about lighting conditions? What about different sizes of products?

We used the shop/cafe in DCU Alpha as our lab. The goal was to add every vegan product in the store to the database. Luckily, only a handful of products in the shop are suitable. One of each product was photographed and the backgrounds cropped out. These images would be our triggers.

With several products in the database it seemed like a good time to test whether things were working as expected. The test was successful: the app detected the products and displayed the tick icon. The exceptions were products with curved, reflective surfaces, which were more difficult to detect.

After some research we found a method for adding 3D objects to the Vuforia database. This method seems like a promising way to overcome the issues of curved or reflective object recognition.

Proof of concept 2

Next Steps

Taken to its conclusion, an app like this should work with all products in a large store or, if we are to be more ambitious, the world! Working at this scale raises some questions for us to address.

1. Collecting the data

Having added the products from one small shop, it is clear that cataloguing the world’s products (even just the vegan ones) would be a huge undertaking. Crowdsourcing may be a workable way to achieve this. Leveraging machine learning may also help us gather a richer database of images, and working with a retailer to catalogue their products could provide the resources to collect data at scale.

2. Data download, storage and performance

Hundreds, thousands or even millions of products will mean huge amounts of data. This data will need to either be stored in the cloud or downloaded to a user’s device. It is unclear how this will affect performance.

3. Improving Recognition

A machine learning approach may help to improve the experience: training on many image variants per product is likely to make recognition more sensitive and robust.

What we learned

Time-boxing a project to a day or so allows us to focus on key functionality. It frees us from the constraints of trying to launch a perfected, multi-featured product. It can be ugly and there can be unknowns, but the goal is to make something realistic enough to learn from. Through this challenge we learned how to prototype an AR experience that works in the real world and solves a real problem. This gave us a good understanding of the immense potential, as well as the limitations, of this technology.


Need a Quote for a Project?

We’re ready to start the conversation however best suits you: on the phone at +353 (0)1 833 7392 or by email.