Overview
ChowTime AR lets users generate recipes from ingredients in real time by hovering over them using augmented reality. The ingredient scanning process is completely tap-free, letting users add, remove, and view ingredients in one place.
My role
Interaction Designer
Visual Designer
Prototyper
Team
1 Designer (Me!)
2 Developers
Technologies
Figma
Flowmapp
React Native
Unity
ARKit
This was the old design.
ChowTime was an app idea I came up with at a hackathon that lets users generate real recipes from ingredients in pictures. It was released on both the App Store and Play Store during Summer 2020. However, ChowTime had poor UX and desperately needed a redesign.
What did users think of it?
The key problems.
Our current users liked the concept overall but weren't coming back to it repeatedly. I broke down the key insights from the user interviews to better understand user needs. I also gathered users' "magic wishes."
1. Many mistakes
Ingredients were misidentified or missed entirely, forcing users to restart the process.
2. Lots of pictures
The current process caused users to fill up their camera roll with pictures of ingredients.
3. Unintuitive
The process was long and slow, and the UI didn't follow modern design patterns.
Our goal.
1. Intuitive
Our new process needs to be fast and easy. Mistakes should be easy to resolve.
2. Lean
We need a way to "scan" ingredients on the fly without taking pictures.
3. Fun
It's kind of a slog to use ChowTime right now. Our team needs to build a POC that's fun.
Ideation.
Each team member was tasked with bringing 2–3 ideas to our scoping meetings to help solve these problems. The winning idea was a full app redesign with a "live" ingredient-adding experience. We then explored potential "live" experiences.
AR vs Live Recognition
We looked at another recipe app that lets users add ingredients live. It tracked items with the camera but was prone to lag and ghost images. AR anchors content to the real world, making it more seamless and immersive without that lag.
Initial idea for AR in ChowTime
Hover over ingredients and mark each one with a small tag via real-time scanning; the tags themselves would be the virtual AR content. This would require an object recognition model trained by scanning raw 3D ingredients as reference objects.
Technical requirements
We need detection and recognition so our tags can attach to the ingredient itself, and tracking to keep each tag in place even if a user picks the ingredient up or moves to another part of the room. The developers chose a Unity + ARKit combo.
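To make the requirements concrete, here is a minimal native ARKit sketch of the detect-and-tag flow (our actual build went through Unity, so this is illustrative only; the resource group name "Ingredients" is an assumption, not our real asset name):

```swift
import ARKit

class IngredientScannerDelegate: NSObject, ARSCNViewDelegate {
    func startSession(on view: ARSCNView) {
        let config = ARWorldTrackingConfiguration()
        // Reference objects created by scanning real 3D ingredients.
        // "Ingredients" is a hypothetical asset-catalog group name.
        config.detectionObjects = ARReferenceObject.referenceObjects(
            inGroupNamed: "Ingredients", bundle: nil) ?? []
        view.session.run(config)
        view.delegate = self
    }

    // ARKit calls this when a scanned ingredient is recognized; attaching
    // the tag to the anchor's node keeps it in place as the user moves.
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard let objectAnchor = anchor as? ARObjectAnchor else { return }
        node.addChildNode(makeTagNode(text: objectAnchor.referenceObject.name ?? "Ingredient"))
    }

    private func makeTagNode(text: String) -> SCNNode {
        let geometry = SCNText(string: text, extrusionDepth: 0.2)
        geometry.font = UIFont.systemFont(ofSize: 8)
        let node = SCNNode(geometry: geometry)
        node.scale = SCNVector3(0.005, 0.005, 0.005) // shrink to a small floating tag
        return node
    }
}
```

World tracking is what lets the tag persist: the anchor, not the screen position, owns the tag, so it survives the user walking away and coming back.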
Key decision #1.
Time to explore.
Each team member was new to AR, and the developers wanted to build in Unity only after some user validation. So I took a strictly design-only route before anything was implemented, to see whether AR was right for our users.
I started exploring how to design for AR while reworking the user flow and information architecture of the app.
Early design work.
After exploration and initial planning were done, it was time to move onto wireframes and initial designs. I also created a mini design system.
There were two versions of the AR wireframes. We went with Version 2 since it was the less intrusive on screen (especially with multiple ingredients). I also realized that users may need a way to learn how to use AR.
User testing setup.
After the first iterations of the designs were done, it was time to see how our users felt. However, it was really difficult to mimic the AR experience with a static prototype without developing the app. We split our user test into 2 steps for each of the 4 users.
Step 1 — Gauge distance from ingredients
Demoed apps like Measure to familiarize users with AR, asking questions like "How would you scan an ingredient?" and "What do you expect from a tap-free process?"
Step 2 — Use the new recipe generating process
Told users that scrolling in this prototype was equivalent to moving the camera side to side, and asked them to select 3 of the ingredients they had "scanned before."
User testing results.
Key decision #2.
Final designs and decisions.
Impact.
Learnings.
- Augmented Reality ended up being a fun and interactive solution
- Immersion matters — combining the ingredient scanning and viewing into one screen was essential
- I wish I had dived into Unity at the start, as it would've sped up many parts of the design process
Next steps
Hand off to developers and learn how to adapt my design process for AR with Unity, to work fast with devs.
Scan more than just raw ingredients (packaging, barcodes, etc.) and let users set recipe preferences (they may always want drinks, food, etc.)