These past two weeks saw our broadest and longest user study to date. This post covers wrapping up the study, analyzing the results, and nailing down the final presentation and deliverables for July 31st!
Diary Study - We had 20 study participants across 4 different tracks. Each participant visited a new website daily to use a variety of checkout flows. The 8-10 day study wrapped up last week, and we conducted post-interviews over the following two weeks. This week we have been analyzing the results, both qualitative and quantitative, and distilling the insights that will go into the UX Guidelines we plan to deliver to the client.
UX Guidelines - This sprint we have also been developing the categories of the UX Guidelines and designing the website that will communicate them. It appears roughly half will be informed by the diary study, and the other half will come from the prior 6 months of research. The difficulty with designing the website is that it helps to have the content we're going to show, but that content is still in progress. We have a site map and mockups, and will gradually fill them out as the findings from the diary study come together.
All four tracks (A, B, C, and D) spent the first three days using our website checkouts as the “normal” web checkout, requiring the customer to enter all their info - name, shipping, billing, and credit card details - into a web form and submit it. We alternated the types of websites and pre-selected generic objects for participants to check out - a pillow, a basket, a mug, sandwiches, a camera.
Narrative and reasoning of each daily prototype:
The first three days were simplified checkouts where the user was expected to type in their provided credit card details by hand. This was designed to get users into the rhythm of the study without testing variables right away.
Day 1, Bed Bath & Beyond:
Simple checkout on Bed Bath & Beyond → Live Prototype Here
Day 2, Whole Foods:
Simple checkout on Whole Foods → Live Prototype Here
Day 3, Walmart:
Simple checkout on Walmart → Live Prototype Here
Day 4, H&M:
On the fourth day, every track experiences the first “autofill” day (simulating using Continuous Authentication for the first time). This is where the tracks start to diverge.
Day 4 - Track A - Autofilled, no explanation: → Live Prototype Here
These users are never told what is going on. No mention of any change to their authentication flow, them being enrolled in any new program, or changes to the payment process whatsoever.
Our hypothesis was that this kind of data collection could be feasible, potentially in non-EU markets, but that customers may be sketched out when their info autofills without explanation. This rang true: people were weirded out by the autofill on day 4.
Day 4 - Track B - NOT Autofilled, Opt-in: → Live Prototype Here
These users are politely asked if they would like to opt in to a service, Mastercard Instant Checkout, that pre-fills their data. Their data on Day 4 is NOT prefilled (the only track where it isn't), and they must opt in via a pop-up bubble that pitches the service as more convenient and secure. In the post-interview no one could remember whether they had opted in, and regardless, we autofilled their data anyway. These users did mention the Mastercard branding, the checkmarks we added, and the “encrypted” text as reassuring.
Day 4 - Track C - Autofilled, opt-out: → Live Prototype Here
Track C users were very much like Track B, except we assumed they wanted the service and opted them in from the start on day 4. Their info was prefilled, and they could opt out if they liked. No one chose to opt out, and this track had the highest aggregate comfort score across the entire study.
Day 4 - Track D - Autofilled, with progress bar=100%: → Live Prototype Here
Unlike the previous three tracks, Track D is fully aware of what Continuous Authentication is, having been fully educated up front. For the first few days, their authentication profile was “learning,” and a green progress bar showed how complete it was. It reached 100% on day 3, and they used the autofill feature on day 4. However, we cannot say their overall comprehension of the system was any better than the other tracks', and in some cases it was worse. Surprisingly, they had the most varied comfort scores, along with some notable misunderstandings of the progress bar.
Day 5, Bed Bath & Beyond:
Plain Autofill → Live Prototype Here
For day 5, everyone gets the same checkout with the information autofilled.
Day 6, Whole Foods:
First Step Up → Live Prototype Here
This day they are buying an expensive camera on Walmart.com. Everyone gets some type of step-up, a prompt for further authentication. We tested various types of step-ups, from a standard emailed code to more out-there experiments. Most people verbally said these made them feel more secure, like Mastercard was watching out for them, while paradoxically selecting lower numbers on the comfort scale. We think this is because some of the experimental step-ups were not very robust on different devices and were hard to use.
Day 7, Whole Foods:
Plain Autofill → Live Prototype Here
Another plain day of autofilling.
Day 8, H&M:
Second Step Up → Live Prototype Here
We tested another type of step-up with everyone, albeit on a cheaper item around $25.
Tested Step-Up Examples
Email Pincode → Live Prototype Here
This is the most typical step-up of the bunch and the one in common use today: a pincode or link is sent to the user via text or email.
Circle the Donut → Live Prototype Here
Users with this step-up identified “their” donut out of an array of 6 donuts. When prompted with the array, they had to select their predetermined donut. This plays off a Microsoft interaction called Picture Password.
Draw a Star → Live Prototype Here
Using mouse or touchscreen input as a unique factor, users were prompted to draw a star in a canvas box. Some academic research suggests that drawing like this could serve as a unique identifying factor.
Swipe through Gallery → Live Prototype Here
To test “browsing behaviour” as a step-up, this one shows a gallery of images and asks users to swipe through them to browse. We realized too late that there were some functional compatibility issues between mobile and desktop.
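Of the step-ups above, the emailed pincode is the only one with a well-established server-side shape. As a minimal sketch of how such a single-use, expiring code might be issued and checked - the function names, in-memory store, and five-minute TTL are our assumptions for illustration, not part of the prototypes:

```python
# Sketch of an emailed-pincode step-up flow. Names and the TTL are
# illustrative assumptions, not taken from our prototypes.
import secrets
import time

PIN_TTL_SECONDS = 5 * 60  # codes expire after five minutes
_pending = {}  # email -> (code, issued_at); a real flow would use a datastore

def issue_pincode(email):
    """Generate a 6-digit code; a real flow would email it to the user."""
    code = f"{secrets.randbelow(1_000_000):06d}"
    _pending[email] = (code, time.time())
    return code

def verify_pincode(email, attempt):
    """Single-use check: the code is consumed whether or not it matches."""
    entry = _pending.pop(email, None)
    if entry is None:
        return False
    code, issued_at = entry
    if time.time() - issued_at > PIN_TTL_SECONDS:
        return False
    # constant-time comparison to avoid leaking digits via timing
    return secrets.compare_digest(code, attempt)
```

The single-use, expiring design matters for the “more secure” feeling participants reported: a leaked code is only good for one attempt inside a short window.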
After the participants completed the checkout process, they filled out a quick survey about their experience: Daily Survey [Google Forms]
With 20 participants, pre- and post-interviews, and the daily survey responses, we had tons and tons of data to sift through. The post-interviews are where most of the meat came from, and we embarked on another large affinity-diagram process across tracks and days.
We also have aggregated graphs of each track and their comfort levels per day. There are definitely some interesting insights here that we’re still analyzing.
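As a toy illustration of the kind of aggregation behind those graphs - the track/day/score rows below are invented for the example, not real study data:

```python
# Sketch: averaging daily comfort scores per track. The sample
# responses are hypothetical, not actual survey results.
from collections import defaultdict
from statistics import mean

# (track, day, comfort score) rows, as they might come out of the survey export
responses = [
    ("A", 4, 2), ("A", 4, 3), ("A", 5, 3),
    ("C", 4, 5), ("C", 4, 4), ("C", 5, 5),
]

by_track_day = defaultdict(list)
for track, day, score in responses:
    by_track_day[(track, day)].append(score)

averages = {key: mean(scores) for key, scores in sorted(by_track_day.items())}
for (track, day), avg in averages.items():
    print(f"Track {track}, day {day}: mean comfort {avg:.1f}")
```

Grouping by (track, day) like this is what lets us chart each track's comfort trajectory across the study and spot divergences such as Track C's high scores.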
Overall, this was a great last sprint to end on and an educational experience from a research side. We learned tons about asking the right questions at the right time, setting up the right number