MORE DO’S AND DON’TS OF DIARY STUDIES

Almost two years ago, after seven months of repeating diary studies, I shared my Do’s and Don’ts of Diary Studies. After taking the lead on two more diary studies at the beginning of 2020, I’d like to add a few more tips to help ensure your diary studies are as effective as they can be.

Before I get into the additional do’s and don’ts, let me give you a little context around these diary studies. Both studies were in support of the launch of a new fitness device and its companion mobile app. While my focus was on the companion app, my colleagues led the first of what would be two rounds dedicated to the hardware device. After the client had time to address the issues and findings from that first round, I led a second diary study concentrating only on the hardware device. The goal of all three of these diary studies was to assess the readiness of the device and app before the product’s launch. We wanted to understand customers’ overall experience: starting with their initial education about the device, continuing through the purchase, delivery, and setup processes, and concluding after one month of using the device and app. Additionally, we sought not only to evaluate the usability of the app, but also to gather feedback on the features and content users encountered while using both the app and the device. In short, there was a lot to study!

Illustration of a key finding from my companion app diary study. Created by Paige Doolin.

Using our team’s previous experience and knowledge around diary studies, we were able to deliver three insightful and impactful studies that helped the client confidently launch their product. In fact, at the end of our research, the company’s COO stated, "I want you to know the IMPACT of all of your work, because it's really important. Based on this research, we have decided to release the device to market. This gave us the confidence to do that, and we already have X deposits and Y devices in homes today." (The exact numbers need to remain confidential.)

DO

Supplement the study with observational (or qualitative) interviews - Diary studies do a great job of gathering quantitative and qualitative data over time (great for assessing attitudes and behaviors), but because this data is self-reported by participants, it often doesn’t tell you the complete picture. For the companion app study I designated two small groups that would participate in observational or qualitative interviews in addition to the diary tasks they completed throughout the study. For the first group, I observed them going through the initial onboarding and first exploration of the app. For the second group, I interviewed them halfway through the study, having them walk me through how they use the app and asking specific questions about their use of key features. These sessions revealed several critical difficulties with the onboarding process as well as discoverability issues for numerous features. Without these sessions we most likely would not have identified these problems, because participants cannot report that they can’t find a feature they don’t know exists.

Repeat the study after fixes have been made - Repeating research can further validate findings from previous rounds, but it also uncovers issues or pain points that went undiscovered in the first round(s). The key to that second point is taking time between studies to fix the issues discovered first. Once those fixes are in place, the next set of research participants will help you identify the remaining problems or pain points because they are no longer tripped up by the ones that have been fixed (something I discussed in a previous post). We realized this benefit when we repeated our diary study focused on the hardware device after the client had addressed many of the insights and findings from the first round. In the first round, participants were unclear on many of the onboarding steps because those steps could not be accessed via the device's display. Participants in the second round saw these steps on the display and sailed smoothly through the onboarding process. Because they were able to quickly and easily set up and start using their device, they had an opportunity to discover that the content was not organized as they would have preferred, a finding that was not uncovered in the first round, perhaps because those participants were just happy to have finally made it through the onboarding process. While not all issues can be addressed between rounds of research, repeating the study can help validate those found in the previous round(s), which further drives home the need to correct those earlier discovered problems.

DON'T

Ask too many questions - Yes, I mentioned this in my first post, but I think it’s worth mentioning again, this time on behalf of the researcher. Make sure you consider the number of questions you ask as well as the number of tasks participants perform; it’s up to you to look through all of these data points and report out your findings. For my diary study focused on the companion app, I had numerous stakeholders who each had their own research questions, all of equal importance according to the client. I quickly learned that I was a little overzealous in my attempt to accommodate all of their needs. While I feel I was able to deliver on their core questions, I know there was still unexamined data holding more insights, but time constraints kept me from analyzing everything I was collecting. So, take time to prioritize what’s most important and leave the less critical questions for a second or third round of research.

Wait to report findings until the end - Even after refining our study protocol to focus on the most critical questions for the second round of our device study, we still had an enormous amount of data coming in because we had approximately 75 participants spread across three US cities. To help tackle this large amount of data and to keep the client informed of what we were learning, I delivered weekly topline reports that focused on key steps in the overall customer experience (remember, our study started with initial education about the device, continued through the purchase, delivery, and setup processes, and concluded after one month of using the device). This proved to be a great way to divide the data into logical, manageable chunks and allowed us to report issues and findings quickly and directly to the key stakeholders responsible for specific steps in the customer journey. Because we delivered on this weekly routine, the client could quickly begin implementing fixes and changes aimed at improving the overall user experience. Additionally, this weekly rhythm gave us great insight into how numerous metrics were tracking week after week; in fact, we saw steady improvement in nearly all key metrics, including participant satisfaction, during the second round of our device diary study.

Much like the previous ‘Do’s and Don’ts’ I shared, these came from my desire to conduct effective and impactful research studies. Hopefully you can utilize them in your next diary study. And, as every study is different, you’ll likely discover other ways to run more effective and insightful research.

⬡ ⬡ ⬡

Do you have experience running diary studies? Please share your ‘Do’s and Don’ts’ by leaving a comment below. Thanks!