Case study by Attila Miklya, research by Attila Miklya & Fanni Szilvás

  1. Context. Why?

In the fall of 2019, the Demola Budapest research team was approached by Peter Pelles, an early-stage startup founder working on a new web app. The client presented a clear solution to a customer problem we could both resonate with; however, he was looking for direction on how to segment a beachhead market or find a niche userbase to grow on.

<aside> 💡 The web app is a link and file aggregator tool that, through cross-integration with all your internal and external communication and cloud storage accounts, indexes materials to increase accessibility. We use many channels for communication with the same people and often forget which channel was used to send a specific material.

</aside>
Even though the core of the value proposition was straightforward and technical, the user journey, potential contexts of use, and states of mind of the potential users were unknown. Understanding these aspects would present opportunities to improve market fit.

2. Methodologies. How?

In this project I worked together with Fanni, my psychologist colleague at the innovation lab, with whom I had built up the research service line of the lab. I took on client management and research lead responsibilities.

We implemented the project in 2 (plus one) sprints, closing in on the target group in a stepwise manner. In the first sprint, we took a diverse sample of people from 5 different occupations and chose the 2 that were the best fit. In the second, we zoomed in on those 2 personas and described them to the client through use case clusters (JTBDs).

Research framework: Jobs-To-Be-Done (JTBD)

Even though JTBD was developed primarily for product development and customer discovery in corporate incremental innovation, the framework describes useful ways to segment the market and find underserved needs, which helps prioritize product strategy. The JTBD framework is a great fit in this case because at the pain-point level users face very similar barriers: they want to find a file urgently and confuse communication channels. At the JTBD level, however, the use cases can be really diverse: urgency may come from being in a meeting, wanting to appear cool in front of a boss, or fearing the loss of a stack of documents they promised to deliver.

Source: Jillian Wells

Sprint 1 methodologies.

  1. Scoping, desk research.

    The main challenge of this project was to segment the potential userbase of such a general product. We built a list of 5 protopersonas based on occupation and decided to do a heuristic validation with each to find the ones worth pursuing further. The protopersonas were:

    In addition, we wanted to better understand the as-is process of searching for files and links and the psychological context of these use cases. We also wanted to get an impression of whether these situations are minor nuisances or major roadblocks for users.

    We formulated explorative research questions for these topics and developed an interview guideline to cover them all.

2. Recruitment and data collection

In-depth interviewing is usually the first choice for generative research; however, in this study we wanted to cover as many personas as possible. So we decided to do microinterviews: low-effort, 15-30 minute interviews that cover the research questions heuristically.

We recruited friends of friends via the communities we were engaged in. Generally, microinterviews are easier to recruit for because of the smaller commitment.

The driving dynamic of these microinterviews was "haystack moments": we asked participants to recall situations when they were stuck with finding a specific file.

We discovered each participant's "haystack moments" by first asking general questions about their work routines and the tangible outputs of their work. Some of the interviews (depending on rapport) also led to a re-enactment of these "haystack moments": we asked these participants to find specific files they had mentioned before and comment on the process. This format offered a more fine-grained picture of the as-is user journey.

3. Data analysis, delivery.

We synthesized the interview transcripts to evaluate the fit of each occupation-based protopersona. Based on these transcripts, we decided for each user whether they would use the product or not. Aggregating these decisions across users and groups surfaced 2 personas that were a better fit than the others (creative professionals and managers).

To reduce the subjectivity of the results, the two researchers made independent decisions about each user. We wrote down the use case for each user based on the microinterview transcripts. After this independent work, we discussed our decisions user by user, marked our agreements, and resolved our disagreements. This qualitative analysis was inspired by psychometric best practices (e.g., inter-rater reliability).
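The case study does not report a reliability statistic, but agreement between two raters making independent would-use/would-not-use calls is commonly quantified with Cohen's kappa, which corrects raw agreement for chance. A minimal sketch; the rating lists below are invented for illustration, not the study's actual data:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters labeling the same items."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of items where the two raters match.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement, from each rater's label frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[label] * freq_b[label] for label in freq_a) / n**2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical "would this participant use the product?" decisions.
rater_1 = ["yes", "yes", "no", "yes", "no", "no", "yes", "yes", "no", "yes"]
rater_2 = ["yes", "no",  "no", "yes", "no", "yes", "yes", "yes", "no", "yes"]
print(round(cohens_kappa(rater_1, rater_2), 2))  # → 0.58
```

With 8/10 raw agreement but a chance-agreement baseline of 0.52, kappa lands at 0.58, which is why discussing and resolving disagreements after independent rating (as we did) matters more than raw agreement alone.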

Results were delivered in an 8-page summary, with clear recommendations about next steps and some open questions about the product strategy.

Sprint 1 results and impact.

We found 2 promising occupations to focus our work on and recommended examining these personas in depth (see Sprint 2).

Beyond occupation-specific segmentation, we found personality characteristics and preferences connected to the pain points. For example, some people kept saving every asset to their hard drives out of a fear of missing out. Because such insights describe occupation-independent characteristics, we assumed they predict product use.

Besides these key findings about the target group, we also delivered a more detailed as-is user journey. User comments and our observations from the hands-on file search task led us to discover hidden pain points that users might find hard to reflect on in an interview (such as the high cognitive load caused by long lists of irrelevant results).

Sprint 2 methodologies.