Skills: UX, UI, Wireframing, Product Strategy, User Research, Usability Testing, Prototyping
Time: April-May 2021
Tools: Figma, Miro, Notion, Maze, Typeform
- When looking for specific information on a topic online, people often struggle to find exactly what they are after.
- The information they do find is often too specific, too generic or just not what they need.
- This causes immense frustration and confusion.
The solution was to design new features for improved content personalisation and filtering. Through a new (and previously unseen) onboarding process, users can input their topics of interest. These topics are then presented within the new filtering feature as easily edited and selected pills, which change the content the user sees. I also introduced more thoughtful ways to organise conversation cards.
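To make the pill-based filtering concrete, here is a minimal sketch of the logic it implies. The card titles, topic tags and function name are all hypothetical illustrations, not taken from the actual product:

```python
# Hypothetical data: conversation cards tagged with topics, plus the
# topic pills the user selected during onboarding or editing.
cards = [
    {"title": "Breaking into product design", "topics": {"Design", "Careers"}},
    {"title": "Raising a seed round", "topics": {"Startups", "Finance"}},
    {"title": "Learning Figma fast", "topics": {"Design", "Tools"}},
]

def filter_cards(cards, selected_pills):
    """Show only cards matching at least one selected topic pill."""
    if not selected_pills:  # no pills selected: show everything
        return cards
    return [c for c in cards if c["topics"] & selected_pills]

for card in filter_cards(cards, {"Design"}):
    print(card["title"])
```

Selecting or deselecting a pill simply re-runs the filter, so the feed updates without the user having to search or scroll manually.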
What am I looking at?
To get a good foundational understanding of the product, I conducted a rapid usability review. By doing this, I quickly identified user pain points within the app, as well as critically evaluating the current version more generally. Anyone was still in a closed testing phase and was not yet a finished, shiny product - as you can tell from the red (pain point) to green (wow moment) post-it ratio!
I think I see something!
After the usability review, it was time to make some observations. What did I see when I peered into the app? I noticed that all the conversation topics were always available (perhaps 100+ in total), and there was no way for a user to filter them in line with their personal interests. The user was expected to do a huge amount of manual searching and scrolling, and this was the problem I decided to frame through my observation:
Key: when [situation], users [response], which causes [problem to business or experience].
Asking real people
I used my observation to guide the next activity: getting feedback from real, potential users via a user survey (on Typeform). I needed to see if real people would confirm or disprove my original observation, as well as uncover any further pain points. As this was in a bootcamp environment, the surveyed individuals were not real users of the platform; rather, I evaluated the survey insights as if they had come from platform users. The responses resoundingly confirmed what I observed - 90% said they struggled to find relevant information on specific topics.
“Finding the niche topics is frustrating. In some cases it's easy to find an answer or tutorial, in others it simply doesn't exist.”
“It’s difficult to find exactly what I need.”
“Seeking expert knowledge on any topic can be frustrating.”
Turning ‘what’ into ‘why’
I had gained a glimpse of the user's perspective, and the frustration they feel when they can't find the exact information they need. But why? Why did this happen in the first place? To understand this, I created an affinity map to synthesise the user survey data, pull it apart and see if any explanatory insights would emerge. A notable finding was the apparent difference in the types of people seeking expert or specific information, which I organised as 'career learner' vs 'skill learner'.
I can definitely see something!
As themes emerged out of the affinity mapping process, I realised I could - with a reasonable degree of confidence - validate my initial observation about the problem facing users. Namely, when looking for knowledge, they were frustrated by the inability to refine results to their specific needs and interests. They often ended up with information that wasn't what they were looking for: either far too specific or far too generic. This experience left them feeling confused and frustrated.
I captured this in the form of a frustration, to better understand specific pain points and build empathy with the user.
I estimated that providing a solution to this specific frustration would offer the greatest potential value to both user and business.
A question to incite ideation
Before jumping into ideation, I had to properly frame the opportunity. I did this by generating a 'how might we' statement, which would structure and bring direction to the subsequent ideation phase. 'How might we' statements are excellent because they assume a solution exists and that we can find it together. Remember: the statement must be broad enough to allow a wide range of solutions, but narrow enough to give some helpful boundaries.
Juicing my brain
I took my 'how might we' statement and tried to create as many solutions as possible. By conducting a series of ideation techniques (mind-mapping and crazy ideas), I came up with a healthy range of crazy, creative ideas that addressed the unmet user needs identified in the define phase. This was a creative safe space - no idea was too far-fetched!
Following this, I prioritised the ideas based on user value, business value, effort and time. This step allowed me to identify the sweet spot, the goldilocks zone, in which I decided on how I would address the unmet user need - by designing a better way to filter and personalise the app content.
To converge again and formally bring my focus back to the user and - crucially - business goals, I developed a hypothesis: effectively an educated guess about how I could solve the problem for users.
Key: Users goal, Business goal.
Ready, steady, sketch
With my hypothesis in mind, I rapidly sketched solutions. This helped me quickly map and understand the current product, and consider how I could iterate on it directly.
Sketches to skeletons
With the sketches done, I needed to accurately visualise the designs in low fidelity wireframes, which would act as the skeleton for my solution.
To avoid any decision bias I used a neutral colour palette, then used the Figma Autoflow plugin to quickly convert the static pages into a clickable prototype with transitions and interactions. Following this, I got internal feedback from the design instructors to affirm the design's effectiveness.
Styles and components
The low fidelity prototype helped me recognise frustrations with the experience, which I improved at the high fidelity stage. To create the high fidelity prototype, I inspected the product's style and followed the 8pt rule so my designs stayed consistent with the product styling. Before building it, I defined styles and components to help me design quickly and consistently.
High fidelity prototype
Here is the final version of the prototype.
With the high fidelity prototype created, I wrote a testing script with a scenario and tasks for users to complete, so I could validate the prototype with real users. To run the test I used Maze, gathering feedback after every task.
In addition to the written feedback from the testers, I also looked at Maze's 'Usability Score', which measures the ease of my maze by calculating key performance indicators: mission success & duration, test exits, and mis-clicks.
Overall completion rates of mission tasks:
- 90% success (80% direct, 10% indirect) and 10% give up/bounced
- 50% success (25% direct, 25% indirect) and 50% give up/bounced
- 71.4% success (57.1% direct, 14.3% indirect) and 28.6% give up/bounced
Maze Usability Score:
- 5/100 (massively let down by the 50% bounce rate)
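For transparency on how the completion percentages above combine, here is a small sketch of the arithmetic. The raw tester counts are hypothetical back-calculations (e.g. 7 testers for the third task), not figures reported by Maze, and this does not reproduce Maze's proprietary Usability Score formula:

```python
def task_rates(direct: int, indirect: int, bounced: int) -> dict:
    """Convert raw tester counts into Maze-style completion percentages.

    Overall success is direct plus indirect completions; the remainder
    gave up or bounced.
    """
    total = direct + indirect + bounced
    pct = lambda n: round(100 * n / total, 1)
    return {
        "direct": pct(direct),
        "indirect": pct(indirect),
        "success": pct(direct + indirect),
        "bounced": pct(bounced),
    }

# Hypothetical counts matching the third task's reported percentages:
print(task_rates(direct=4, indirect=1, bounced=2))
# → {'direct': 57.1, 'indirect': 14.3, 'success': 71.4, 'bounced': 28.6}
```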
The results from my unmoderated test were mixed and filled with learnings.
Firstly, the specific prototype design. It seems that many users would have used a search bar from the outset, particularly with some of the 'find X topic' tasks. As this flow wasn't designed in the prototype, it generated many mis-clicks when users tried - and failed - to interact with the search bar. This undoubtedly damaged the Usability Score. Quotes:
"I assumed I had to search in the bar"
"[it needed a] search function at the top to streamline process."
I should look at improving the overall topic selection flow, which also led to a high mis-click rate (33% on task 2). Currently, users must scroll to the intended topic (e.g. Business) and then select 'see all' to view all the topic conversation cards. Maybe this isn't the best solution. Maybe instead, there is one card per topic on the main page, which users must select. Once on that specific topic page (again e.g. Business), they can search with filters (e.g. recommended, popular etc).
Additionally, the tag filtering functionality could be enhanced. Rather than 'update', the CTA could have been a larger button that reads something like 'add new topic'. This would have made the function a bit clearer to the user.
Aside from the specific product design, user feedback showed that some of the issues actually arose from the mission task language, which I had tried to make as non-leading as possible. It seems that in my attempt to write in non-suggestive language, the tasks themselves became unclear and hard to follow. Quotes:
"Slightly unclear question for the quiz itself..."
"Question itself was worded in a slightly confusing way..."
"Slightly unclear questions..."
It is slightly frustrating, as these answers suggest better language would have led to a much more successful test. This will be a big lesson for next time.
There was also confusion over Maze itself. In the spirit of the test being 'unmoderated', I provided the minimum amount of guidance and advice. But in hindsight, I should have explained a little about how Maze works and how the user is expected to interact with the prototype. Quotes:
"the app interface but it was unclear that I was meant to be interacting with it"
"no indication that the screens were scrollable"