Digital products are designed for use. Even simple, text-based websites are consumed by users with a task in mind. Mostly they want to find something specific. Sometimes they might want to get in touch with a real person to ask a specific question, or apply for a job. Even 'just surfing around' or 'reading' requires a user to navigate.
Modern digital design practice has user research at its core. Understanding the needs, motivations and behaviour of our users means that we can design and deliver the best experience to them. However, sometimes those needs may be in conflict with organisational goals, product roadmaps, or, as I indicated just now, perceived wisdom and stereotypes. It's our business to challenge those falsehoods with insight and evidence from real people, and to place the user first in our priorities.
As we start to scale our digital design efforts towards a new EMBL.org and an organisation-wide Intranet, I want to outline our approach and the methods we use, as well as address some of the issues we will face along the way.
There are a lot of different ways we can test our digital products with users. We could analyse our website traffic. We could set up user flows within our content and follow our users as they work through these flows to complete a task. We could set up automated multivariate testing to try different variations of an interface and see which performs more effectively. The array of tools, services and methods is quite dazzling.
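To give a flavour of the quantitative end of that toolkit, here is a minimal sketch of how the results of a simple two-variant test could be compared once the numbers are in. It assumes Python with SciPy, and the variant labels and counts are entirely hypothetical; a real set-up would come from whichever testing or analytics tool we end up using.

```python
# Minimal sketch: comparing task-completion rates for two interface variants.
# All numbers below are hypothetical and for illustration only.
from scipy.stats import chi2_contingency

# Counts of [completed task, did not complete task] per variant
variant_a = [130, 870]   # e.g. the current page design
variant_b = [165, 835]   # e.g. a redesigned page

# Chi-squared test of independence on the 2x2 contingency table
chi2, p_value, dof, expected = chi2_contingency([variant_a, variant_b])

print(f"p-value: {p_value:.3f}")
if p_value < 0.05:
    print("The difference in completion rates is unlikely to be down to chance.")
else:
    print("No clear evidence that one variant performs better than the other.")
```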
But we are a small team with a relatively small budget, and the biggest barrier is time. User research takes time. Not only that, but the logistics typically take twice the time of the actual testing. In the user research industry this has resulted in a new field: Research Ops.
In my opinion, the most effective tool in our user research toolkit is the 'depth interview' (it goes by numerous names: user interviewing, contextual enquiry, task-based usability testing, etc.). This is the go-to method for researchers across the industry because it provides the biggest bang for your buck.
This is how these interviews are run:
1. We get a representative sample of users, preferably based on what we already know about our audience. Not what we *think* we know, but what we *actually* know. This is the first hurdle: getting to 'knowing' can in itself be very difficult.
2. The second hurdle is the 'representative' bit. Sometimes these people are hard to find. They live all over the world and speak different languages. We can use recruitment agencies to help us, but there is a process and a cost associated with that – sometimes weeks, and hundreds of euros in fees, per participant.
3. Once we have all that, we decide what in the product we would like to test. It could be how the information is organised, or whether people can use search effectively. We then ask the users to complete these tasks while we watch, listen and probe.
4. The skill in conducting interviews like this is the people part. Typically we'll have 30 minutes to build rapport, set the tasks, enquire and probe (but not lead), and actively listen. It can take many years of experience to conduct successful interviews.
5. Once we have conducted all the interviews, we synthesise the results, typically by affinity sorting (grouping similar insights together) into actionable recommendations. These are then fed back into the design process.
6. We iterate the design based on the recommendations and we test it again. Sometimes with the same people, sometimes with different people.
Now, let’s outline some common issues that user research helps with:
Perhaps the biggest issue I've seen in my career of designing digital products is the assumptions we make about our users. Empathy with users from all walks of life takes experience and is a honed skill. Even after doing this for over twenty years, I continually have to snap myself out of making sweeping assumptions about who our users are and what they may want.
Real people – not fictitious users such as personas or audience segments – are complicated! They are clouded by preferences, insecurities, and their world-view. How they see themselves, and the context they are in – whether physical, environmental or emotional – *always* has an impact on what they tell you. Sometimes what they do is not what they say. Good user researchers are tuned into this. Most of us, generally, are not.
The insights we get from users help us make decisions about what is important. These insights directly inform a product roadmap and shape what is finally released.
And one for the scientific community…
We may use the word 'research' for what we're doing. It's a broad term and can be misleading, especially in a place like EMBL. But design research is not the same as scientific research, and the process of building digital products is not the same as the scientific process, either.
User research is not about proving or disproving a scientific hypothesis. It's much more grey than that, because people, context, and the methods used make that kind of proof almost impossible. User research allows us to explore the problem space and get closer to understanding. It's about being more certain and confident that our digital products meet, or exceed, our users' expectations.
User research makes it easier to deliver on users' needs and, in doing so, ultimately deliver on the organisation's needs.
I’m Mark and I lead the digital communications team here in Strategy and Communications. I’ve been a designer, agency owner, startup owner and manager for 20-odd years now. During that time I’ve conducted, observed and planned many user research activities, from large-scale market segmentation through to weekly product design interviews. My experience ranges from expensive, controlled, lab-based studies through to ad-hoc guerrilla methods. These days, I spend time on either side of the activity: organising research, or getting involved in the insight analysis and figuring out how it fits with our broader digital strategy.