My sister lives in Melbourne and that naturally makes me want to get to Australia as often as I can. So I was especially pleased when a few things came together and I was able to make it to OzCHI, the Australian equivalent of the British HCI Group conference. It’s a small conference so you can feel that you’ve had a chance to talk to most people, but even so they managed to put together a really interesting programme with plenty in it for practitioners. The proceedings will be available shortly (try the ACM digital library) and you can read my conference paper here.
I’d like to pick out one paper that will change my approach to teaching usability testing.
The paper
Mikael Skov and Jan Stage are at Aalborg University. They teach HCI, and they think about how to teach it better. One problem they encountered was that although it’s pretty easy to get students to perform competently as facilitators in usability tests, it seems to be a lot harder to get them to analyse the data that they gather in the tests.
So they tried a small-scale experiment. They got a tape of a typical usability test. Each participant in their experiment reviewed the tape and recorded the usability problems. Eight of the participants did this review without being taught how to analyse data; six did their review after being taught the use of a simple ‘conceptual tool’.
The participants who had been taught the use of the tool found more problems and were more consistent in what they found.
Mikael Skov, in the modest way of academics, pointed out a few deficiencies in their research: for example, the experiment did not consider positive findings, and the groups were quite small. But still, the results are worth considering. I’m sufficiently convinced by Skov and Stage’s work to take the view that thinking about severity will help students to find more problems and be more accurate in their analysis of those problems.
The analysis “tool”
So what’s in the magic ‘conceptual tool’ that made so much difference?
The primary dimension is severity. They chose three levels: critical, serious and cosmetic.
The secondary dimension (and I suspect the basis of much of the strength of the tool) is four ways of explaining what we might mean by ‘critical’, ‘serious’ or ‘cosmetic’: “Slowed down relative to normal work speed”, “Understanding”, “Frustration” and whether the participant needed to get help from a “Test monitor”.
So, for example, a ‘serious’ problem is characterised as:
- Slowed down: delayed for several seconds
- Understanding: does not understand how specific functionality operates or is activated. Cannot explain the functioning of the system.
- Frustration: is clearly annoyed by something that cannot be done or remembered, or by something illogical that must be done. Believes they have damaged something.
- Test monitor: receives a hint.
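If it helps to see the rubric written down as something students can check each observation against, here’s a minimal sketch in Python (my own, not from the paper) of how the ‘serious’ level might be recorded as data. The name SEVERITY_RUBRIC and the dimension keys are my invention; the ‘critical’ and ‘cosmetic’ levels would be filled in the same way from Skov and Stage’s descriptions.

```python
# A sketch of the 'serious' row of the rubric as plain data, so each observed
# problem can be compared against every dimension in turn. Wording paraphrases
# the descriptors quoted above; the structure and names are mine, not the paper's.
SEVERITY_RUBRIC = {
    "serious": {
        "slowed_down": "delayed for several seconds",
        "understanding": "does not understand how specific functionality "
                         "operates or is activated; cannot explain the "
                         "functioning of the system",
        "frustration": "clearly annoyed by something that cannot be done or "
                       "remembered, or by something illogical that must be "
                       "done; believes they have damaged something",
        "test_monitor": "receives a hint",
    },
    # "critical" and "cosmetic" would be added here in the same shape.
}

def describe(severity: str) -> None:
    """Print the descriptor for each dimension of a given severity level."""
    for dimension, descriptor in SEVERITY_RUBRIC[severity].items():
        print(f"{dimension}: {descriptor}")

describe("serious")
```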
Teach severity scales and their impact as part of analysis
Now, those of us who do this stuff for a living might argue somewhat with aspects of this conceptual tool. For example, I might look for a ‘minor’ category to mop up problems that are a bit more important than ‘cosmetic’ but don’t quite make it into ‘serious’. And I might argue that you need to be slowed down for a bit longer than ‘several seconds’ before a problem becomes ‘serious’.
But I think the big lesson here isn’t the precise details of the tool. I think it’s the importance of teaching beginners an explicit, described way of categorising problems before they try looking for problems. And up to now, I’ve been doing it the other way around: getting them to look for problems, then asking them to determine how severe the problems might be.
I’d probably call it a ‘severity scale’ rather than a ‘conceptual tool’, but I’d aim to include both the classification of the problem and Skov and Stage’s secondary dimensions, which I’ll probably call something like ‘impact’ (suggestions, please?).
Links
Mikael Skov and Jan Stage’s paper: Supporting problem identification in usability evaluations
This article first appeared in Usability News
Image: Severity Scale, by Red Hat, Creative Commons
#usability