In the chapter on Definitions, I look at what a survey is and at the concept of Total Survey Error, to help you avoid some of the problems that can make your survey and your survey data unreliable.
I also introduce the Survey Octopus as a way of understanding the stages of a survey, the types of error which can occur at each one, and the ways in which errors are connected and affect the validity of your survey.
The topics in the chapter are:
- A survey is a process
- Total Survey Error focuses on reducing problems overall
- Meet the Survey Octopus
- We’ll aim for Light Touch Surveys
I couldn’t give all the appropriate origins and suggestions for further reading in the chapter, so here they are.
What is a survey?
My definition of a survey is modelled on the one in Survey Methodology (Robert M. Groves, Floyd J. Fowler, Mick P. Couper, James M. Lepkowski, Eleanor Singer, Roger Tourangeau, 2nd Edition, Wiley 2009):
a systematic method for gathering information from (a sample of) entities for the purpose of constructing quantitative descriptors of the attributes of the larger population of which the entities are members. (Groves et al. 2009)
The survey methodologists are mainly writing for people who are, or are training as, survey methodologists, often working in large organisations and sometimes in National Statistical Institutes. Survey methodologists typically work on descriptive surveys, which is why their definition has “for the purpose of constructing quantitative descriptors of the attributes of the larger population of which the entities are members”. They are also likely to survey organisations such as businesses, which is why their definition refers to “entities” – a term that can include both individual people and organisations.
My book is aimed at user experience (UX) professionals, market researchers, and other people who are not survey methodologists, so I wanted to move to a definition that reflects current practice in more informal settings. My changes are:
- “A systematic method” becomes “process”, because the word “process” is more familiar in business settings.
- “Gathering information from” becomes “asking questions that are answered by”, because these days we have a variety of passive methods of gathering information, such as web analytics and customer usage patterns, and to me the essential part of a survey is that it asks questions.
- “Entities” becomes “a defined group of people”, because I wanted to make it clear that people will answer, but not just any people. I return to the topic of how to define a group of people in Chapter 1, “Goals” and again in Chapter 2, “Sample”.
- “constructing quantitative descriptors” becomes “get numbers”. The original definition is clearly more precise, but I found that the word “numbers” worked better in conversations with UX professionals and stakeholders.
- “the attributes of the larger population of which the entities are members” becomes “you can use to make decisions”. This is really quite a departure from the original. The survey methodologists are typically aiming to create quantitative descriptors for the population, and then leave it to the people who will use those descriptors (known in this context as “data users”) to decide exactly how they will use them: for example, for academic purposes, for planning government decisions, or for triangulating with other data sets. So there is an element of decision-making in there, but it’s a long way from the definition. I discovered that when I’m working with people who are doing surveys, it really helps them to do better surveys when I bring “to make decisions” right up into the definition, so that’s what I did.
My definition is:
A process of asking questions that are answered by a sample of a defined group of people to get numbers that you can use to make decisions.
I’ll leave it to you to decide whether my ‘numbers you can use to make decisions’ is a pragmatic approximation of the rest of their definition, or an inappropriate over-simplification. (Jarrett, 2021)
Further reading on survey methodology
If you’re only going to tackle one book on survey methodology, get the one I’ve already mentioned by Groves et al. (Wiley 2009): Survey Methodology is my Bible, written by giants of the survey world, and I highly recommend it. The book is packed with references to take you to the next step in your search. It is perhaps a little light on goals and doesn’t have anything on reporting, but I reference it throughout my book. Stocked by many libraries; expensive new, but relatively easy to find second-hand at an affordable price.
One of the earliest books I read when I became interested in general survey research was Earl Babbie’s Survey Research Methods. It is one of the more academic treatments of survey design, but there are some similarities between the way his process is structured and the seven steps I recommend for designing and running effective surveys.
Total Survey Error in Practice (Wiley 2017) is a more recent compendium of chapters about Total Survey Error in Big Honkin’ Surveys. Starting with an introduction to where the concept of Total Survey Error (TSE) came from, it also has useful chapters on TSE applied to big data, Twitter, and smartphone surveys. Tourangeau contributes a chapter on mixing modes and Kappelhof on the quality of survey data among ethnic minorities. If you’re interested in recent thinking on the concept of TSE, I recommend this book.
I’m also happy to recommend Designing Surveys by Blair et al. (Sage 2014) to anyone who wants an introduction to surveys that is a little more formal than mine. Their process is similar to mine, but it doesn’t really include reporting or much on coding results.
Surveys for specific audiences
I am planning to write a blog post on books I can recommend that deal with particular survey audiences. Meanwhile, one that I found useful when researching my book is How to Conduct Organizational Surveys (Sage 1997).