Chapter 2 is about turning the ‘defined group of people’ you want to ask into the people you actually ask – your sample.
The topics in the chapter are:
- Some of the people you ask will decide not to answer
- Response rates vary by the way you deliver your questionnaire
- Response depends on trust, effort, and reward
- Decide how many answers you need
- Find the people who you want to ask
- The right response is better than a big response
The errors associated with this chapter are:
- Coverage error, which happens when the list that you sample from includes some people who are outside the defined group that you want to ask or excludes some people who are in it.
- Sampling error, which happens when you choose to ask some of the people rather than everyone (there’s a small simulation of this after the list).
- Non-response error, which happens when the people who respond are different from the people who don’t respond in ways that affect the result of the survey.
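To make sampling error concrete, here’s a minimal Python sketch (my own illustration, not from the book) that draws repeated samples from a hypothetical population and shows how the estimate wobbles around the true value purely because of who happened to be picked:

```python
# Sampling error illustrated: repeated random samples from the same fixed
# population give different estimates of the same true proportion.
# The population and numbers are hypothetical.
import random

random.seed(42)

# Hypothetical population: 10,000 people, 30% of whom hold some opinion.
population = [1] * 3000 + [0] * 7000
true_proportion = sum(population) / len(population)

# Draw five random samples of 100 people and estimate the proportion.
for i in range(5):
    sample = random.sample(population, 100)
    estimate = sum(sample) / len(sample)
    print(f"Sample {i + 1}: estimate = {estimate:.2f} "
          f"(true value = {true_proportion:.2f})")
```

Run it with a few different seeds and you’ll see the estimates scatter around 0.30; that scatter is sampling error.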
I couldn’t give all the appropriate origins and suggestions for further reading in the chapter, so here they are.
I found various sources of data on response rates
In the book, I published my own rules of thumb about typical response rates. Here are some of the sources that helped me to create those rules.
NSIs can get response rates over 65%
Nearly every country in the world has a national statistical institute (NSI) and many of them are great at publishing their response rates. I’ve chosen three countries here: the USA, the Netherlands and Australia.
So far as I’ve been able to find out, the US is the only country that has many NSIs rather than a single one; two of the best known are the Bureau of Labor Statistics and the US Census Bureau. The Census Bureau publishes response rates for its surveys. For example:
- The American Community Survey – response rates are in decline but even so the response rate achieved is above 90%.
- The Annual Survey of Public Employment and Payroll aims for response rates above 70% and ‘names and shames’ states that fail to achieve this.
Statistics Netherlands publishes many of its reports in English. In this report, they say that they typically get response rates of about 65%:
- Response enhancing measures for social statistics (Statistics Netherlands)
The Australian Bureau of Statistics managed to improve its response rates between the 2006 and 2011 Censuses, bucking the general trend of declining response rates. They got over 95% response (non-response under 5%) in all states and territories other than the Northern Territory – an area with few roads and a relatively high proportion of Indigenous people who may move around – where they still managed over 92% response (non-response under 8%):
Some academic surveys can get high response rates
One of the biggest academic surveys is the European Social Survey which “measures the attitudes, beliefs and behaviour patterns of diverse populations in more than thirty nations”. It is run by a consortium of academic institutes and universities. The 2016 response rates varied from 31% in Germany to 74% in Israel. You can find all the data sets starting in 2002 on their website:
It is harder to find response rates for other surveys
The other estimates that I have quoted in the chapter are based on personal experience. From time to time, a market research business or survey tool vendor publishes an article on response rates – and then takes it down again. I wonder why. Could it be that the continually declining response rates are a worry for them? Who knows.
Your own response rates are more reliable than my rules of thumb
In the book, you’ll find that my last word on surveys is ‘iterate’. That applies here, too: the best way to find out what the typical response rate is likely to be for your survey is to run a pilot survey. You’ll find more about pilot surveys in chapter 7, “Fieldwork”, in the book.
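Once you have a pilot response rate, the arithmetic for scaling up to the full survey is simple. Here’s a hedged sketch: all the numbers are hypothetical, and in practice you should pad the answer because the pilot rate is itself an estimate from a small sample.

```python
# Hypothetical pilot arithmetic: if the pilot's response rate carries over
# to the full survey, how many invitations does the full survey need?

pilot_invited = 200
pilot_completed = 46
pilot_response_rate = pilot_completed / pilot_invited  # 0.23

target_completes = 400  # however many answers you decided you need

# Simple scale-up; pad it in practice, because the pilot rate is uncertain.
invitations_needed = target_completes / pilot_response_rate

print(f"Pilot response rate: {pilot_response_rate:.0%}")
print(f"Invitations needed for ~{target_completes} completes: "
      f"{invitations_needed:.0f}")
```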
Satisficing and how it relates to perceived effort
In the book, I give my own interpretation of why people respond to surveys, which I began adapting from Don Dillman when I read the 2000 edition of his book describing his Tailored Design Method for surveys. There’s more about the history of his method, and links to the editions of his book, here:
There’s a further consideration about perceived effort that I decided not to include in the chapter for reasons of space: ‘satisficing’.
Satisficing means choosing an outcome that is ‘good enough’ rather than aiming for the best or optimal outcome.
The word ‘satisficing’ was originally coined by Herbert Simon in the 1950s.
When answering surveys, people satisfice when they feel that the effort of ‘answering properly’ is disproportionate to the reward. It’s rarely a conscious decision, but frequently a lack of attention to detail and maybe a feeling of ‘I’m going to rush through this to get to the end’. For example, in her book “People Aren’t Robots: A Practical Guide to the Psychology and Technique of Questionnaire Design”, Annie Pettit describes ‘open box satisficing’ where people give answers such as ‘N/A’ or ‘no idea’ to open box questions.
Extreme satisficing can become drop-out.
To reduce or avoid satisficing in your survey:
- Make it as short as possible. Think carefully about whether the value of that extra question or questions really justifies the increased risk of poor-quality answers or drop-out
- Test it with people in your defined group. Listen to what they say, and cut the survey down accordingly
Some market researchers recommend including a check question that will detect whether people are continuing to answer thoughtfully – Annie Pettit gives the example of ‘select the second answer’ as an entry in a grid question. She argues that this is always a bad idea: the people who answer thoughtfully think that the researcher made a mistake in the questionnaire design and are bewildered, and the people who are satisficing won’t see it anyway.
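If you’d rather look for satisficing after fieldwork than risk a check question, a simple text screen over open-box answers can flag candidates for human review. This is a minimal, hypothetical sketch: the phrase list and word-count threshold are my own assumptions, not Annie Pettit’s.

```python
# A hypothetical screen for 'open box satisficing': flag open-text answers
# that are empty, boilerplate ('n/a', 'no idea', ...) or very short.
LOW_EFFORT = {"n/a", "na", "none", "no idea", "nothing", "idk", "-", "."}

def looks_satisficed(answer: str, min_words: int = 3) -> bool:
    text = answer.strip().lower()
    if not text or text in LOW_EFFORT:
        return True  # empty or boilerplate answer
    return len(text.split()) < min_words  # suspiciously short answer

answers = ["N/A", "no idea", "The checkout kept rejecting my postcode"]
for a in answers:
    print(f"{a!r}: {'flag for review' if looks_satisficed(a) else 'keep'}")
```

Treat the flags as prompts for a human to take a look, not as grounds for automatic deletion.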
Leslie Kish wrote the definitive text on survey sampling
If you need to investigate survey sampling in greater depth than I was able to cover in the book, then I recommend starting your search with Leslie Kish. His book Survey Sampling (Wiley, 1995) was first published in 1965, so there has clearly been a lot of material published on the topic since then. However, if you want to get a sense of one of the most influential thinkers in the world of sampling, this comprehensive book – still in print – is for you. It’s widely available in libraries or second-hand, too, if the £123 / US$143 for a new copy is a bit steep for you.
If you start a literature search with Kish, you’ll find plenty of material to consider – including a collection of Kish’s own papers, some of them helpfully gathered in Leslie Kish – Selected Papers (Wiley, 2003). You can get a sense of the history of the field from “The Hundred Years’ Wars of Survey Sampling”, which is included in the collection.
It’s important to keep sampling errors in perspective, as Kish himself points out:
“Are sampling errors necessary and sufficient, when other survey errors, such as measurement, are often potentially larger but unknown? Yes they are necessary, though not sufficient. And they become relatively more important for small subclasses and for comparisons and other analytical statistics.” Kish, L. (2003). “The hundred years’ wars of survey sampling.” Leslie Kish Selected Papers: 5-19.
From here on, this page is a collection of notes. Please contact me if you would like me to expand on any of it.
Small samples create strange patterns
Kish talks about “small subclasses” and you’ll find that all statisticians are very wary of small samples.
It can be difficult for some of us to grasp the idea that the patterns we see in small samples arise from the sample size, not necessarily from the underlying distribution. This visualisation from Rick Wicklin’s blog on ‘Sampling variation in small random samples’ shows a set of small samples drawn from a normal distribution: plotted as histograms, the samples look like all sorts of things. Plotted as xy charts they look less variable, but still full of apparent outliers (which are not outliers at all).
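If you’d like to see the effect for yourself, a few lines of Python reproduce the spirit of the demonstration (this is my own sketch, not Wicklin’s code):

```python
# Small samples from the *same* normal distribution produce histograms that
# look wildly different from one another and from the textbook bell curve.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
fig, axes = plt.subplots(2, 4, figsize=(10, 5), sharex=True)

for ax in axes.flat:
    sample = rng.normal(loc=0, scale=1, size=15)  # just 15 observations
    ax.hist(sample, bins=6, edgecolor="white")
    ax.set_xlim(-3, 3)

fig.suptitle("Eight samples of n=15 from the same N(0, 1) distribution")
plt.tight_layout()
plt.show()
```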
Response rate and representativeness
Elizabeth Martin, a leading survey methodologist, tackled the problem of representativeness in her 2004 Presidential Address to the AAPOR (American Association for Public Opinion Research):
“Low response rates do not mean that nonresponse bias is present, but they leave surveys more vulnerable to its effects if it is present” (Martin 2004).
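A tiny simulation makes Martin’s point vivid. In this hypothetical population, 40% of people are ‘satisfied’, but satisfied people are twice as likely to respond, so the respondents-only estimate is biased no matter how many people you invite:

```python
# Non-response bias illustrated: when responders differ from non-responders,
# the estimate is biased regardless of the number of invitations sent.
# All numbers here are hypothetical.
import random

random.seed(7)

# Hypothetical population: 40% 'satisfied' (1), 60% not (0).
population = [1] * 4000 + [0] * 6000
true_mean = sum(population) / len(population)  # 0.40

def responds(person: int) -> bool:
    # Assume satisfied people are twice as likely to respond.
    return random.random() < (0.30 if person == 1 else 0.15)

respondents = [p for p in population if responds(p)]
observed = sum(respondents) / len(respondents)

print(f"True proportion satisfied:      {true_mean:.2f}")
print(f"Response rate:                  {len(respondents) / len(population):.0%}")
print(f"Estimate from respondents only: {observed:.2f}  (biased upwards)")
```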
Setting up an interview
For ideas about how to do interviews, start with Andrew Travers’s book Interviewing for research. It’s full of practical, thoughtful advice, and has the benefit that he decided to make it free to download a few years ago.
It’s always important to check that whoever you involve in research is comfortable with what you’re planning to do. The UK local government organisation Hackney Council has a consent form that is clear and can be adapted to a variety of types of research: Understanding how best to ask for consent from user research participants.