Once you’ve chosen your survey method and developed your list of questions, it’s tempting to get right on with the survey – time is probably egging you on too! But wait a minute: would you cater for a wedding party without trying out the recipes first?
However well you and others have thought out your survey, there are bound to be glitches to resolve. Much better to take the time to identify them at the outset than to find them after you have run your full survey.
Whatever your survey method – phone interviews, face-to-face interviews, printed questionnaires or online surveys – here are seven pitfalls you’ll want to avoid.
1. The wording of a question doesn’t work
Interviewers will quickly tell you if the wording of a questionnaire isn’t working, because respondents will have asked for clarification. For self-completion surveys, you can tell you need to improve the wording if you get unexpected responses to a question, or perhaps no responses at all.
For example, it may become apparent that the age bands 18-25, 25-30, 30-40, over 40 overlap, and should instead be 18-25, 26-30, 31-40, over 40.
Other common problems include ambiguous or unclear questions, and the use of abbreviations respondents don’t understand.
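If your answer bands are generated or stored electronically, this kind of overlap can be caught automatically before the pilot even goes out. A minimal sketch (the band values and function name are illustrative, not from any survey tool): each band’s lower bound should start one above the previous band’s upper bound.

```python
def bands_overlap(bands):
    """Return True if any consecutive pair of (low, high) bands
    overlaps or leaves a gap."""
    for (_, prev_high), (low, _) in zip(bands, bands[1:]):
        if low != prev_high + 1:
            return True
    return False

faulty = [(18, 25), (25, 30), (30, 40)]  # 25 and 30 each appear twice
fixed = [(18, 25), (26, 30), (31, 40)]

print(bands_overlap(faulty))  # True
print(bands_overlap(fixed))   # False
```

The same check also flags gaps between bands (e.g. 18-25 followed by 27-30), which are just as easy to miss by eye.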
2. A question is out of sequence
Some questions build on each other, or follow a logical sequence.
For example, if Q10 asks ‘Which of the following benefits did you receive from the scheme?’, it should come after Q12, which asks ‘Which scheme did you participate in?’
3. The routeing doesn’t work
If your questions involve routeing, for example, ‘If No, please go to Q10’, there is potential for a mistake, especially if the questionnaire has been through a number of drafts with changes in the numbering of questions.
Online surveys allow more sophisticated routeing (called ‘logic’ in SurveyMonkey), and consequently offer more scope for this type of error.
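For a printed questionnaire, routeing instructions can be checked mechanically if the questions are held as text. A minimal sketch, assuming a hypothetical questionnaire stored as a dict of question numbers to wording, with routeing written as ‘go to Q<n>’ (the questions and helper name are made up for illustration):

```python
import re

# Hypothetical questionnaire: question number -> question text.
questions = {
    1: "Have you used the scheme? If No, please go to Q10",
    2: "Which scheme did you participate in?",
    3: "Which benefits did you receive? If none, go to Q12",
}

def broken_routes(qs):
    """Return (question, target) pairs where a 'go to Q<n>'
    instruction points at a question that doesn't exist."""
    missing = []
    for num, text in qs.items():
        for target in re.findall(r"go to Q(\d+)", text):
            if int(target) not in qs:
                missing.append((num, int(target)))
    return missing

print(broken_routes(questions))  # [(1, 10), (3, 12)]
```

Running a check like this after each redraft catches routeing targets left dangling by renumbered questions.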
4. Questions overlap
If you get responses like, ‘see Q3 above’, you probably need to rationalise overlapping questions.
For example, if you ask ‘What suggestions do you have to improve the scheme?’ and later ask ‘Do you have any further comments?’, then when it comes to analysis, suggestions will appear under both questions. The solution might be to amend the second question to ‘Do you have any further comments, not covered by previous questions?’, or, if there were few responses, to delete the second question altogether.
5. Generating hundreds of ‘Other’ responses to categorise
If you devise questions with multiple-choice responses (which are good because they are quick to analyse), it’s often necessary to include an ‘Other’ box at the end. A pilot is a good opportunity to review the ‘Other’ responses and extend the list of multiple-choice options. This saves a lot of time categorising responses later on.
6. Your survey data can’t easily be analysed
As well as collecting data, a pilot should include analysing the data, to check that the results make sense. For example, analysis of your pilot data may show there are very few responses to one of the choices in a multiple-choice list, so it should be excluded.
7. There’s a glitch in the logistics of your survey method
A university piloted its online student survey, and wondered why only males had responded. It turned out the email list of students had a filter by sex, and this had been ‘turned on’ when the pilot list was sent out!
However straightforward your survey method may seem, it’s only by testing that you can be sure it works. In my experience the glitches are usually small things, but they could have significant consequences.
The bottom line: Piloting your survey will help you resolve any glitches, saving time in your main survey, ensuring your results are robust, and perhaps avoiding an error which would invalidate your survey results.

© May Johnstone, 2009, Project Perspectives.co.uk. Please feel free to circulate this article provided it is used in its entirety, including this acknowledgement.