The most important issue facing a country today versus the most important issues facing a country today
I am interested in measuring both the most important issue facing a country today and the most important issues facing a country today; the latter is of course plural. I have given this some thought and figured that an initial question, "What is the most important issue facing [country] today?", where the respondent gives a single answer, will capture the former convincingly. However, the issues question (call this question two) needs some attention.
With question two, suppose we ask "What are the most important issues facing [country] today?" and the respondent gives three issues that are important to her, say X, Y, Z, having already said W was the most important issue. How do we compare the most important issue with the second most important issue without recourse to a ranking system for the second question? Is a ranking system necessary, or would the comparison be based on the number of respondents giving X, Y, or Z as an important issue? Note that we are conducting this survey face to face, so the respondent states the answers unprompted.
Suppose Y was stated by 401 respondents out of 1,000, with X = 400, Z = 100, and other issues A = 60, B = 20, and so on. Can we compare Y with W as the "most" and "second most" important issues, without recourse to a ranking system, irrespective of the order in which respondents mentioned Y? For example, 401 of the 1,000 respondents may have said Y after X when asked question two. So although the number of respondents saying Y is numerically higher, it was their second response to question two. Counting responses this way assumes respondents allocate the same weight to X, Y, and Z; should we not instead give some weight to the order of their responses, i.e., rank the issues in the order they were mentioned? Empirically, the interviewer could note the order in which X, Y, Z were mentioned as 2, 3, 4 (with W = 1), rather than simply circling or ticking X, Y, Z on the survey instrument.

We can test this by running a survey with the rank-order method and then comparing the results: count how many first, second, and third preferences each issue received, attach a weight to each preference (say 1.0, 0.66, 0.33 for first, second, third), calculate a score, and compare that against a simple count of the number of responses for each issue. What I am interested in is whether the issue that receives the most first preferences always receives the most responses, irrespective of rank order. In effect we could end up with three ways of interpreting the results: (1) the plain-vanilla method, adding up the number of responses without regard to the order in which they were mentioned; (2) counting only first preferences, ignoring second and third preferences, for the purpose of comparing question one and question two; and (3) attaching weights. The last is the most complicated, since we need to come up with a weighting scale (I have simply used 1/number-of-issues to derive the weights above). I cannot see any problem with using the order in which the responses are mentioned as the ranking: I can't think of a reason why a respondent would state their second most important issue last, for example. A small sketch of the three methods follows.
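To make the comparison concrete, here is a minimal Python sketch of the three tallying methods described above. The tiny sample of responses is made up purely for illustration; only the 1.0/0.66/0.33 weights come from the scheme I described.

```python
from collections import Counter, defaultdict

# Each respondent's answer to question two, as an ordered list of issues,
# with order of mention taken as the rank. Hypothetical sample data.
responses = [
    ["X", "Y", "Z"],
    ["Y", "X"],
    ["Y"],
    ["Z", "Y", "X"],
]

WEIGHTS = [1.0, 0.66, 0.33]  # 1/number-of-issues style weights for ranks 1-3

plain = Counter()              # method 1: count every mention, ignore order
first_only = Counter()         # method 2: count first mentions only
weighted = defaultdict(float)  # method 3: rank-weighted score

for mentions in responses:
    for rank, issue in enumerate(mentions[:3]):
        plain[issue] += 1
        if rank == 0:
            first_only[issue] += 1
        weighted[issue] += WEIGHTS[rank]

print("plain counts:   ", dict(plain))
print("first prefs:    ", dict(first_only))
print("weighted scores:", {k: round(v, 2) for k, v in weighted.items()})
```

Running all three tallies on the same recorded data is cheap, so nothing stops us from reporting them side by side and seeing where they disagree.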
I did some research on how Gallup conducts its polls: they actually ask respondents to rank the issues in order (this may be specific to the particular survey I found). There are also specific questions where the respondent says how important the issues are. But that may be a paper-based survey, so it is not quite the same as a face-to-face survey. That brings us to the next issue: which scales to use. Winiewski and Bilewicz (1999) say this is one of the most important considerations in the design; for example, it is less than intuitive to map an ordinal scale onto a numeric scale or vice versa. On this point, I was wondering if it would make intuitive sense to ask a respondent for a numeric score from 0-100 for an issue X, with 0 meaning not important at all and 100 very important. (Some recent papers on political trust from the U.S. use the 0-100 scale.) A numeric score is more dynamic and can be used to "make your own" scales (this insight is from a personal communication with a professor, although we were referring to indicators or indexes), and it can be used to compare across surveys, geographical locations, and time.
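As a toy illustration of the "make your own scales" idea, a 0-100 score can be collapsed into a coarser ordinal scale after the fact. The bin edges below are my own assumption, not taken from any published instrument.

```python
def to_five_point(score_0_100: float) -> int:
    """Map a 0-100 importance score onto a 1-5 ordinal scale
    using equal-width bins (an illustrative choice, not a standard)."""
    if not 0 <= score_0_100 <= 100:
        raise ValueError("score must lie in [0, 100]")
    return min(5, int(score_0_100 // 20) + 1)

for s in (0, 19, 20, 55, 99, 100):
    print(s, "->", to_five_point(s))
```

The reverse move, stretching a 5-point response onto a 0-100 scale, is the less-than-intuitive direction, since it forces us to invent precision the respondent never gave.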
If the survey is conducted as a paper-based postal survey or online, the next question (question three) could ask the respondent to rate the ranked issues (Y, X, Z) from question two, or all four issues (W, Y, X, Z) from questions one and two. The respondent would rate each issue by allocating points out of 100, with the allocations summing to exactly 100. So, for example, the respondent allocates W = 50 and Y = 50, or W = 34, Y = 33, X = 33, or any combination like this, as long as the total is 100. This question serves three purposes: it confirms that the ratings match the ranking given in the previous question; it checks whether the issues mentioned in question two match the rated issues (as a way of confirming responses, so it will probably work best in an online survey where the respondent cannot go back and change the previous answer); and it prevents the respondent from scoring every single issue at 100 (Converse and Pierce, 1986). The question could, for example, ask: "For every 100 hours the government spends on the most important issues you mentioned in question two, how much time would you like to see the government spend on EACH of these issues?" (Using money rather than time would probably bias the responses towards assigning a higher or lower score to a particular issue.)
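Here is a hedged sketch of the two mechanical checks question three enables: that the allocations total exactly 100, and that the rating order agrees with the rank order from question two. The function name and data are hypothetical, not from any survey package.

```python
def check_allocation(ranked_issues, allocations):
    """ranked_issues: issues in the order mentioned in question two.
    allocations: issue -> points out of 100 from question three."""
    if sum(allocations.values()) != 100:
        return "invalid: points must total exactly 100"
    # Order the rated issues from highest to lowest allocation.
    rated_order = sorted(allocations, key=allocations.get, reverse=True)
    if rated_order == list(ranked_issues):
        return "consistent: ratings match the stated ranking"
    return f"inconsistent: ranked {list(ranked_issues)}, rated {rated_order}"

print(check_allocation(["W", "Y", "X"], {"W": 34, "Y": 33, "X": 33}))
print(check_allocation(["W", "Y", "X"], {"W": 20, "Y": 50, "X": 30}))
```

In an online survey the sum-to-100 rule could be enforced at entry time; on paper, a failed check simply flags the response for inspection.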
So question three will be an addition to the survey (for paper and online surveys only; it is probably tough to do in a face-to-face or telephone survey), while questions one and two stay the same for face-to-face surveys; only the way the response to question two is recorded changes. This allows continuity of "the most important issue facing [country]" and "what are the most important issues facing [country] today" in their entirety and preserves comparability across surveys. Question three is of course inspired by the work of Converse and Pierce (1986). Critiquing their own study, Converse and Pierce (1986, p. 244) say: "we have indicated, with the benefit of hindsight, our regret at not having obliged our respondents to rank the issues in terms of their relative priority".
All this may in effect amount to nothing if every respondent gives only one issue for question two. In that case, any comparison between questions one and two, and their interaction through time to gauge differences between the most important and second most important issue, could be done with the plain-vanilla method: add up all the responses to question two and compare. However, as soon as at least one respondent gives more than one response to question two, the mean number of responses to question two exceeds 1. Where that mean exceeds 2, the plain count is likely to produce an even more inaccurate measure of the second most important issue, and hence of any index comparing the most and second most important issues, if we do not use a ranking or weighting method.
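A quick simulation makes the divergence concrete, using hypothetical numbers echoing the earlier example: Y collects the most total mentions (401) but mostly as a second mention, while X (400) is mostly mentioned first, so the plain count and the first-preference count crown different issues.

```python
from collections import Counter

# Hypothetical response lists for question two; order of mention = rank.
responses = (
    [["X", "Y"]] * 350   # X first, Y second
    + [["Y", "X"]] * 50  # Y first, X second
    + [["Y"]] * 1        # Y as the only mention
    + [["Z"]] * 100
)
# Totals: Y = 350 + 50 + 1 = 401 mentions, X = 400, Z = 100.

plain = Counter(issue for r in responses for issue in r)
first = Counter(r[0] for r in responses)

print("plain counts:", plain.most_common())  # Y leads on total mentions
print("first prefs: ", first.most_common())  # X leads on first mentions
```

Here the mean number of responses per respondent is well above 1, and the two methods already disagree about which issue ranks second overall.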
References
Converse, P. E., & Pierce, R. (1986). Political representation in France. Cambridge, MA: Belknap Press of Harvard University Press.
Winiewski, M., & Bilewicz, M. (1999). Are surveys and opinion polls always a valid tool to assess antisemitism? Methodological considerations. American Psychologist, 54, 93–105.