To this day, the only way to know what people think is to ask them. That is why opinion polls are a key tool in social research. But they have become much more than that: they are used for everything and, as the principle of action and reaction suggests, a negative attitude towards them is taking hold. One could say that being “anti-poll” is becoming part of our identity.
It is increasingly difficult to find people willing to answer the questions of those of us who study social reality. As if that were not enough, responding requires attention, and paying attention creates the feeling that time passes very slowly. In a context of mental overstimulation like the current one, dedicating effort and time to a task is unsatisfying, so surveys must last only a few minutes. That is, questionnaires must be brief, which makes deeper analysis impossible.
Today, polls that measure voting intention seem to have become a political weapon (fuel for the fire in which attitudes towards them burn). There are many of them, but not all are worthwhile, and not all the worthwhile ones are equally valuable. Even if it is not what we usually hear, the value of a survey does not depend on whether it says what we want to hear (confirmation bias). Nor does it depend on who conducts it.
Sample size and selection
To ensure that a survey is valid (it measures what it claims to measure) and reliable (it measures it well), it is necessary to pay attention to the size of the sample and how it was obtained. Of all the polls discussed in the media, only three provide this information: that of the CIS (Centro de Investigaciones Sociológicas), that of 40dB for El País, and that of GAD3 for the digital outlet NIUS and the Mediaset group. The image that follows this paragraph compares the three.
[Image: comparison of the three surveys. Author provided]
The size is easy to interpret: the more people respond, the better. The sampling error depends on it; it defines the limits within which the true values are likely to lie. In other words, if 23% of those who respond to the CIS survey say they would vote for the PSOE in hypothetical elections, the true value probably lies between 21.4% and 24.6%.
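An interval of this kind comes from the standard margin-of-error formula for a sample proportion. A minimal sketch in Python; the sample size used here is an illustrative assumption, not the survey's actual figure:

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a proportion p observed among n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical sample size for illustration only.
p, n = 0.23, 3000
moe = margin_of_error(p, n)
print(f"{p:.1%} ± {moe:.1%}")
```

With about 3,000 respondents, 23% carries a margin of roughly ±1.5 points; larger samples narrow the interval in proportion to the square root of n.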
To obtain their samples, the CIS and GAD3 generated telephone numbers at random, verified that they were valid and then selected among them, again at random. This guarantees that anyone living in Spain can be contacted. By contrast, 40dB used a sample of people who chose to be part of its panel. Unfortunately, it provides no information on how that panel was created.
The danger of selection biases
The quality of the information provided by surveys is threatened by selection biases, and the negative attitude towards polls is turning them into a real danger. Their effect depends on whether the decision to participate influences the responses.
The differences in the intention to vote between the different surveys may be related to this bias.
In the 40dB sample there are two possible points of selection bias: one when people decide to join the panel, another when they decide to answer the survey.
The attacks on CIS polls by parties opposing the Spanish government may have led right-wing sympathizers to refuse to participate in its studies. This hypothesis can be tested by comparing how respondents define themselves ideologically (GAD3 does not provide that data). On a scale from 0 (far left) to 10 (far right), the mean is 4.71 in the CIS and 4.74 in 40dB. The comparison allows us to maintain the hypothesis that there are no ideological differences between the samples.
The stated probability of participating in the next elections is high in both surveys (the averages are 8.41 and 8.11 respectively), and it is significantly higher in the CIS sample.
The CIS asks whether respondents voted in the 2019 elections: 85.4% answered affirmatively. 40dB asks which party they voted for in 2019: 12.5% say they did not vote, so 87.5% did. Actual turnout was 75.75% in the April 28 elections and 69.88% in those of November 10.
There are two possible explanations for the difference between actual turnout and recall: social desirability bias (saying what we think is expected of us) or selection bias (respondents are more interested or involved in politics and really did vote in 2019).
This hypothesis can be tested indirectly by refining the sample (removing those who leave questions unanswered, since not answering would act as an indicator of low involvement) and checking whether the percentage of people who say they voted in 2019 changes.
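The refinement step can be sketched as follows, on toy data with a hypothetical structure (the real microdata and question names are not reproduced here):

```python
# Each respondent: answers to the questionnaire plus 2019 vote recall.
# None marks an unanswered question (toy data, hypothetical values).
respondents = [
    {"q1": 1, "q2": 0, "voted_2019": True},
    {"q1": 2, "q2": 1, "voted_2019": True},
    {"q1": None, "q2": 1, "voted_2019": False},
    {"q1": 1, "q2": None, "voted_2019": True},
    {"q1": 2, "q2": 0, "voted_2019": True},
]

def voted_share(rows):
    """Share of respondents who say they voted in 2019."""
    return sum(r["voted_2019"] for r in rows) / len(rows)

# Keep only respondents who answered every question.
complete = [r for r in respondents if all(v is not None for v in r.values())]
print(voted_share(respondents), voted_share(complete))
```

On this toy data, the share of self-reported voters rises once incomplete responses are dropped, which is the pattern the refinement is meant to detect.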
The 40dB survey has 8 questions. If we filter out those who do not answer all of them, we lose 265 people, and the percentage who say they voted rises to 90.4%. The CIS includes a notable number of questions; if the filtering process reduces the sample by about 1,000 people, the percentage rises to 92.1%. The differences are statistically significant. Therefore, both samples seem to show a selection bias.
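The significance of a gap between recalled and actual turnout can be checked with a one-proportion z-test. A hedged sketch: the sample size below is a hypothetical round number chosen for illustration, since the test requires knowing n:

```python
import math

def one_prop_ztest(p_hat, p0, n):
    """z statistic testing an observed proportion p_hat against a known value p0."""
    se = math.sqrt(p0 * (1 - p0) / n)
    return (p_hat - p0) / se

# Hypothetical sample size for illustration only.
# Recalled turnout 87.5% vs actual April 2019 turnout 75.75%.
z = one_prop_ztest(0.875, 0.7575, 2000)
print(round(z, 2), "significant" if abs(z) > 1.96 else "not significant")
```

Any |z| above 1.96 is significant at the 5% level; with a gap this large, even much smaller samples would yield a significant result.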
Some closing reflections
- Based on objective criteria, the CIS survey provides more information and higher-quality data.
- Only three surveys provide the minimum information needed to assess whether they measure what they claim to measure and measure it well; only two of them also allow us to verify it.
- There are no differences in the ideology of those who respond to the CIS and 40dB surveys. In both, a selection bias seems to be present, probably due to the negative attitude towards surveys, and it affects both in the same way.
- Selection bias is related to the topic being analyzed. People less interested in or involved with political and social reality are not represented and, therefore, the results cannot be generalized.
- Difficulties in generalizing may cause election results not to match prior estimates. That will not mean the polls are wrong; it will simply mean they are not infallible. The more we abuse them and the more negative attitudes we generate, the more fallible they will become.