7 Myths About Social Media Surveys Debunked
Social media surveys are the newest addition to consumer research methodologies. While social networks have been around for a while, research methodologies have largely stayed within the traditional spectrum of online panels, phone interviews, and intercept interviews.
Being the new kid on the block naturally raises some eyebrows and hesitation around surveying consumers on social networks. Fear no more! We are here to debunk some myths about it. We’ve been doing this for more than six years now, and we know exactly how it works.
What has really changed is the avalanche of users who log in to social networks every day. With this unprecedented volume came large amounts of user data, giving researchers unparalleled access to socio-demographic information, interests, geographic location, language, and even how users log in.
Think about it for a second: there are 3.8 billion social media users (and counting). That’s nearly half of the world’s current population, and that’s where consumers are spending their time these days. To put it in perspective: people spent 1.75 trillion hours on their mobile phones over the past 12 months, half of that time using social and communications apps. And just to be clear: 99% of social media users access these platforms from their mobile devices.
With this level of engagement and interaction, targeting consumers via social channels not only provides access to large groups of people, but also allows researchers to do so with great accuracy.
This is a common misconception we often hear about consumer research via social networks: that the data is harvested from users’ profiles rather than given willingly. In reality, our data comes directly from the answers people provide in our questionnaires.
We use the big data that social media platforms offer to target people with our surveys and to make sure the targeted population sees our advertisements until we reach our desired number of participants. All the information we receive is given voluntarily by participants through our questionnaires and is anonymized during processing, so we can provide precise data without compromising privacy.
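To make the anonymization step concrete, here is a minimal sketch of one common approach: replacing direct identifiers with salted one-way hashes before analysis. The field names and the per-study salt are illustrative assumptions, not Potloc’s actual pipeline.

```python
import hashlib

def anonymize(record, id_fields=("name", "email")):
    """Replace direct identifiers with truncated salted SHA-256 hashes.

    Illustrative only: field names and the salting scheme are assumptions,
    not a description of any specific production pipeline.
    """
    salt = "per-study-secret"  # assumption: one secret salt per study
    out = dict(record)  # leave the original record untouched
    for field in id_fields:
        if field in out:
            digest = hashlib.sha256((salt + out[field]).encode()).hexdigest()
            out[field] = digest[:16]  # pseudonym: stable within the study
    return out
```

Because the hash is deterministic within a study, the same respondent maps to the same pseudonym (useful for deduplication), while the answers themselves pass through unchanged.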
Let the numbers speak: according to the World Economic Forum, Baby Boomers (55+) are showing the greatest increase in activity on social media platforms. For example, usage of Instagram and WhatsApp is up 59% and 44% respectively for this group since 2016, more than double the global average. While Millennials and Generations Z and X still represent a large chunk of social media users, older generations are joining social networks in large numbers.
As an example, here’s the split on Facebook alone (which has a penetration rate of 60% among global internet users):
Facebook usage by age group:
- 51% of 13–17 year olds
- 76% of 18–24 year olds
- 84% of 25–30 year olds
- 79% of 30–49 year olds
- 68% of 50–64 year olds
- 46% of 65+ year olds
Furthermore, even though 32% of Baby Boomers don’t post on social media, they are more than twice as likely to engage with branded content as social media users aged 28 or younger.
As an example, we ran a study for a retirement home in the Toronto area. The eligibility criterion was being over 65 years old. We filled our quota very quickly, and the oldest respondent in the survey was 93 years old!
The short answer here is that our questionnaires are non-incentivized. People answer them because they want to. Since there is no monetary compensation, there is also no incentive to fill out a survey twice. Let’s be honest: very few people have the time to do this (which is also why we keep our questionnaires short, to avoid respondent fatigue). It’s very rare, but it can happen: when we run A/B tests, someone might be targeted twice with our survey during the test. Even if someone did decide to answer a social media survey more than once, each social media profile is attached to a unique email address, so it is quite easy for us to remove duplicates or disqualify respondents once all responses are collected. So even if someone answered the survey 100 times, it wouldn’t affect our quota samples. Outside of our A/B tests, we also apply “audience exclusion,” which automatically removes anyone who has already answered the survey.
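The duplicate-removal step described above can be sketched in a few lines, assuming each collected response carries the email address tied to the respondent’s profile (the field names are illustrative):

```python
def deduplicate_responses(responses):
    """Keep only the first response per unique email address.

    `responses` is a list of dicts with an "email" key; this schema is
    an assumption for illustration, not an actual export format.
    """
    seen = set()
    unique = []
    for response in responses:
        # Normalize before comparing, so "A@x.com " and "a@x.com" match.
        key = response["email"].strip().lower()
        if key not in seen:
            seen.add(key)
            unique.append(response)
    return unique
```

Keeping the first occurrence preserves the respondent’s original answer; any later duplicates are simply dropped before quotas are computed.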
When it comes to bots, we have tight quality-control measures to spot and flag them. However, since bots are primarily deployed to collect rewards and make money, we rarely encounter them: there is no monetary gain or compensation offered for filling out our surveys.
Targeting niche populations can be very challenging, especially with traditional methods. Online panels, for example, need to go fishing for panel members who match the niche criteria, then pre-survey them to be sure they are asking the right questions to the right person. With social media targeting, there is no need to pre-screen people: they can be reached directly based on their interests, socio-demographic profile, and many other factors. In fact, at Potloc we’ve had many panel companies reach out to us to help them acquire respondents in niche populations they struggle to reach. Combining interest-based targeting with tools like geo-targeting and geo-fencing facilitates this process, allowing us to reach people in any area.
We’ve seen concerns about social media targeting when it comes to location. For example, if you have New York City listed as your location on your social media profile but you live in Boston, it is possible that our surveys targeting NYC pop up in your feed while you are in Boston. This is called default targeting by location (Facebook, for example, allows researchers to advertise their surveys based on the city listed on your profile). However, we know better: people move and forget to change their location, or they split their time between two or more cities. While we allow everyone to answer our surveys, we can disqualify those who are not in the targeted location at the moment of the survey. How? We check the respondent’s IP address to make sure they are answering from the right location, and we re-validate the information by simply asking for postal code and location details directly in our surveys.
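The cross-validation idea (IP-derived location checked against the postal code a respondent reports) might look like this minimal sketch. The lookup tables here are stand-ins for a real IP-geolocation service and postal database; the IPs, prefixes, and function names are all illustrative assumptions:

```python
# Stand-in lookup tables; a real pipeline would query a geolocation
# service and a postal-code database instead.
IP_TO_CITY = {"203.0.113.7": "Toronto", "198.51.100.9": "Boston"}
POSTAL_PREFIX_TO_CITY = {"M5V": "Toronto", "021": "Boston"}

def is_in_target_location(ip, postal_code, target_city):
    """Qualify a respondent only when both location signals agree.

    Hypothetical check: the IP-derived city AND the city implied by the
    reported postal-code prefix must both match the targeted city.
    """
    ip_city = IP_TO_CITY.get(ip)
    postal_city = POSTAL_PREFIX_TO_CITY.get(postal_code[:3].upper())
    return ip_city == target_city and postal_city == target_city
```

Requiring both signals to agree errs on the side of disqualifying, which matches the article’s point: everyone may answer, but only respondents verifiably in the targeted area count toward the quota.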
We’ve also seen claims that geo-targeting on social media only offers a radius of 17 km. The truth is that we can go as narrow as a 1 km radius and as wide as a whole country. Geo-targeting allows us to reach people within a city block, and we combine this with default location targeting (the city listed on your profile) and targeting by postal code to make sure we are reaching the right people in the right place (we also simply ask people to validate their location). If we add geo-fencing, we can also reach anyone who enters that location with their mobile device, a great tool for surveying people who transit through a certain area, attend an event, or work there but live somewhere else.
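A radius check like the 1 km geo-targeting described above boils down to a great-circle distance test. Here is a self-contained sketch using the standard haversine formula (the coordinates in the usage note are examples, not study data):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    r = 6371.0  # mean Earth radius in km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def within_radius(point, center, radius_km):
    """True if `point` lies within `radius_km` of `center` (both (lat, lon))."""
    return haversine_km(*point, *center) <= radius_km
```

For instance, downtown Toronto (43.65, -79.38) is well outside a 100 km radius centred on Boston (42.36, -71.06), so a Boston-targeted study would exclude that respondent.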
Today, 49% of the world’s population is on social media, making these networks the greatest consumer panel to date. This unprecedented access to consumers everywhere reduces the chances of our samples being biased to the point where the quality of our insights would suffer.
Every consumer research study has some sort of bias. Whoever tells you otherwise is lying. So how do we deal with biases in social media research? We acknowledge them! We know which biases we face and exactly how to address them. There are four types of survey bias when we launch a study:
- Coverage bias: Since we use social networks to target consumers, respondents must meet certain conditions: they need internet access, a social media account, and active usage. With 3.8 billion people registered on social networks, coverage is quite extensive. Still, coverage bias affects populations that are less active online, so we might see an under-representation of men, older people, and people with less education or lower socioeconomic status.
- Ad algorithm bias: Social networks’ advertising algorithms are set up to minimize cost-per-click (CPC). They push our survey ads primarily to the least expensive audiences, which can lead to an under-representation of men and older people.
- Cognitive load bias: Answering a 6–8 minute survey online is demanding from a cognitive standpoint, so some people might find the task too difficult to complete. This can result in an under-representation of older people and people with less education or lower socioeconomic status.
- Self-selection bias: Unlike web panels, we have to communicate the subject of the survey. People who click on our ads have an interest in that specific subject (since we can target them based on that interest). And we never offer any incentives to respondents: people who complete our surveys do it because it matters to them that their voice is heard (sharing opinions is the fourth most common reason people use social media). So which do you think is worse: respondents naturally interested in the subject, or respondents seeking incentives? We think this actually increases our data quality.
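One practical way to keep the under-representations listed above in check is to track quotas per demographic cell as responses arrive, and keep advertising to the cells that are still short. A minimal sketch (the field name and cell labels are illustrative assumptions):

```python
def quota_remaining(responses, quotas, field="age_group"):
    """Return how many more responses each demographic cell still needs.

    `quotas` maps a cell label (e.g. "65+") to its target count; the
    `field` key is an assumed schema for illustration.
    """
    counts = {}
    for response in responses:
        cell = response.get(field)
        counts[cell] = counts.get(cell, 0) + 1
    # Never report a negative remainder for over-filled cells.
    return {cell: max(target - counts.get(cell, 0), 0)
            for cell, target in quotas.items()}
```

Cells with a non-zero remainder are the ones where targeting budget should keep flowing, which is how over-sampling hard-to-reach groups counteracts coverage and ad-algorithm bias.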
All methodologies have biases; few are transparent about them. Addressing survey bias head-on, by sampling enough people to ensure you hit the targeted quotas, is essential. This is why being conscious of these myths, and knowing how to address them, is essential for the success of any study. Working with a team of experts who can help you launch your consumer research using social media sampling is a great way to guarantee results that will have a meaningful impact on your business. To learn more about this methodology or find answers to other questions, visit our website at Potloc.com or contact us here.