What should be done about problematic online content? Whether we are talking about hateful online comments, extreme political rhetoric or false information about elections, it’s widely recognised that there are problems which need addressing when it comes to content on online platforms. But nobody has a clear answer to how to deal with them.
In this article, we explore public opinion about these issues, looking at two areas:
First, people’s thoughts about whether the platforms themselves should be responsible for their own content policies or whether governments should intervene (who should be responsible for content policies online?)
Second, people’s thoughts about whether or not platforms should be responsible for false information posted by users (who should be responsible for false or misleading content posted online?)
Public opinion on these issues matters because the users of online platforms are the ones impacted by low-quality, misleading, or even violent content. This is why we conducted a study in 2024, in collaboration with the Knight Foundation, to understand public attitudes toward regulating the digital public sphere.
We asked representative samples of survey respondents in eight countries – the US, UK, Germany, Brazil, Spain, Argentina, Japan, and South Korea – their opinions on the issues of content moderation and false information posted online.
The consistent and clear finding across countries and demographic groups is that people do not favour government intervention in the content policies of online platforms. Instead, the majority of people think responsibility should be in the hands of the companies themselves. At the same time, there is a strong feeling that online platforms, while being left to self-regulate, should do more to combat false information posted by users.
1. Who should be responsible for content policies on online platforms?
The first broad area of online policy debate we look at is content moderation. We asked survey respondents across our eight countries whether platforms – social media platforms, video networks, messaging apps, search engines, and generative AI tools – or governments should have greater responsibility for content policies online. We used binary response options, as these have been shown to improve the reliability of the resulting data.
Question: In your opinion, who should have greater responsibility for policies or guidance when it comes to what content is allowed on each of the following platforms? Which of the following comes closest to your view?
[Insert platform] should have greater responsibility
The national government should have greater responsibility
Don't know
Platforms asked about:
Social media networks (e.g., Facebook, X, TikTok, Instagram)
Search engines (e.g., Google, Yahoo, Bing)
Video websites (e.g., YouTube, Vimeo, Dailymotion)
Messaging apps (e.g., WhatsApp, WeChat, Messenger, Snapchat)
Generative artificial intelligence (AI) tools (e.g., ChatGPT, Google Gemini, Grok)
At the headline level, appetite for government intervention differs across countries, though not by a great deal. In all countries, most people (between 51% and 70% across markets) think social media platforms, video networks, messaging apps, and search engines should have greater responsibility for their own content policies than governments. In some countries there is greater appetite for government intervention than in others, but it remains a minority position everywhere.
In the UK, for example, a higher proportion of people want to see the government take a more active role when it comes to content policies on each of the platform types (see chart below). This greater comfort with government intervention may be linked to the UK government’s ongoing inquiries into the potential harms of online platforms, particularly for younger people. Germany was also an early mover in regulating online platforms for potentially harmful content, passing the Network Enforcement Act in 2017, which requires platforms to remove content deemed illegal under German law.
Generative AI companies are seen somewhat differently from other platforms. While clear majorities across countries prefer social and video platforms to be in control of their own content policies, the same is not true of generative AI companies.
More people in the UK – and to a lesser degree in the US and Germany – think governments should take an interventionist role than think AI companies should be left to develop their own content policies. However, the differences in opinion in the US and Germany are small, and only in the UK do as many as half of respondents (50%) favour government oversight. In our research we have found that people in the UK have a more sceptical view of AI than people in other countries, and people are generally more inclined to want intervention for newer tools that they may feel less positive about.
Following the pattern across countries, views on who should be more responsible for platform policies also do not appear to vary much by age, gender, education, or platform use. For example, in the next chart, we can see the proportion of people across all countries who say social media platforms should have greater responsibility for their content policies, versus the proportion of people who say governments should have greater responsibility.
Around six-in-ten people across all groups think social media networks should have greater responsibility for platform policy than governments, including users of Facebook, X, and TikTok – three platforms where people have expressed greater concern about the trustworthiness of content, according to our Digital News Report 2024.
Findings are broadly similar when it comes to opinions about search engines such as Google, video networks such as YouTube, and messaging apps such as WhatsApp. There are some differences, however, when it comes to opinions about generative AI companies. Men and younger people are more likely than women and older respondents to say that generative AI companies should have greater responsibility for their content policies. The same is true of people who are generally more positive about new technologies.
Overall, the findings suggest that any significant differences in opinion about platform policy are to be found elsewhere. For instance, it may seem reasonable to expect large differences between people who are concerned about the negative impacts of social media platforms and those who are not. But making this comparison across all countries, we find that people who have felt a negative personal impact from social media networks are only slightly more in favour of government intervention (see next chart). The same is true of people who perceive a wider negative societal impact of social media networks.
Openness to government intervention remains a minority position across all groups of people, with clear majorities saying platforms should be responsible for their own policies. Research tends to show that people have a more positive view of social media platforms than might be assumed, even after many scandals and years of negative news coverage.
Finally, we can see some differences in opinion when we break the data down along political lines. Again, it is clear that a majority of people on the political left, centre, and right think platforms should have greater responsibility for their own content policies – between 57% and 69% of people across platforms and political leanings are in favour of this position.
However, more people on the political left are open to governments playing a role in content policy on different platforms, though this openness to government intervention remains a minority position even among left-leaning people.
2. How to manage false information on online platforms?
In this section, we move from a question about content policies on platforms in general to a more specific question about false information. We asked people across our eight countries whether certain types of platforms – social media platforms, video networks, and messaging apps in this case – should or shouldn’t be held responsible for showing people false information that other users post.
This question has been a particularly contentious issue in the United States, where Section 230 of the Communications Decency Act gives online platforms broad immunity from liability for content posted by users. Broadly speaking, it is users in the US who are responsible for what they post, not the platforms. This broad protection has been criticised – and in other countries, active steps have been taken to explore how online platforms can be held more responsible for what their users post.
A major motivation for making platforms more responsible for false information posted by users is the harms caused by viral pieces of misinformation. In the UK, false rumours about the identity of the murderer of three young girls in Southport in 2024 led to anti-immigration riots. In the US and Brazil, false claims of election fraud led to the January 6th insurrection and a similar breach of the Brazilian National Congress in 2023.
Question: In your opinion, should each of the following platforms be held responsible or not responsible for showing potentially false information that users post?
[Insert platform] should be held responsible
[Insert platform] should not be held responsible
Don’t know
Platforms asked about:
Social media networks (e.g., Facebook, X, TikTok, Instagram)
Video websites (e.g., YouTube, Vimeo, Dailymotion)
Messaging apps (e.g., WhatsApp, WeChat, Messenger, Snapchat)
At the headline level, there are again only small differences in opinion across countries. A clear majority of people in all countries are in favour of platforms being held responsible for false information, though survey respondents in the US and Germany are somewhat less in favour – even there, nearly two-thirds still are.
When it comes to demographics and platform usage, clear majorities in all cases (between 56% and 76%) again think platforms should be held responsible rather than not (see Figure 7). Only between 14% and 33% of people across groups think platforms shouldn’t be held responsible.
There are some differences in opinion by age, with older people more in favour than younger people of platforms being held responsible for false information, but these differences are small.
As before, there are few differences by gender, education, and platform usage – with the exception of YouTube. Interestingly, YouTube users are more likely than non-YouTube users to say that platforms like it should be held responsible for false information posted by users.
Finally, when it comes to political leaning, respondents on the political left are again more likely to support intervention. Fully 80% of those on the political left across our eight countries think social media networks should be held more accountable for false information, compared to 65% of those on the right. A similar proportion of left-leaning people (78%) think the same about video networks, compared to 65% of those on the right.
And yet, again, while there are differences between left and right, the majority of those on the right (and centre) think social media and video networks should be held more accountable.
3. How to make sense of this data?
One takeaway from these findings is the near consensus on these policy issues across different groups of people, as well as national contexts. Differences in opinion, while they exist, tend to be small. In fact, differences of only 10-15 percentage points exist between groups of people – such as those on the political left and right – where some might expect much more divergence. In most cases, there are few differences in opinion across demographic groups.
Overall, when asked, the public across eight different countries agree that responsibility for content policies should be left to the platforms themselves. Whether we are considering social media networks, search engines, video sites, or messaging apps, people agree that decisions about content should not be in the hands of politicians in government. Yet the public also broadly agrees that the platforms need to do more to combat false information online.
So what does this look like in practice? There are self-regulatory models like the approach adopted by Elon Musk on X, where users are enlisted to append ‘Community Notes’ to certain posts to add context. Meta has since adopted a similar approach in the US, after dropping its programme in which third-party fact-checkers helped debunk false posts on Facebook and Instagram.
Some other networks like Reddit and Mastodon are more decentralised, with content moderation being carried out by smaller communities of people. On search engines, poor-quality content is typically downranked by the algorithm.
But putting platforms in charge of their own policies is not necessarily always a good thing, and there are legitimate criticisms of the performance of all online platforms when it comes to the types of content allowed (or not allowed).
The one area where there is more appetite for government intervention is generative AI. This is a newer technology which has raised concerns about the spread of false political information – and even about whether fake images and videos will erode our ability to tell fact from fiction at all. Many people are unsure where the technology is headed and whether governments need to step in now to set rules.
Our findings may be surprising to those who, after years of scandals and media criticism, would think public attitudes toward platform companies to be more negative than they are. But our research has found people to be generally positive about the impact of online platforms on themselves and society (with the partial exception of social media) and a lot more positive about these platforms than they are about national governments.
Platform policy often requires governments and platform companies to work together to some extent, and they are not the only actors relevant to the conversation about online content policies. There are also roles for, among others, the courts (which interpret the laws around free speech and its limits), civil society groups, and the news media, all of whom may advocate for particular approaches.
Another relevant actor, as this article highlights, is the public. Of course, public opinion on what to do about complicated issues shouldn’t be the only factor that dictates regulation. But what the public says should matter, since they vote for representatives in government and also use these platforms every day.