Toxic information is one of the biggest threats of our generation. Malign influence campaigns, conspiracy theories, political ads, and unchecked opinions sold as fact are eroding trust – in people and institutions.
The world is in need of a response to help abate the pollution in our information environment. But before we can respond, we first need to think about the drivers of toxic information and how we arrived at where we are today.
The economics of toxic information in a digital world
The Internet often gets blamed for creating the kinds of unhealthy engagement we see today. But toxic information is not new; it has shaped world events for centuries. Rather, the Internet has created a new and very effective vector for disseminating toxic information: one where information circulates in greater volume, is more virulent, and is much cheaper to produce.
In the past, the production and distribution of information required significant resources and a well-established platform. As a result, the power to produce and distribute information was tightly held by a few people and organizations. Today, social media has given everyone with access to the Internet the opportunity to produce and distribute information for free, regardless of its veracity. Just as this model gives regular social media users the power to produce information, it has also dramatically lowered the costs for malicious actors to intentionally weaponize information. Researchers at the University of Baltimore have estimated that today, the cost of running a disinformation campaign with the power to influence big-ticket political events such as trade agreements, elections, and referendums is just $400,000 USD.
But social media platforms don’t just give people the opportunity to produce information; they incentivize it. Companies have exploited the addictiveness of external validation to create feedback loops (likes, comments, and shares) that encourage people to create new posts that spread information further. The result is a self-sustaining production cycle that pollutes our online information environments.
The Internet has also decreased the costs of accessing information and given people greater choice of information to consume. Anyone with an Internet connection can find information at any time of day, on any topic, for free on social media platforms like Facebook, Twitter, and Reddit. Meanwhile, fact-checked information and rigorous journalism often sit behind a paywall. This exacerbates information inequality by stratifying news sources along socioeconomic lines; those who can afford to spend money on information are likely to receive higher-quality information than those who cannot. Information inequality increases community vulnerability, as malign actors more easily exploit those with restricted access to higher-quality information from established sources.
With the rise of non-traditional news sources, consumers now also have the option to choose who they want to get their information from – whether it be journalists, politicians, family, friends, celebrities, or influencers. Under such circumstances, consciously or otherwise, many people end up self-selecting into information channels that validate existing views or beliefs, regardless of whether the information is fact, fiction, or somewhere in between. This makes us even more vulnerable to toxic information, because we are most likely to encounter it from a source that we personally selected as trustworthy.
Lastly, toxic information is dangerous because it is designed to take advantage of the ways that we engage online. In a world where we scroll mindlessly through large volumes of content, these false stories catch our eye with ideas that are provocative, entertaining, or surprising. As a result, false information spreads like wildfire. A landmark study by researchers at MIT found that false news stories on Twitter are 70 percent more likely to be retweeted than true news stories, and reach 1,500 people about six times faster. The onset of the COVID-19 pandemic exemplified just how fast false information spreads online. By April 5, 2020, COVID-19 misinformation on publicly accessible platforms had spread to 87 countries across the world, in 25 different languages – a deluge of rumours and conspiracy theories that has continued to grow in the months since.
The result is that toxic information has wreaked havoc across the world, and continues to do so. It has been used to subvert democratic institutions, influence the outcome of elections, incite religious and ethnic violence, and undermine public health responses to a pandemic that has claimed the lives of more than 1 million people worldwide.
Where do we go from here?
Right now, millions of pieces of misinformation and disinformation are circulating online, seeking to shift people’s behaviours and opinions. A healthy information environment requires action from government, civil society, the private sector, and most of all, individuals. In the short term, while we find better ways of reducing the volume of toxic information online, we should acknowledge that limiting the supply is an insufficient solution. Ultimately, we need to strengthen communities’ resilience to it.
People are the fastest and best response to toxic information, but they need tools that allow them to process and engage with information on their own terms. They need spaces and opportunities where content isn’t driven by profit-maximizing algorithms, but that instead give people room to reflect on information themselves and form their own opinions.
We saw the need for this type of space, so we decided to build it.
We build digital spaces that give people greater agency by allowing them to independently explore and think critically about information they consume. This process is incentivized by gamification.
Gamified experiences can be an effective way of correcting toxic information because they are designed to be engaging and interactive, fostering genuine moments of discovery for players. In this setting, people compete against no one but themselves, and are given the space to reflect on the information presented to them through the game without being influenced by the full-throttle stream of likes and comments from others on the Internet.
These types of experiences create a low-pressure, active learning environment. Gamification normalizes getting a few things wrong, and allows players the opportunity to explore correct information and actively update their own understanding of the world on their own terms.
The anonymous data we are collecting shows that giving people the agency to process accurate, transparent information in their own time can help increase individuals’ resilience to toxic information. Results from It’s Contagious, our platform on COVID-19 misinformation, show that more than 7 out of 10 people who score below average on their first play correctly remember the answer to a question they initially got wrong. They’re remembering the correct information within a couple of minutes.
Our platforms also provide a window into people’s emotional response to toxic information, how they feel towards corrective information, and the effects of these feelings on their public policy preferences. This information offers society, policymakers, and industry leaders valuable insights on how we can reduce the harmful effects of toxic information on our communities.
We are using this people-centred framework to help communities build resilience against the effects of toxic information across the world, from COVID-19 misinformation in Canada, to tolerance and diversity in Myanmar, to electoral interference in Europe. While digital platforms have catalyzed a new kind of toxic information, they can also be reimagined to help build healthy, resilient communities. It’s the start of a new infrastructure, good technology, that we’re building with communities to help them thrive.
Written by Farhaan Ladhani (@dpsorg)