Many of these respondents also cited another reason for concern about the future of the social climate online. They focused on the incentive structures of online life and argued: Things will stay bad because tangible and intangible economic and political incentives support trolling. Participation = power and profits.
It’s a brawl, a forum for rage and outrage. … The more we come back, the more money they make off of ads and data about us. So the shouting match goes on.
Many respondents argued that there are particular affordances of the internet and commercial realities that reward bad behavior. They noted there is money to be made and reputations to build in “echo chambers.” Some suggested that internet service providers and media organizations do not have a meaningful incentive to moderate activities or act as “police” on their own properties, because conflicts between users – and groups of users – typically lead to higher levels of engagement. And that produces more clicks and advertising revenue.
Additionally, respondents pointed to the 2016 U.S. presidential election and the U.K. “Brexit” vote as examples of the way that hyper-partisan activity and attacks on opponents on social media drive more-profitable traffic and sway public opinion.
‘Hate, anxiety, and anger drive participation,’ which equals profits and power, so online social platforms and mainstream media support and even promote uncivil acts
Randy Albelda, a professor of economics at the University of Massachusetts Boston, said, “There is a tendency for the companies with the largest internet/social media interfaces (Facebook, Google, Twitter, etc.) to want to make more and more money. They will use the internet to sell more things. This shapes the technology and how we use it. While there is lots of ‘free choice’ in what we can buy, this does not contribute to the expansion of democratic practices.”
The mass media encourages negative and hateful speech by shifting the bulk of their media coverage to hot-button click-bait.
An anonymous respondent wrote, “I expect to see more effectively manipulative interactions to become a core part of the experience of internet content. It is clear that many professionals involved in the design and monetization of the internet see only another tool to influence people’s behavior and have steered the infrastructure design and practical use in such a way to emphasize rather than balance out less desirable parts of our human natures. The current general professional effort to build perceptual and behavioral control into the system has too much emphasis on commercial reward and not enough on human service and is therefore negative in the whole. I would prefer a more neutral communication network.”
Andrew Nachison, founder at We Media, said, “It’s a brawl, a forum for rage and outrage. It’s also dominated by social media platforms on the one hand and content producers on the other that collude and optimize for quantity over quality. Facebook adjusts its algorithm to provide a kind of quality – relevance for individuals. But that’s really a ruse to optimize for quantity. The more we come back, the more money they make off of ads and data about us. So the shouting match goes on. I don’t know that prevalence of harassment and ‘bad actors’ will change – it’s already bad – but if the overall tone is lousy, if the culture tilts negative, if political leaders popularize hate, then there’s good reason to think all of that will dominate the digital debate as well. But I want to stress one counterpoint: There’s much more to digital culture than public affairs and public discourse. The Net is also intensely personal and intimate. Here, I see the opposite: friends and family focus on a much more positive discourse: humor, love, health, entertainment, and even our collective head shakes are a kind of hug, a positive expression of common interest, of bonding over the mess out there. It would be wrong to say the Net is always negative.”
Dave McAllister, director at Philosophy Talk, wrote, “The ability to attempt to build up status by tearing down others will result in even more bad actors, choosing to win by volume. It is clear that the concept of the ‘loudest’ wins is present even now in all aspects of life in the United States, as represented by the 2016 presidential campaign.”
Micah Altman, director of research at MIT Libraries, replied, “The design of current social media systems is heavily influenced by a funding model based on advertisement revenue. Consequences of this have been that these systems emphasize ‘viral’ communication that allows a single communicator to reach a large but interested audience, and devalue privacy, but are not designed to enable large-scale collaboration and discourse. While the advertising model remains firmly in place, there has been increasing public attention to privacy and to the potential for manipulating attitudes enabled by algorithmic curation. I am optimistic that in the next decade social media systems will give participants more authentic control over sharing their information, and will begin to facilitate deliberation at scale.”
The numbers show that social media platforms have already become the tail that wags the dog, as the profit woes of the mainstream media old guard cause those organizations to try to shape their content and performance to fit the social media and search environments established by digital platform providers.
Dave Burstein, editor at fastnet.news, wrote, “Most dangerous is the emerging monopoly-like power of Facebook and Google to impose their own censorship norms, hundreds of thousands of times. Ask any news vendor about the de facto power of Facebook. This is just one reason to reduce their market dominance by making sure others can take market share – through interoperability and users’ ability to take their data (social graph) to new services.”
Jesse Drew, a professor of cinema and digital media at the University of California, Davis, wrote, “The mass media encourages negative and hateful speech by shifting the bulk of their media coverage to hot-button click-bait.”
Ansgar Koene, senior research fellow at the Horizon Digital Economy Research Institute, replied, “For the most part people online want to interact and communicate constructively, same as they do offline. The perception of the level of negativity is stronger than it really is due to current over-reporting in the media.”
It is in the interest of the paid-for media and most political groups to continue to encourage ‘echo-chamber’ thinking and to consider pragmatism and compromise as things to be discouraged.
An anonymous respondent noted, “Corporate media seems to rely progressively more heavily on the attention-getting antics of bad actors; the illusion that will pass for ‘public discourse’ in the future will be one of trolling, offense, and extremism.”
Another anonymous respondent said errors and bias are more abundant due to the public’s move to being “informed” via social media sites, writing, “The news media have become more unreliable as social interaction sites have become more prolific. People are now getting their ‘news’ from both places and sharing it rapidly, but already there is a dearth of fact-checking and more often than not, what is posted is emotionally charged and usually presents only one side of a story, often with a biased opinion at that.”
David Durant, a business analyst at U.K. Government Digital Service, argued, “It is in the interest of the paid-for media and most political groups to continue to encourage ‘echo-chamber’ thinking and to consider pragmatism and compromise as things to be discouraged. While this trend continues, the ability for serious civilized conversations about many topics will remain very hard to achieve.”
Trevor Owens, senior program officer at the Institute of Museum and Library Services, commented, “As more and more of the public square of discourse is created, managed, and maintained on platforms completely controlled by individual companies, they will continue to lack the kind of development required to develop the kind of governance that makes communities viable and functional. Given that the handful of technology companies that increasingly control discourse are primarily run by very privileged individuals it seems very likely that those individuals will continue to create systems and platforms that are not responsive to the issues that those who are vulnerable and less privileged face on the Web.”
Christopher Wilkinson, a retired senior European Union official, commented, “Online interaction is already vast, and the experience is quite mixed. Numbers will grow, but quality will not improve. There is no indication of a will to improve; I suspect that the advertising industry likes it that way.”
An anonymous respondent said, “One of the more corrosive aspects of contemporary discourse, both online and off, is the increasing inability of the ‘marketplace’ of ideas to successfully adjudicate between credible, evidence-based accounts and conspiratorial, fallacious ones. This is the result of many factors, not simply the internet, but the way in which it has been promoted and framed. The equation of interactivity with democratization has resulted in a kind of ersatz leveling of the deliberative field, wherein expertise is dismissed as merely a ruse of power, and the fact that one’s opinion can be expressed vociferously, distributed widely, in unaccountable ways has contributed to an unwillingness to accept the results of deliberation. Or rather, it has circumvented deliberation altogether, replacing it with personal, one-way broadcasting. Rather than interactivity bolstering deliberation, it has turned everyone into a broadcaster. This is a sweeping claim meant to describe a general tendency rather than all online communication. But the result is clear: the rise of Donald Trump, the circulation of the idea of ‘post-truth’ politics, and Brexit all point to these shifts in deliberation … once the register of deliberation no longer works to convince or legitimate, the other available option is violence. When we cannot meaningfully discuss, when our words have little purchase on one another, when everyone is so focused on broadcasting their own ideas rather than interacting with those of others, the result is fragmentation and, ultimately, violence.”
An anonymous professor at the Georgia Institute of Technology was one of several expert respondents to mention the looming influence of bots – “social” computer algorithms written to act human in various social online settings to argue, persuade, manipulate, elicit emotional responses and otherwise influence human actions. He wrote, “As illustrated by the Microsoft experience with the Tay chatbot, the sophistication of negative contributions to social media is increasing. Another example is Chinese Weibo, which appears to contain more bot accounts than real people. Therefore, more control is already in place. The competition between real people and bot-generated content will intensify as more monetary rewards become available to bot participation. Abuses will be amplified by bots controlled by entities that maximize non-altruistic goals.”
Technology companies have little incentive to rein in uncivil discourse, and traditional news organizations – which used to shape discussions – have shrunk in importance
In many other elaborations, respondents pointed out that emboldening uncivil discourse is “business as usual” in today’s online world. They said moderating online spaces to be more civil, plural, and factually accurate requires a lot of effort and has not been proven to boost profits. Traditional news organizations used to perform the function of shaping and guiding cultural debates, but the internet has curtailed their role and their businesses. These respondents say this has changed the information environment, removing a moderating influence that once tempered public debate.
We haven’t found, or even thought up, the rules of online engagement. We’ve just borrowed them, mostly unconsciously, from the last place we got comfortable: our newspapers and magazines.
Glenn Ricart, Internet Hall of Fame member and founder/CTO of US Ignite, replied, “The predominance of internet tools that assume you want ‘relevant’ information, or information that your friends recommend, or that match your own communications, all these reinforce an ‘echo chamber’ internet. Instead of seeing the wide diversity of opinion present on the internet, you are subtly guided into only seeing and hearing the slice of the internet featuring voices like your own. With such reinforcement, there’s little social pressure to avoid negative activities. It is of great concern that we have yet to find a funding model that will replace the Fourth Estate functions of the press. This problem only exacerbates the issue of internet communication tools featuring voices like your own. We desperately need to create interest in serious, fact-laden, truth-seeking discourse. The internet could be, but it largely isn’t, doing this.”
Jason Hong, an associate professor at Carnegie Mellon University, wrote, “We’ve already seen the effects of trolls, harassers, and astroturfers in attacking and silencing others online, and there’s very little on the horizon in terms of improving discourse. It’s all too easy for bad actors to organize and flood message boards and social media with posts that drive people away. Or, to paraphrase Gresham’s law, bad posts drive out the good.”
Barry Chudakov, founder and principal at Sertain Research and StreamFuzion Corp., wrote, “Regarding solutions that encourage more-inclusive online interactions, there is no editorial board for public discourse online. We haven’t found, or even thought up, the rules of online engagement. We’ve just borrowed them, mostly unconsciously, from the last place we got comfortable: our newspapers and magazines.”
Joe Mandese, editor in chief of MediaPost, predicted, “Digital, not just online, communication will continue to expand, providing more platforms for all forms of public discourse, including ‘negative’ ones. Of course, negative is in the eye of the beholder, but since there is no regulator on the open marketplace of digital communications, it will create as much opportunity for negative discourse as anything else.”
Oscar Gandy, emeritus professor of communication at the University of Pennsylvania, wrote, “I see the forces within the market, with Facebook in particular, pushing us toward narrower and narrower spheres of interaction. My sense is that ‘widespread demand’ will be seen as re-affirming that push by social platforms.”
Louisa Heinrich, founder at Superhuman Limited, observed, “Highly regarded media outlets set the tone of public discourse to a great degree – when the media we see is brash, brazen, and inflammatory, we adopt that language. I hope we will see a conscious shift in social networks to promote diversity of ideas and of thinking, and also a return to journalistic standards (i.e., factual truth as well as opinion), but I fear that will only come when we are able to come up with business models that don’t depend on hyper-targeting content for advertising dollars.”
Stephen J. Neveroski, a respondent who shared no additional identifying details, commented, “I increasingly see news as both condensed and homogenized. Headlines are deceptive, click-bait abounds. Mainstream media all report the same thing, differing little in the opinions they proffer instead of facts. A turnstile of sources of ‘information’ crop up, but they don’t keep pace with our need for relevant information. Unfortunately I see a generalized dumbing down of the population. People on the news today couldn’t even recite the first line of the Declaration of Independence. Overall we are unable to process information, let alone form a cogent argument. Our intuition, rather than being shaped by the great thinkers of civilization, has been more affected by the Kardashians, and nobody seems to care.”
An anonymous respondent wrote, “The amount of labor required to do effective moderation is at odds with the business model of the for-profit publishers generating the majority of content, and the traffic commenting generates benefits them in page/ad views. I can’t see the current state of affairs changing as a result.”
An anonymous respondent noted, “Mainstream platforms need to do a better job of establishing rules of the road for use of their service. They are hiding behind free speech arguments so they don’t have to invest in solving hard problems. I don’t think it is contrary to free speech to have standards of behavior for use of a commercial service. The major platforms are hiding behind that argument.”
As long as there are relatively small barriers to participation and low barriers to innovation the internet will serve as a reflection of society, both good and bad.
Ian Peter, an internet pioneer and historian based in Australia, wrote, “The continued expansion of sale of personal data by social media platforms and browser companies is bound to expand to distasteful and perhaps criminal activities based on the availability of greater amounts of information about individuals and their relationships.”
Christine Maxwell, program manager of learning technologies at the University of Texas at Dallas, said, “Recently, referring to the House Benghazi Report, Wired magazine described the beauty and the tragedy of the internet age: ‘As it becomes easier for anyone to build their own audience, it becomes harder for those audience members to separate fact from fiction from the gray area in between.’ To make meaningful and actionable – contextualized – decisions today, individuals need an unbiased knowledge discovery platform to assess information objectively. Without this becoming widely available, coupled with the ability to learn how to ask better questions, I fear that online communication will indeed become more shaped by negative activities.”
Tse-Sung Wu, a project portfolio manager at Genentech, wrote, “As long as there are relatively small barriers to participation and low barriers to innovation the internet will serve as a reflection of society, both good and bad. On the one hand, you have the internet echo chamber, which allows for extreme political or social positions to gain hold. Online communities are quite different from actual, face-to-face communities. In the former, there is no need for moderation or listening to different points of view; if you don’t like what you’re reading, you can leave; there is no loyalty. In an actual community where one lives, one is more likely to compromise, more likely to see differing viewpoints.”
An anonymous respondent wrote, “Social media is driven by novelty. Large amounts of ‘content’ are quickly consumed, generate chatter, and then disappear. They are loaded with click-bait and spam. I question the lasting impact this media can truly have. Identity politics appear to be creating rigid tribes of believers, and big data is biased to locking people into boxes defined by their past preferences. I am skeptical there will be more-inclusive online interactions. Lots of communities are appearing online for casual interests and hobbies. This is a great thing, but how much farther can it go?”
Legacy print media such as newspapers and magazines traditionally published a limited and tightly edited set of public comments. When they went digital – a form that allowed for unlimited responses to be filed instantly by the public – it opened the floodgates for vitriol as well as thoughtful, well-considered discourse. While there is some agreement that “comments” sections online facilitate an abundance of negative discourse, there is less consensus on whether the current trend of disabling comment sections entirely – to preempt trolling and other “negative noise,” including the public harassment of journalists, celebrities, politicians, and content creators – is a productive strategy.
As long as site revenue is based on views, anonymous inflammatory comments will continue.
Richard Forno, a senior lecturer in computer science and electrical engineering at the University of Maryland-Baltimore County, commented, “Online interactions are already pretty horrid – just look at the tone of many news site comment sections … or the number of sites that simply remove user feedback/forum sections altogether.”
Henning Schulzrinne, a professor at Columbia University and Internet Hall of Fame member, predicted that, in the future, “There may be a segregation into different types of public discourse … it seems likely that many newspapers will have to resort to human filtering or get rid of comment sections altogether. Twitter will remain unfiltered, but become more of a niche activity. Facebook is more likely to develop mechanisms where comments can be filtered, or people will learn to ignore comments on all but personal messages. (Recent announcements by Facebook about selecting fewer news stories are an indirect indicator. Heated debates about gun control don’t mix well with pictures of puppies.)”
Leah Stokes, an assistant professor at the University of California, Santa Barbara, wrote, “I am hopeful that online discourse will become more regulated over time, and less anonymous. The New York Times comment section – where people have to register, can up-vote, and be flagged in a positive way by editors – leads to a mature, interesting dialogue. Without this semi-moderated atmosphere, many newspaper comments devolve.”
Anonymously, an IT manager commented, “The comment section for news and blog sites has become a sounding chamber for insults and spurious attacks, and the ready availability of any number of hate-filled lies that would normally be ignored by the mainstream seems to be increasing over time, filtering from the hidden corners of the Web into our daily lives. … Most sites should absolutely ditch their comment function if they aren’t going to moderate the hate and rage machine it spawns.”
An anonymous respondent wrote, “Sites allow comments because it generates page views, and folks are more likely to comment when they can do so anonymously. A trolling comment generates more comments, which means even more page views. As long as site revenue is based on views, anonymous inflammatory comments will continue.”
Some say comments sections have already begun to evolve to provide a more valuable stream of public input.
Alexander Halavais, director of the MA in social technologies at Arizona State University, said, “Particularly over the last five years, we have seen the growth of technologies of reputation, identity, and collaborative moderation. Newspapers that initially rejected comment streams because of their tendency toward toxicity now embrace them. YouTube, once a backwater of horrible commentary, has been tamed. While there are still spaces for nasty commentary and activities, they are becoming destinations that are sought out by interested participants rather than the default.”
Terrorists and other political actors are benefiting from the weaponization of online narratives by implementing human- and bot-based misinformation and persuasion tactics
To troll is human, yes. But to mislead, misinform, manipulate, lie, persuade, to create an atmosphere of anger, fear, and distrust, to work to gain power at nearly any cost is also human. Some experts in this canvassing pointed out that the weaponization of the narrative is much more of a threat than trolling.
There’s money, power, and geopolitical stability at stake now; it’s not a mere matter of personal grumpiness from trolls.
The rise of ISIS (also known as ISIL or Daesh), the jihadist militant group, was facilitated by its uses of social media as a weapon of divisive propaganda beginning in 2014. A number of respondents referred to its activities in mentioning how terrorists use persuasive hate speech and lies online.
This canvassing of experts took place in the summer of 2016 – before large-scale press coverage of how foreign trolls operated in the U.S. and Europe. Still, this problem was mentioned by some respondents. In November and December, dozens of news organizations broke stories assessing the influence of social media in the 2016 U.S. presidential election, and “fake news” became the term most commonly applied by headline writers to describe propaganda items disguised as “news.”
Anonymously, a futurist, writer, and author at Wired, explained, “New levels of ‘cyberspace sovereignty’ and heavy-duty state and non-state actors are involved; there’s money, power, and geopolitical stability at stake now; it’s not a mere matter of personal grumpiness from trolls.”
Matt Hamblen, senior editor at Computerworld, warned, “Traditional institutions and people working within those institutions will be under greater attack than now. … Social media and other forms of discourse will include all kinds of actors that had no voice in the past; these include terrorists, critics of all kinds of products and art forms, amateur political pundits, and more.”
Laurent Schüpbach, a neuropsychologist at University Hospital in Zurich, focused his entire response about negative tone online on burgeoning acts of political manipulation, writing, “The reason it will probably get worse is that companies and governments are starting to realise that they can influence people’s opinions that way. And these entities sure know how to circumvent any protection in place. Russian troll armies are a good example of something that will become more and more common in the future.”
Karen Blackmore, a lecturer in IT at the University of Newcastle, wrote, “Misinformation and anti-social networking are degrading our ability to debate and engage in online discourse. When opinions based on misinformation are given the same weight as those of experts and propelled to create online activity, we tread a dangerous path. Online social behaviour, without community-imposed guidelines, is subject to many potentially negative forces. In particular, social online communities such as Facebook also function as marketing tools, where sensationalism is widely employed, and community members who view this dialogue as their news source gain a very distorted view of current events and community views on issues. This is exacerbated with social network and search engine algorithms effectively sorting what people see to reinforce worldviews.”
Privacy and anonymity are double-edged swords online
An anonymous professor at a U.S. Polytechnic Institute said, “Russia has found it extremely useful to use such media to flood political and social discourse; other nations have or will follow suit. Cybersecurity will generally increase, but the potential for bad actors to take targeted aim will remain, and it will definitely impact security, privacy, and public discourse.”
Stephan G. Humer, head of the internet sociology department at Hochschule Fresenius Berlin, noted, “Social media and especially digital commentary will be used in a more strategic way [by 2026]. In my research I have seen that social media, in general, and digital commentary, in a very special way, reflects societal moods and thoughts, so influencing this discourse level will be much more interesting in the near future.”
Norah Abokhodair, information privacy researcher at the University of Washington, commented, “There is a very clear trend that social media is already being shaped by the bad guys. Already automation (creating social bots on social media platforms) is amplifying the voices of the bad people most of the time. Terrorist organizations are able to recruit many young people through these platforms, and there are many more examples. Privacy and anonymity are double-edged swords online because they can be very useful to people who are voicing their opinions under authoritarian regimes; however, the same technique could be used by the wrong people and help them hide their terrible actions.”
Susan Mernit, CEO and co-founder at Hack the Hood, wrote, “Humans universally respond to anger and fear. For balanced dialogue, this is a challenging combination.”
David Wuertele, a software engineer at Tesla Motors, commented, “Unfortunately, most people are easily manipulated by fear. Donald Trump’s success is a testament to this fact. Negative activities on the internet will exploit those fears, and disproportionate responses will also attempt to exploit those fears. Soon, everyone will have to take off their shoes and endure a cavity search before boarding the internet.”
Lauren Wagner, a respondent who shared no additional identifying details, replied, “While there may be a utopian wish for technological systems that encourage more-inclusive online interactions, polarizing pieces will result in more engagement from users and be financially advantageous to online platforms. Consequently, online public discourse will be shaped by a more divisive tone and ‘bad’ actors. Writers are becoming more adept at authoring articles that engage their core readership online, whether it’s a broad audience using general clickbait tactics or a more specific audience with, for example, an article supporting a specific political candidate. With the rise of Donald Trump we are seeing that this phenomenon is not only limited to writers. Subjects are learning how to persuade the media to ensure that they receive a certain type of online coverage, which tends to be divisive and inciting.”
An anonymous respondent wrote, “Commentary has become more and more extreme as people are more and more comfortable having and expressing more radical or extreme values. This spreads negativity, as these comments are often negative in nature and people are more likely to respond to such comments with their own commentary. As people become less moderate in their political views, religious values, etc., the internet will reflect that. Trends in our politics and society show movement towards more extremism, hate, fear; so too will our social media and digital commentary move towards more negativity. As opposing groups of whatever issue become more zealous and disconnected from each other, they will become less likely to accept each other’s opinions, speech, and expression. This is the case for groups on all sides of issues, whether political, religious, social, etc. You can already see a sort of vigilantism as people are quick to throw out condemnations and fall into mob mentality as they attack commentary they find offensive or unacceptable or anti-(whatever). I believe that this is as far as it will go, with users trying to self-police. While I don’t think major social media services will infringe on free speech because the backlash would be intense, the desire for services that favor a ‘safe zone’ mentality over free speech will increase.”
Karl M. van Meter, sociological researcher and director of the Bulletin of Methodological Sociology at Ecole Normale Supérieure de Paris, wrote, “There will probably continue to be new systems invented and new fashions of use that will wash over the world’s social media users. This, of course, will also bring use in ‘bad faith,’ including criminal and even terrorist use, but that will always be part of this expanding market and the debate about internet use.”