The more hopeful among these respondents cited a series of changes they expect in the next decade that could improve the tone of online life. They believe: Technical and human solutions will arise as the online world splinters into segmented, controlled social zones with the help of artificial intelligence (AI).

Anti-harassment is a technologically solvable problem. As the tools to prevent harassment improve, the harassers will be robbed of their voices, and the overall tone will improve.
Anonymous respondent

While these experts were nearly unanimous in expressing some level of concern about online discourse today, many expressed an expectation for improvement. These respondents said it is likely the coming decade will see a widespread move to more-secure services, applications, and platforms; reputation systems; and more-robust user-identification policies. They predict more online platforms will require clear identification of participants; some expect that online reputation systems will be widely used in the future. Some expect that online social forums will splinter into segmented spaces, some highly protected and monitored while others retain much of the free-for-all character of today’s platforms. Many said they expect that, due to advances in AI, “intelligent agents” or bots will begin to more thoroughly scour forums for toxic commentary, in addition to helping users locate and contribute to civil discussions.

Jim Hendler, professor of computer science at Rensselaer Polytechnic Institute, wrote, “Technologies will evolve/adapt to allow users more control and avoidance of trolling. It will not disappear, but likely will be reduced by technological solutions.”

Trevor Hughes, CEO at the International Association of Privacy Professionals, wrote, “Negative activities will always be a problem. … However, controlling forces will also continue to develop. Social norms, regulations, better monitoring by service providers all will play a role in balancing the rise of negative activities.”

Robert Matney, COO at Polycot Associates, wrote, “Reputation systems will evolve to diminish the negative impact that bad actors have on online public discourse, and will become as broadly and silently available as effective spam systems have become over the last decade.”
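Reputation systems of the kind Matney describes can be simple at their core. Below is a minimal sketch, assuming per-user scores that are nudged by community feedback and decay over time, with a visibility threshold; the decay rate, threshold, and feedback values are invented for illustration and do not reflect any deployed system.

```python
# Minimal sketch of a decaying per-user reputation score. Flagged or
# endorsed posts nudge the score; users below a threshold get
# deprioritized, much as spam gets foldered. All constants are
# illustrative assumptions.
from collections import defaultdict

DECAY = 0.95       # old behavior slowly stops counting against a user
THRESHOLD = -2.0   # below this, posts are hidden by default

scores = defaultdict(float)

def record_feedback(user: str, delta: float) -> None:
    """delta > 0 for endorsements, delta < 0 for upheld abuse flags."""
    scores[user] = scores[user] * DECAY + delta

def is_deprioritized(user: str) -> bool:
    return scores[user] < THRESHOLD

record_feedback("troll42", -1.5)
record_feedback("troll42", -1.5)
print(is_deprioritized("troll42"))  # True: -1.5 * 0.95 + -1.5 = -2.925
```

The decay term is the key design choice: it lets a reformed user recover over time, much as spam reputation services let a cleaned-up mail server regain deliverability.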

Tom Sommerville, agile coach, commented, “As people trade elements of their privacy for benefits in the online world, online personas will be more transparently associated with the people behind them. That transparency will drive more civil discourse.”

Peter Brantley, director of online strategy at the University of California, Davis, replied, “I expect there will be more technologically mediated tools to condition parameters of community participation. There is a great interest in helping to create ‘safe’ or self-regulating communities through the development of metrics of mutual ratification. However, at the same time, there will be an enlargement in the opportunities and modes for engagement, through annotation or development of new forums, and these will be characterized by the same range of human discourse as we see now.”

“Anti-harassment is a technologically solvable problem. As the tools to prevent harassment improve, the harassers will be robbed of their voices, and the overall tone will improve,” wrote an anonymous senior security engineer at a major U.S.-based internet services company.

Of course, figuring out just how to set the technology up to accomplish all of this isn’t as easy as it may seem. It requires a thorough assessment and weighing of values. Isto Huvila, a professor at Uppsala University in Sweden, noted, “Currently the common Western ideology is very much focused on individuals and the right to do whatever technologies allow us to do – the problem is that it might not be a very good approach from the perspective of humankind as a whole. More-focused ideas of what we would like human society to be as a whole would be much needed. The technology comes first after that.”

While many participants in this canvassing have faith in the technology, none specifically addressed how values might be fairly and universally defined so they can be applied equitably across platforms, giving people who move from one platform to the next a reasonably clear understanding of, and expectation for, the filtering or other rules being applied to discourse.

AI sentiment analysis and other tools will detect inappropriate behavior, and many trolls will be caught in the filter; human oversight by moderators might catch others

Some respondents predicted that AI, or people teaming with algorithms in hybrid systems, will create and maintain “smart” moderation solutions. They expect that increasingly powerful algorithms will do most of the filtering work and, in some cases, all of it. Some expect that ID will be required or that people will self-identify through reputation systems that allow their identity to be established across online platforms. Some participants in this canvassing suggested there will be a back-and-forth socio-technological arms race between those moderating the systems and those who oppose and work to override the moderators.
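As a concrete illustration of such a hybrid system, here is a minimal sketch in which an algorithm handles the clear-cut cases and routes the ambiguous middle band to human moderators. The score_toxicity() stand-in and both thresholds are assumptions made for the sketch, not any platform’s actual model.

```python
# Minimal sketch of a hybrid moderation pipeline: the algorithm acts
# alone on obvious cases, and humans review the ambiguous middle band.
# The toy classifier and thresholds are illustrative assumptions.

AUTO_REMOVE = 0.9   # above this, the comment is filtered automatically
AUTO_ALLOW = 0.3    # below this, the comment is published without review

def score_toxicity(text: str) -> float:
    """Stand-in for a trained classifier returning P(toxic) in [0, 1]."""
    blocklist = {"idiot", "moron"}  # toy heuristic for the sketch
    hits = sum(word in blocklist for word in text.lower().split())
    return min(1.0, hits / 2)

def moderate(comment: str) -> str:
    p = score_toxicity(comment)
    if p >= AUTO_REMOVE:
        return "removed"           # clear violation, no human needed
    if p <= AUTO_ALLOW:
        return "published"         # clearly fine, no human needed
    return "queued_for_human"      # borderline cases go to moderators

if __name__ == "__main__":
    for c in ["Great point, thanks!", "You idiot moron"]:
        print(f"{c!r} -> {moderate(c)}")
```

The two-threshold design is what makes the system “hybrid”: widening the gap between the thresholds shifts work toward human moderators, narrowing it shifts work toward the algorithm.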

I expect that automated context analysis will weed out most trolls and harassers the way that spam filters weed out most spam today.
Klaus Æ. Mogensen

David Karger, a professor of computer science at MIT, said, “We are still at the early stages of learning how to manage online public discourse. As we’ve rushed to explore ways to use this new medium, our lack of experience has led to many surprises both about what does work (who would have imagined that something like Wikipedia could succeed?) and what doesn’t (why aren’t online discussion forums just as friendly as grandma’s book club?). … My own research group is exploring several novel directions in digital commentary. In the not-too-distant future all this work will yield results. Trolling, doxxing, echo chambers, click-bait, and other problems can be solved. We will be able to ascribe sources and track provenance in order to increase the accuracy and trustworthiness of information online. We will create tools that increase people’s awareness of opinions differing from their own, and support conversations with and learning from people who hold those opinions.”

Ryan Hayes, owner of Fit to Tweet, predicted, “We may have augmented-reality apps that help gauge whether assertions are factually correct, or flag logical fallacies, etc. Rather than just argue back and forth I imagine we’ll invite bots into the conversation to help sort out the arguments and tie things to underlying support data, etc.”

Scott Amyx, CEO of Amyx+, an Internet of Things business consultancy, said, “Free speech will be amplified through peer-to-peer multicast, mesh network technologies. Earlier-generation platforms that enabled free speech – such as Open Garden’s FireChat – will usher in even broader and more pervasive person-to-person (P2P) communication technologies, powered by the Internet of Things [IoT]. Billions of IoT-connected devices and nodes will increase the density to support vibrant P2P global wireless sensor networks. IoT is transitioning our computing model from centralized to a decentralized computing paradigm. This enables self-forming, self-healing networks that enable messaging, communication and computing without the need for a central system or the traditional Internet. Everything becomes node-to-node. These technological enablements will amplify the voices of the people, especially in closed, censored nations. For clarity, new technologies will not necessarily foster greater griping, distrust, and disgust but rather they will allow private individual thoughts and conversations to surface to public discourse.”

Dave Howell, a senior program manager in the telecommunications industry, predicted, “Identity will replace anonymity on the internet. Devices will measure how a human interacts with them and compare to Web cookie-like records to match persons with an advertising database. This will become public knowledge and accessible to law enforcement and courts within the decade. There will be ‘Trust Providers’ at the far end of transaction blockchains who keep an official record of identity (interaction patterns), and these may be subpoenable. Individuals will learn that public utterances (on the internet) won’t/don’t go away, and can have consequences. Whether or not organizations (e.g., ACLU) can pass ‘Right to be Forgotten’ and privacy/speech protection acts in the decade will probably be irrelevant, as social belief will likely be suspicious that individuals are tracked regardless.”
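Howell’s “Trust Providers” are speculative, but the record-keeping he imagines resembles an append-only hash chain, in which each identity event commits to the one before it so that tampering with history is detectable. The sketch below is a rough illustration under that assumption; the field names and single in-memory chain are invented for the example.

```python
# Speculative sketch of an append-only identity-event chain: each record
# includes the hash of the previous record, so altering any past entry
# breaks verification. Field names are illustrative assumptions.
import hashlib
import json
import time

chain = []

def _digest(body: dict) -> str:
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def append_event(persona: str, event: str) -> None:
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    record = {"persona": persona, "event": event,
              "ts": time.time(), "prev": prev_hash}
    record["hash"] = _digest(record)  # hash covers all fields above
    chain.append(record)

def verify() -> bool:
    prev = "0" * 64
    for rec in chain:
        body = {k: v for k, v in rec.items() if k != "hash"}
        if rec["prev"] != prev or rec["hash"] != _digest(body):
            return False
        prev = rec["hash"]
    return True

append_event("user_123", "login_pattern_match")
append_event("user_123", "public_post")
print(verify())  # True until any stored record is altered
```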

An anonymous senior program manager at Microsoft observed, “Online reputation will become more and more important in an economy with many online markets, for labor (the gig economy, Uberization) as well as products (Etsy, eBay, etc.), or apartments (Airbnb), etc. Online personas will become more consolidated and thus trolling will be more discouraged.”

Susan Price, digital architect at Continuum Analytics, predicted the rise of “affinity guilds.” She said, “Until we have a mechanism users trust with their unique online identities, online communication will be increasingly shaped by negative activities, with users increasingly forced to engage in avoidance behaviors to dodge trolls and harassment. … New online structures, something like affinity guilds, will evolve that allow individuals to associate with and benefit from the protection of and curation of a trusted group. People need extremely well-designed interfaces to control the barrage of content coming to their awareness. Public discourse forums will increasingly use artificial intelligence, machine learning, and wisdom-of-crowds reputation-management techniques to help keep dialog civil. If we build in audit trails, audits, and transparency to our forums, the bad effects can be recognized and mitigated. Citizens tend to conflate a host individual or organization’s enforcement of rules of civil exchange (such as removing an offensive post from one’s own Facebook page) with free speech abridgement. There will continue to be many, many venues where individuals may exercise their right to free speech; one individual’s right to speak (or publish) doesn’t require any other individual to ‘hear and attend.’ Better education and tools to control and curate our online activities can help. Blockchain technologies hold much promise for giving individuals this appropriate control over their attention, awareness, and the data we all generate through our actions. They will require being uniquely identified in transactions and movements, and readable to holders of the keys. A thoughtful, robust architecture and systems can give individuals control over the parties who hold those keys.”

An anonymous respondent put the two most likely solutions succinctly: “AI controls will limit the blatantly obvious offensive trolling. That will take a lot of the sting out of the problem. Identification controls will minimize a lot of the remaining negative elements, though it will also clearly lead to the suppression of unpopular opinions.”

Klaus Æ. Mogensen, senior futurist at the Copenhagen Institute for Futures Studies, said, “I expect that automated context analysis will weed out most trolls and harassers the way that spam filters weed out most spam today.”
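Mogensen’s analogy maps directly onto standard text classification: the same naive Bayes approach behind early spam filters can be pointed at abusive comments. Here is a minimal sketch using scikit-learn, with a tiny, invented training set; a real system would need far more data and careful handling of false positives.

```python
# Minimal sketch of spam-filter-style "automated context analysis":
# a naive Bayes classifier over word counts, trained on labeled comments.
# The four training examples and their labels are invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

train_texts = [
    "thanks for the thoughtful reply",
    "interesting article, well argued",
    "you people are all brainwashed sheep",
    "nobody cares about your garbage opinion",
]
train_labels = [0, 0, 1, 1]  # 0 = civil, 1 = abusive

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(train_texts)
model = MultinomialNB().fit(X, train_labels)

def is_abusive(text: str) -> bool:
    """Classify a new comment using the toy model above."""
    return bool(model.predict(vectorizer.transform([text]))[0])

print(is_abusive("what a garbage opinion"))  # True on this toy training set
```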

Advances will help reduce, filter or block the spam, harassment, and trolls … Without these steps, the online world will go the way of the telephone – it may ‘ring,’ but no one will pick it up.
Anonymous respondent

Evan Selinger, professor of philosophy at the Rochester Institute of Technology, commented that companies must assure some level of comfort to keep growing their user bases, adding, “Accordingly, they are working harder to ensure that their platforms are designed to optimize doing things like automatically detecting harassment, easily allowing for users to report harassment, and swiftly acting upon harassment complaints by applying sanctions derived from clear Community Guidelines and Terms of Service that revolve around expectations of civility. … I also imagine a robust software market emerging of digital ventriloquists that combine predictive analytics with algorithms that interpret the appropriateness of various remarks. For example, the software could detect you’re communicating with a person or member of a group that, historically, you’ve had a hard time being civil with. It could then data-mine your past conversations and recommend a socially acceptable response to that person that’s worded in your own personal style of writing.”
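A minimal sketch of the pre-send check in Selinger’s “digital ventriloquist” idea follows; the per-contact hostility scores and the 0.5 threshold are invented for illustration, and a real tool would also score the draft itself and generate the suggested rewording.

```python
# Hedged sketch of a pre-send civility check: before a reply goes out,
# look up how hostile the sender's past exchanges with this contact were.
# The history store, scores, and threshold are illustrative assumptions.
past_hostility = {        # contact -> mean hostility of past replies, in [0, 1]
    "uncle_ed": 0.72,
    "coworker_sam": 0.08,
}

def pre_send_check(contact: str, draft: str) -> str:
    """Warn when history suggests this exchange tends to turn uncivil.

    A real tool would also run sentiment analysis on `draft` and
    propose a rewording in the sender's own style.
    """
    if past_hostility.get(contact, 0.0) > 0.5:
        return ("History suggests exchanges with this contact get heated; "
                "consider rephrasing before sending.")
    return "OK to send."

print(pre_send_check("uncle_ed", "Oh, here we go again..."))
```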

Thomas Claburn, editor-at-large at InformationWeek, wrote, “I expect more pressure on service providers to police their services more actively. And I suspect people will be more careful about what they say because it will be harder to hide behind pseudonyms. Also, I anticipate more attention will be paid to application and website design that discourages or mitigates negative interaction.”

An anonymous social scientist said, “Advances will help reduce, filter or block the spam, harassment, and trolls and preserve the intents and purposes of an online space for public discourse. Without these steps, the online world will go the way of the telephone – it may ‘ring,’ but no one will pick it up.”

An anonymous respondent wrote, “Curation is too difficult. Tools to manage negative responses will get better. It will probably be a combination of automated and crowdsourced management.”

Jennifer Zickerman, an entrepreneur, commented, “More-active moderation will become the norm in online discourse. I expect that this will be driven by new anti-harassment laws; a greater sense of social responsibility among organizations that host spaces for discourse; and society’s decreasing tolerance for racism, sexism, bullying, etc. We are already seeing this trend. While some technological solutions will help organizations moderate their discourse spaces, in the next ten years moderation will continue to be mostly a human task. This gives larger organizations with bigger resources an advantage. Smaller organizations may not have the resources to have their own spaces for discourse.”

While technological solutions are expected to lead the way in dealing with bad actors and misinformation online, some predict that a degree of human moderation will continue to be an important part of the system in the next decade.

Demian Perry, mobile director at NPR, said, “Jack Dorsey said it best: ‘Abuse is not civil discourse.’ As more of our lives move online, people will naturally gravitate, as they do in the real world, to healthy, positive relationships. The success of online communities will hinge on the extent to which they are able to prevent the emergence of a hostile environment in the spaces under their stewardship. Algorithms will play an increasing role, but probably never fully replace the need for human curators in keeping our online communities civil.”

Annie Pettit, vice president of data awesomeness at Research Now, observed, “With the advent of artificial intelligence, many companies will build processes that are better able and more quickly able to detect and deal with inappropriate negativity. Simply seeing less negativity means that fewer people will contribute their own negativity or share other negativity.”

There will be partitioning, exclusion, and division of online outlets, social platforms and open spaces

Some respondents predicted the increased fragmentation of existing social platforms and online services. They expect that over the next decade non-hostile “safe spaces” will emerge where controlled discourse can flourish. Many pointed out the downsides of these approaches – a wider selection of those comments about the negatives of such segmentation is shared in the next section of this report.

The internet will evolve into a ‘safe zone,’ and the more spirited discussions will move onto darknets specifically set up to encourage open and uncensored discussion on topics of the day.
Garland McCoy

Valerie Bock, of VCB Consulting, commented, “There will be free-for-all spaces and more-tightly-moderated walled gardens, depending on the sponsor’s strategic goals. There will also be private spaces maintained by individuals and groups for specific purposes. These will tend to be more reliably civil, again, because people will be known to one another and will face consequences for behavior outside of group norms.”

A computer security professor at Purdue University said, “I fully expect we will also see further partitioning and divide among outlets – there will be few ‘places’ where many points of view can be expressed and discussed civilly. There also is likely to be an increase in slanted ‘fact’ sites, designed to bolster partisan views by how history and data is presented.”

Bart Knijnenburg, assistant professor in human-centered computing at Clemson University, said, “We are still figuring out the netiquette of online social interaction. Networks seem to rearrange themselves over time (newsgroups -> IRC -> MySpace -> Facebook) and interaction becomes more inclusive and more structured. I believe we are at the point of highest integration but lowest regulation. Over the next decade social networks will become more fractured and in some cases more (self-)regulated. This will reduce the negative experiences, as the benevolent majority becomes relatively more vocal and crowds out the trolls. I say this with a worldview in mind; I feel that in the U.S. the current political reality will negatively impact online discourse in the short run, but this problem may resolve itself within the decade.”

Garland McCoy, president of the Technology Education Institute, predicted there will be more “self-appointed ‘PC’ police, and for those engaged in public discourse on the internet who share items deemed inappropriate or not ‘PC’ there will be swift consequences. … The internet will evolve into a ‘safe zone,’ and the more spirited discussions will move onto darknets specifically set up to encourage open and uncensored discussion on topics of the day.”

Will Ludwigsen, a respondent who shared no additional identifying details, said, “My suspicion (perhaps my hope, now that I think about it), is that the internet will naturally bifurcate into a wild, anything-goes environment and a curated one. The need for safe spaces and reliable information will eventually lead to more ‘trusted’ and ‘moderated’ places, though of course the question is whom we’re trusting to do the moderating (probably corporations) and what’s in it for them.”

Irina Shklovski, associate professor at the IT University of Copenhagen, observed, “There is no one public discourse online, but there are myriad spaces where public discourse happens. These are controlled by different actors, they develop different norms of engagement, and they may or may not fall victim to trolling and significant negative interactions. There are also many different publics that engage in different sorts of discourse, and this will only increase in number and diversity over time. Perhaps the current threat of trolling and harassment is one reason for an increasing fragmentation and focusing of public discourse into areas and spaces that are kept ‘safe’ for certain types of discourse, managed and protected. What the effect of this sort of fragmentation will be is hard to predict.”

Michael Whitaker, vice president of emerging solutions at ICF International, commented, “I expect online communication to be less shaped by negative activities, but not necessarily because of behavior changes by the broader community. … We are likely headed toward more-insular online communities where you speak to and hear more from like-minded people. Obvious trolls will become easier to mute or ignore (either manually or by algorithm) within these communities. This future is not necessarily desirable for meaningful social discourse that crosses ideologies but it is a trend that may emerge that will make online communications less negative within the spheres in which most people interact.”

An anonymous respondent predicted, “It’s more likely that we’ll see more corporate-controlled, moderated, ‘closed’ spaces masquerading as open spaces in the next decade.”

“Algorithms, driven by marketers seeking more predictive powers, will get more proficient at keeping people isolated to their own political-taste regimes,” wrote an anonymous design professor.

Bob Frankston, internet pioneer and software innovator, wrote, “I see negative activities having an effect but the effect will likely be from communities that shield themselves from the larger world. We’re still working out how to form and scale communities.”

An anonymous chief scientist added, “Like the physical world, the online world will develop no-go zones. Polarization will continue and grow more accurate – who is in, who is out.”

An anonymous health information specialist added, “The really awful, violent anonymous speech will get pushed to the darker recesses of the internet where its authors find their own kind and support.”

D. Yvette Wohn, assistant professor at the New Jersey Institute of Technology, commented, “Bad actors and harassment will not go away, and some services may lose users for trying to aggressively eliminate these forces while others do not, but certain technologies that target underage users will be able to create ‘safe’ places where negativity will be constrained to constructive criticism. These safe places will arise through a joint effort between community policing and system designs that encourage supportive behavior. Mainstream social media services will not be initiating this – rather it will arise from youth with coding and social skills who self-identify this need for a safe space.”

An anonymous senior account representative stated, “It’s a process of natural selection: non-safe environments disappear and safe environments develop better moderation techniques and spread those around to other communities.”

Trolls and other bad actors will fight back, innovating around any barriers they face

Some respondents said they expect the level of angst and concern over social behaviors will fluctuate, depending upon a number of forces. Peter Morville, president of Semantic Studios, said, “The nature of public discourse online is in a state of persistent disequilibrium (see “Out of Control” by Kevin Kelly), so I expect the pendulum to swing back and forth between better and worse.”

Simply, it will be an arms race of design between new technologies and the way they are exploited and used.
Erik Johnston

Many predict that human nature will remain the same, and the trolls and misinformation-disseminating manipulators the filters and bots are aimed at will effectively “fight back” with altered behaviors and new technological approaches in a seesaw battle often described as an “arms race.”

Axel Bruns, a professor at the Queensland University of Technology’s Digital Media Research Centre, said, “There is an ongoing arms race between trolls and platform providers, and a real limit to the extent that trolling can be combatted using purely technological means without simultaneously undermining the open and free environment that makes many social media platforms so attractive to users. Just as important an approach to addressing trolling is social measures, including digital literacies education and peer pressure. Here, unfortunately, I see the present prevalence of trolling as an expression of a broader societal trend across many developed nations towards belligerent factionalism in public debate, with particular attacks directed at women as well as ethnic, religious, and sexual minorities. Unless this trend can be arrested and reversed, I don’t expect the problem of trolling to be reduced, either.”

Erik Johnston, an associate professor and director of the Center for Policy Informatics at Arizona State University, observed, “Simply, it will be an arms race of design between new technologies and the way they are exploited and used. We wrote a paper called “Crowdsourcing Civility” that talks about how once different threats to a community are identified, there are a wide variety of solutions for addressing these concerns.”

Terry Langendoen, an expert at the U.S. National Science Foundation, said, “Management, including detection and suppression of the activities of bad actors, is a form of defensive warfare on the part of those we may call ‘good actors,’ so we can comfortably predict that the conflict will take the form of an arms race – in fact it already has, and while there is no counterpart of a nuclear deterrent, the means for controlling bad behavior in social media is now and will continue to be widely distributed, so that those who may be harmed by such behavior will increasingly have access to resources for defending themselves.”

David Lankes, professor and director at the University of South Carolina’s School of Library and Information Science, wrote, “I see the discourse on the Net evolving into a competition between trolls, advocates of free speech, and increased automation seeking to filter speech on public sites. We are already seeing the efforts of large search firms using natural language processing and advanced machine learning to create chatbots and filtering software to identify extremism and harassment. The complexity of this software will continue to increase in sophistication and effectiveness; however, it is ultimately competing against nuances of interpretation and attempts to be heard by bad actors.”

An anonymous respondent confidently expressed faith in communities overcoming assaults by trolls and manipulators, writing, “A mixture of lessening anonymity and improving technologies will work to combat new avenues of online harassment and continued fragmentation into echo chambers. Trolls will always be there and find ways around new tech, but communities will continue to move apart into their own spheres and help isolate them from general consumption.”

An anonymous senior security architect who works with a national telecommunications provider predicted the opposite, writing, “I expect online communication to be more shaped by negative activities over the next decade. Social media and search engines … are our tools for engaging in public discourse, finding venues for conversation, and attempting to learn about events. Previously, real efforts to maximize this effect for an intended outcome were the purview of organizations and specialists. This is being democratized and now any small, active group can make an effort to sway outcomes with very little monetary investment. We’ve already seen it used for commercial, military, political, and criminal ends. Manufactured urgency and outrage are triggers that people respond to. … The technological means to automate defenses against adversarial manipulation always trail behind. They must. Sometimes we really do need to see things urgently; sometimes we really do need to be outraged. I expect that some voices online will naturally be silenced as a result, to the detriment of free speech. This may happen naturally as people who would otherwise join in discourse will choose not to for various reasons. It may happen as legislation pushes toward a more-censored view of the Matrix through right-to-be-forgotten style rulings, attempts to automate filtering of offensive speech, or abuse of existing copyright and digital rights laws. It may happen as the companies providing these platforms filter what their users see algorithmically, further isolating the bubbles of conversation that are producing such negative activity today. And I’m hard-pressed to see which of those is the worse outcome. There is already a very negative consequence to privacy and I expect this to get worse. Doxxing of users has become nearly as common as short-lived DDoS [distributed denial of service] attacks against online gamers, which swings between idle amusement and revenge while continuing to be cheaper. Doxxing of companies has proven to be similarly damaging to the privacy of employees and clients. The Web has never taken security particularly seriously and this trend is continuing. The current focus on end-to-end obfuscation is great for individual communications, but provides little support against the troves of information kept about us by every site, large and small, that we interact with. This has many carryover effects, as some kinds of discourse naturally lend themselves to vitriolic and privacy-damaging attacks. The overall state of privacy and security puts our services, histories, data trails, and conversation at risk whenever someone is sufficiently motivated to retribution. Normalizing this activity in society does not lessen the damage it does to speech.”

Peter Levine, Lincoln Filene professor and associate dean for research at Tisch College of Civic Life at Tufts University, predicted a tie, commenting, “Lots of bad actors will continue to swarm online public discourse, but the designers and owners of Web properties will introduce tools to counter them. Not knowing who will prevail, I am predicting a stalemate.”