The Future of Truth and Misinformation Online

Theme 2: The information environment will not improve because technology will create new challenges that can’t or won’t be countered effectively and at scale

Many respondents who expect no improvement in the information environment argue that certain actors in government and business, as well as other individuals with propaganda agendas and special interests, are turning technology to their advantage in the spread of misinformation. There are too many of them, and they are clever enough that they will continue to infect the online information environment, according to these experts.

A clear articulation of this view came from Howard Greenstein, adjunct professor of management studies at Columbia University. He argued, “This is an asymmetric problem. It is much easier for single actors and small groups to create things that are spread widely, and once out, are hard to ‘take back.’” Moreover, the process of distinguishing between legitimate information and questionable material is very difficult, those who support this line of reasoning said.

An anonymous respondent wrote, “Whack-a-mole seems to be our future. There is an inability to prevent new ways of disrupting our information systems. New pathways will emerge as old ones are closed.”

Those generally acting for themselves and not the public good have the advantage, and they are likely to stay ahead in the information wars

Eric Burger, research professor of computer science and director of the Georgetown Center for Secure Communications in Washington, D.C., replied, “Distinguishing between fake news, humor, strange-but-true news or unpopular news is too hard for humans to figure out, no less a computer.”

Wendell Wallach, a transdisciplinary scholar focused on the ethics and governance of emerging technologies at The Hastings Center, wrote, “While means will be developed to filter out existing forms of misinformation, the ability to undermine core values will continue to be relatively easy while steps to remediate destructive activities will be much harder and more costly. Furthermore, a gap will expand as technological possibilities speed ahead of their ethical-legal oversight. Those willing to exploit this gap for ideological purposes and personal gain will continue to do so.”

Justin Reich, assistant professor of comparative media studies at MIT, noted, “Strategies to label fake news will require algorithmic or crowd-sourced approaches. Purveyors of fake news are quite savvy at reverse engineering and gaming algorithms, and equally adept at mobilizing crowds to apply ‘fake’ labels to their positions and ‘trusted’ labels to their opponents.”

Sean Goggins, an associate professor and sociotechnical data scientist, wrote, “Our technical capacity to manipulate information will continue to grow. With investment tilted toward for-profit enterprise and the intelligence community and away from public-sector research like that sponsored by the National Science Foundation, it’s doubtful that technology for detecting misinformation will keep up with technologies designed to spread misinformation.”

An associate professor of communication studies at a Washington-based university said, “The fake news problem is not one that can be fixed with engineering or technological intervention short of a total reimagination of communication network architecture.”

Fredric Litto, professor emeritus at the University of São Paulo in Brazil, wrote, “The incredibly complex nature of contemporary information technology will inevitably make for a continuing battle to reduce (note: I dare not say eliminate) false and undesirable ‘news’ and other information permeating electronic media. Without a foolproof method of truly eliminating the possibility of anonymity – and I cannot see this really happening by 2027 – there will be no end to the malicious use of most, if not all, modes of communication.”

Michel Grossetti, research director at CNRS (French National Center for Scientific Research), commented, “It is the old story of the bullet and the cuirass. Improvement on one side, improvement on the other.”

Daniel Berleant, author of the book “The Human Race to the Future,” predicted, “Digital and psychological technologies for the spreading of misinformation will continue to improve, and there will always be actors motivated to use it. Ways to prevent it will develop as well but will be playing catch-up rather than taking the lead.”

John Lazzaro, a retired electrical engineering and computing sciences professor at the University of California, Berkeley, wrote, “I don’t think society can reach a consensus on what constitutes misinformation, and so trying to automate the removal of misinformation won’t be possible.”

Andreas Birkbak, assistant professor at Aalborg University in Copenhagen, said, “The information environment will not improve because there is no way to automate fact checking. Facts are context-dependent.”

A North American program officer wrote, “While technology may stop bots from spreading fake news, I don’t think it will be that easy to stop people who want to believe the fake news and/or make up the fake news.”

A researcher based in North America said, “News aggregators such as Facebook will get better at removing low-information content from their news feeds but the amount of mis/disinformation will continue to increase.”

Joseph Konstan, distinguished professor of computer science and engineering at the University of Minnesota, observed, “Those trying to manipulate the public have great resources and ingenuity. While there are technologies that can help identify reliable information, I have little confidence that we are ready for widespread adoption of these technologies (and the censorship risks that relate to them).”

A former software systems architect replied, “Bad actors will always find ways to work around technical measures. In addition, it is always going to be human actors involved in the establishment of trust relationships and those can be gamed. I do not envision media organizations being willing participants.”

Can technology detect and flag trustworthy information? A North American research scientist said the idea of basing likely veracity on people’s previous information-sharing doesn’t always work, writing, “People don’t just share information because they think it’s true. They share to mark identity. Truth-seeking algorithms, etc. don’t address this crucial component.”

A vice president for an online information company wrote, “It is really hard to automatically determine that some assertion is fake news or false. Using social media and ‘voting’ is overcome by botnets, for example.”

J. Cychosz, a content manager and curator for a scientific research organization, commented, “False information has always been around and will continue to remain, technology will emerge that will help identify falsehoods and culture will shift, but there will always be those who find a path around.”

Philippa Smith, research manager and senior lecturer in new media at Auckland University of Technology, noted, “Efforts to keep pace with technology and somehow counteract the spread of misinformation or fake news may be more difficult than we imagine. I have concerns that the horse has bolted when it comes to trying to improve the information environment.”

Ed Terpening, an industry analyst with the Altimeter Group, replied, “Disinformation will accelerate, as institutions we’ve thought of as unbiased widen polarization through either hiding or interpreting facts that fulfill an agenda.”

Basavaraj Patil, principal architect at AT&T, wrote, “The rapid pace of technological change and the impact of false information on a number of aspects of life are key drivers.”

Bradford W. Hesse, chief of the health communication and informatics research branch of the U.S. National Cancer Institute, said, “Communication specialists have been dealing with the consequences of propaganda, misinformation and misperceived information from before and throughout the Enlightenment. What has changed is the speed with which new anomalies are detected and entered into the public discourse. The same accelerated capacity will help move the needle on social discourse about the problem, while experimenting with new solutions.”

Liam Quin, an information specialist at the World Wide Web Consortium, said the information environment is unlikely to be improved because “human nature won’t change in such a short time, and people will find ways around technology.”

Alan Inouye, director of public policy for the American Library Association, commented, “New technologies will continue to provide bountiful opportunities for mischief. We’ll be in the position of playing defense as new abuses or attacks arise.” However, he added, “This will be a future that is, on balance, not worse than today’s situation.”

A distinguished engineer for a major provider of IT solutions and hardware warned that any sort of filtering system will flag, filter or delete useful content along with the misinformation, writing, “It’s not possible to censor the untrustworthy news without filtering some trustworthy news. That struggle means the situation is unlikely to improve.”

Weaponized narratives and other false content will be magnified by social media, online filter bubbles and AI

Some respondents noted that the people best served by manipulating public sentiment, arousing fear and anger and obfuscating reality, are emboldened by their success so far, and that this gives them plenty of incentive to make things worse in the next decade. As a professor and author based in the United States put it, “Too many people have realized that lying helps their cause.”

An anonymous respondent based in Asia/Southeast Asia replied, “We are being ‘gamed,’ simply put.”

Alexis Rachel, user researcher and consultant, said, “The logical progression of things at this point (unless something radical occurs) is that there will be increasingly more ‘sources’ of information that are unverified and unvetted – a gift from the internet and the ubiquitous publishing platform it is. All it takes is something outrageous and plausible enough to go viral, and once out there, it becomes exceedingly difficult to extinguish – fact or fiction.”

Martin Shelton, a security researcher with a major technology company, said, “Just as it’s now straightforward to alter an image, it’s already becoming much easier to manipulate and alter documents, audio, and video, and social media users help these fires spread much faster than we can put them out.”

Matt Stempeck, a director of civic technology, noted, “The purveyors of disinformation will outpace fact-checking groups in both technology and compelling content unless social media platforms are able to stem the tide.”

An anonymous respondent wrote, “Distrust of academics and scientists is so high it’s hard to imagine how to construct a fact-checking body that would be trusted by the broader population.”

The most-effective tech solutions to misinformation will endanger people’s dwindling privacy options, and they are likely to limit free speech and remove the ability for people to be anonymous online

While some people believe more surveillance and requirements for identity authentication are go-to solutions for reining in the negative impacts of misinformation, a number of these experts said bad actors will evade these measures, and that the platform providers, governments and others taking such actions will expand unwanted surveillance and curtail civil liberties.

Fred Davis, a futurist based in North America, wrote, “Automated efforts to reduce fake news will be gamed, just like search is. That’s 20 years of gaming the system – search engine optimization and other things that corrupt the information discovery process have been in place for over 20 years, and the situation is still bad. Also, it may be difficult to implement technology because it could also be used for mass censorship. Mass censorship would have a very negative effect on free speech and society in general.”

Adam Powell, project manager at the Internet of Things Emergency Response Initiative at the University of Southern California, said, “The democratization of the internet, and of information on the internet, means just that: Everyone has and will have access to receiving and creating information, just as at a watercooler. Not only won’t the internet suddenly become ‘responsible,’ it shouldn’t, because that is how totalitarian regimes flourish (see: Firewall, Great, of China).”

An eLearning specialist observed, “Any system deeming itself to have the ability to ‘judge’ information as valid or invalid is inherently biased.” And a professor and researcher noted, “In an open society, there is no prior determination of what information is genuine or fake.”

The owner of a consultancy replied, “We’re headed to a world where most people will use sources white-listed (explicitly or not) by third parties (e.g., Facebook, Apple, etc.).”

A distinguished professor emeritus of political science at a U.S. university wrote, “Misinformation will continue to thrive because of the long (and valuable) tradition of freedom of expression. Censorship will be rejected.”

A professor at a major U.S. university replied, “Surveillance technologies and financial incentives will generate greater surveillance.” A retired university professor predicted, “Increased censorship and mass surveillance will tend to create official ‘truths’ in various parts of the world. In the United States, corporate filtering of information will impose the views of the economic elite.”

Among the respondents to this canvassing who recommended the removal of anonymity was Romella Janene El Kharzazi, a content producer and entrepreneur, who said, “One obvious solution is required authentication; fake news is spread anonymously and if that is taken away, then half of the battle is fought and won.” A research scientist based in Europe predicted, “The different actors will take appropriate measures – including efficient interfaces for reporting and automatic detection – and implement efficient decision mechanisms for the censorship of such content.”

A senior researcher and distinguished fellow for a major futures consultancy observed, “Reliable fact checking is possible. Google in particular has both the computational resources and talent to successfully launch a good service. Facebook may also make progress, perhaps in a public consortium including Google. Twitter is problematic and would need major re-structuring including a strict, true names policy for accounts – which is controversial among some privacy sectors.”

A retired consultant and strategist for U.S. government organizations replied, “Regardless of technological improvements, the change agents here are going to have to be, broadly speaking, U.S. Supreme Court judges’ rulings on constitutional interpretations of free speech, communication access and any number of other constitutional issues brought to the fore by many actors at both the state and national level, and these numerous judicial change agents’ decisions are, in turn, affected by the citizen opinion and behavior.”

Anonymous respondents also commented:

  • “The means and speed of dissemination have changed [the information environment]. It cannot be legislated without limiting free speech.”
  • “It’s impossible to filter content without bias.”
  • “The internet is designed to be decentralized; not with the purpose of promoting accuracy or social order.”
  • “There is no way – short of overt censorship – to keep any given individual from expressing any given thought.”
  • “Blocking (a.k.a. censoring) information is just too dangerous.”
  • “I do not think it can be stopped without doing a lot of damage to freedom of speech.”
  • “Forces of evil will get through the filters and continue to do damage while the majority will lose civil rights and many will be filtered or banned for no good reason.”
  • “It’s a hard problem to solve fairly.”
