The Future of Truth and Misinformation Online

Theme 1: The information environment will not improve. The problem is human nature

Misinformation and “fake news” have been around for as long as people have communicated. But today’s instant, low-budget, far-reaching communications capabilities have the potential to make the problem orders of magnitude more dangerous than in the past.

As Frederic Filloux explains: “‘Misinformation’ – a broader concept that encompasses intentional deception, low-quality information and hyperpartisan news – is seen as a serious threat to democracies. … The Dark Web harbours vast and inexpensive resources to take advantage of the social loudspeaker. For a few hundred bucks, anyone can buy thousands of social media accounts that are old enough to be credible, or millions of email addresses. Also, by using Mechanical Turk or similar cheap crowdsourcing services widely available on the open web, anyone can hire legions of ‘writers’ who will help to propagate any message or ideology on a massive scale. That trade is likely to grow and flourish with the emergence of what experts call the ‘weaponized artificial intelligence propaganda,’ a black magic that leverages microtargeting where fake news stories (or hyperpartisan ones) will be tailored down to the individual level and distributed by a swarm of bots. What we see unfolding right before our eyes is nothing less than Moore’s Law applied to the distribution of misinformation: An exponential growth of available technology coupled with a rapid collapse of costs.”

Roughly half the experts in this canvassing generally agreed with Filloux’s description of how technologies are emerging to enable misinformation distribution, and they worry about what may come next. Many expressed deep concerns about people’s primal traits, behaviors and cognitive responses and how they play out in new digital spaces. They said digital platforms are often amplifying divisions and contentiousness, driving users to mistrust those not in their “tribe.”

As William L. Schrader, a former CEO with PSINet, wrote, “Mankind has always lied, and always will; which is why the winners of wars get to write the history their way and others have no say, but with the internet, the losers have a say! So which is better? Both sides, or just the winner? We have both sides today.”

Respondents discussed the scale of the problem and how difficult it can be to assess and weed out bad information, saying that even sophisticated information consumers are likely to struggle in the coming information environment and credulous consumers may have little chance of working their way to true information. Nathaniel Borenstein, chief scientist at Mimecast, commented, “Internet technologies permit anyone to publish anything. Any attempt to improve the veracity of news must be done by some authority, and people don’t trust the same authorities, so they will ultimately get the news that their preferred authority wants them to have. There is nothing to stop them choosing an insane person as their authority.”

More people = more problems. The internet’s continuous growth and accelerating innovation allow more people and artificial intelligence (AI) to create and instantly spread manipulative narratives

Some experts argued that the scale of the problem – too much bad information too easily disseminated – is their major concern. The internet facilitates too many information actors with divergent motives to allow for consistent identification of reliable information and effective strategies to flag false information.

Andrew Odlyzko, professor of math and former head of the University of Minnesota’s Supercomputing Institute, observed, “‘What is truth?’ has almost always been a contentious issue. Technological developments make it possible for more groups to construct their ‘alternate realities,’ and the temptation to do it is likely to be irresistible.”

Andrew Nachison, author, futurist and founder of WeMedia, noted, “Technology will not overcome malevolence. Systems built to censor communication, even malevolent communication, will be countered by people who circumvent them.”

David Weinberger, writer and senior researcher at Harvard University’s Berkman Klein Center for Internet & Society, noted, “It is an urgent problem, so it will be addressed urgently, and imperfectly.”

Jan Schaffer, executive director of J-Lab, said, “There are so many people seeking to disseminate fake news and produce fake videos in which officials appear to be talking that it will be impossible to shut them all down. Twitter and Facebook and other social media players could play a stronger role. Only a few national news organizations will be trusted sources – if they can manage to survive.”

Brian Cute, longtime internet executive and ICANN participant, said, “I am not optimistic that humans will collectively develop the type of rigorous habits that can positively impact the fake news environment. Humans have to become more effective consumers of information for the environment to improve. That means they have to be active and effective ‘editors’ of the information they consume. And that means they have to be active and effective editors of the information they share on the internet, because poorly researched information feeds the fake news cycle.”

Rajnesh Singh, Asia-Pacific director for a major internet policy and standards organization, observed, “The issue will be how to cope with the volume of information that is generated and the proportion of it that is inaccurate or fake.”

Steve Axler, a user-experience researcher, replied, “Social media and the web are on too large a scale to control content.”

A software engineer referred to the human quest for power and authority as the underlying problem, writing, “Automation, control and monopolization of information sources and distribution channels will expand, with a goal to monetize or obfuscate.”

Allan Shearer, associate professor at the University of Texas, Austin, observed, “The problem is the combination of the proliferation of platforms to post news, an increasing sense of agency in each person that his/her views matter, and the blurring of facts and opinions.”

A vice president for stakeholder engagement said, “With a deluge of data, people look for shortcuts to determine what they believe, making them susceptible to filter bubbles and manipulation.”

Jens Ambsdorf, CEO at The Lighthouse Foundation, based in Germany, replied, “The variability of information will increase. The amount of ‘noise’ and retweeted stuff will increase and without skills and tools it will become more difficult for citizens to sort out reliable from unreliable sources.”

A professor at Harvard Business School wrote, “The vast majority of new users and a majority of existing users are not sophisticated readers of news facts, slants or content, nor should we expect them to be. Meanwhile, the methods for manipulation are getting better.”

Diana Ascher, information scholar at the University of California, Los Angeles, observed, “Fake news, misinformation, disinformation and propaganda are not new; what’s new is the algorithmic propagation of such information. In my research, I call this the new yellow journalism.”

Axel Bender, a group leader for Defence Science and Technology (DST) Group of Australia, said, “The veracity of information is unlikely to improve as 1) there will be an increase in the number and heterogeneity of (mis)information sources; and 2) artificially intelligent misinformation detectors will not be smart enough to recognise semantically sophisticated misinformation.”

Collette Sosnowy, a respondent who shared no additional personal details, wrote, “The sources of information and the speed with which they are spread are so numerous I don’t see how they could effectively be curtailed.”

Sebastian Benthall, junior research scientist at New York University Steinhardt, responded, “The information environment is getting more complex. This complexity provides more opportunities for production and consumption of misinformation.”

Tiffany Shlain, filmmaker and founder of the Webby Awards, wrote, “I am concerned that as artificial intelligences advance, distinguishing between what is written by a human and what is generated by a bot will become more difficult.”

Matt Moore, a business leader, observed, “The pressures driving the creation of ‘fake news’ will only increase – political partisanship, inter-state rivalry – and the technologies needed to create and disseminate fake news will also increase in power and decrease in cost. New verification tools will emerge but these will not be sufficient to counter these other forces.”

Jon Lebkowsky, web consultant/developer, author and activist, commented, “Given the complexity of the evolving ecosystem, it will be hard to get a handle on it. The decentralization of education is another difficult aspect: universal centralized digital literacy education could potentially mitigate the problem, but we could be moving away from universal standard educational systems.”

The executive director of a major global privacy advocacy organization said, “What’s essentially happening today is basic human behaviour and powerful systems at play. It is only out-of-touch advocates and politicians who believe we can somehow constrain these results.”

Veronika Valdova, managing partner at Arete-Zoe, noted, “Rogue regimes like Russia will continue exploiting the information environment to gain as much power and influence as possible. Jurisdictional constraints will make intervention less practicable. Also, whilst the overall information environment in English-speaking countries might improve due to the employment of artificial intelligence and easier neutralization of bots, this may not necessarily be the case for small nations in Europe where the environment is compartmented by language.”

Joel Reidenberg, chair and professor of law at Fordham University, wrote, “The complexity of the information ecosystem and the public’s preference for filter bubbles will make improvements very difficult to achieve at scale.”

An author and journalist based in North America wrote, “Fragmenting social groups and powerful economic interests have the motive and means to create their own narratives. Who is the status quo that can defeat this in a modern society that likes to define itself as disruptive, countercultural, rebel, radical – choose the term that fits your tribe.”

Anonymous respondents also commented:

  • “There is just too much information and the environment has become so fragmented.”
  • “The sheer volume of information and communication is too much.”
  • “Many users seem to be indifferent or confused about objectively accurate information, which is difficult to confirm in an environment of information overload.”

Humans are by nature selfish, tribal, gullible convenience seekers who put the most trust in that which seems familiar

A share of these respondents supported a view articulated by Peter Eckart, director of information technology at the Illinois Public Health Institute. He argued, “The problem isn’t with the sources of information, but with the hearers of it. If we don’t increase our collective ability to critically analyze the information before us, all of the expert systems in the world won’t help us.” People believe what they want to believe, these experts argued, and now have new ways to disseminate the things they believe to others.

David Sarokin, writer, commented, “People spread the information they want to spread, reliable or not. There’s no technology that will minimize that tendency.”

Helen Holder, distinguished technologist at Hewlett Packard (HP), said, “People have a strong tendency to believe things that align with their existing understanding or views. Unreliable information will have a substantial advantage wherever it reinforces biases, making it difficult to discredit or correct. Also, people are more inclined to believe information received from more than one source, and the internet makes it trivial to artificially simulate multiple sources and higher levels of popular support or belief.”

Bill Jones, chairman of Global Village Ltd., predicted, “Trust can be so easily abused that it’s our collective ability to discern false from true, which ultimately is the key, but that is fraught with challenges. No one can do it for us.”

A futurist/consultant based in North America said, “The toxicity of the modern information landscape is as much attributable to vulnerabilities in human neurobiology as it is to anything embedded in software systems. Many of us, including those with the most control over the information environment, badly want things to improve, but it’s unclear to me that purely technical methods can solve these problems.”

Cliff Cook, planning information manager for the City of Cambridge, Massachusetts, noted, “Fake news and related problems thrive when they have a receptive audience. The underlying problem is not one of fake news – rumors were no doubt a problem in ancient Rome and the court of King Henry VIII – but the presence of a receptive audience. Until a means is found to heal the fundamental breakdown in trust among Americans, I do not see matters improving, no matter what the technical fix.”

An anonymous respondent wrote, “Google and Facebook are focusing money and attention on the problem of false information. … We have not yet reached a societal tipping point where facts are valued, however.”

Matt Armstrong, an independent research fellow working with King’s College and former executive director of the U.S. Advisory Commission on Public Diplomacy, replied, “The influence of bad information will not change until people change. At present, there is little indication that people will alter their consumption habits. When ‘I heard it on the internet’ is a mark of authority rather than derision as it was, we are in trouble. This is coupled with the disappointing reality that we are now in a real war of words where many consumers do not check whether the words are/were/will be supported by actions or facts. The words of now are all that matter to too many audiences.”

An assistant professor of political science wrote, “Improving information environments does little to address demand for misinformation by users.”

An anonymous research scientist observed, “False narratives are not new to the internet, but authority figures are now also beginning to create them.”

A former journalism professor and author of a book on the future of news commented, “The information superhighway’s very speed and ease have made people sloppier thinkers, not more discerning.”

A researcher based in Europe replied, “The problem with fake news is not a technological one, but one related to human nature, fear, ignorance and power. … In addition, as new tools are developed to fight fake news, those interested in spreading them will also become more savvy and sophisticated.”

Many respondents mentioned distrust in authority as a motivating factor behind the uptick in the spread of misinformation, and some said political polarization and the destruction of trust are feeding the emergence of more misinformation.

Daniel Kreiss, associate professor of communication at University of North Carolina, Chapel Hill, commented, “Misinformation/fake news/ideological/identity media is a political problem. They are the outcome, not the cause, of political polarization.”

A senior fellow at a center focusing on democracy and the rule of law wrote, “Many people do not care about the veracity of the news they consume and circulate to others, and these people will continue spreading false information; those who do so from within established democracies can be punished/penalized, but many will remain in non-democracies where access to reliable information will deteriorate. My prediction is that in parts of the world things will improve, in others they will deteriorate. On average things will not improve.”

Anonymous respondents also wrote:

  • “To really solve this issue we need to look deeper at what truth means and who cares about it. It will take more than a decade to sort that out and implement solutions.”
  • “Collective-action problems require a collective-action response, and I don’t think we’ll manage that in the international environment.”
  • “The information environment reflects society at its best or worst; changes in human behavior, not technology, will impact on the information environment.”
  • “At best, the definition of ‘lie’ will simply change and official disinformation will be called information anyway.”
  • “I have yet to see any evidence that the most-active political media consumers want more facts and less opinion.”
  • “There has never been a wholly truthful human environment, and there are too many vested interests in fantasy, fiction and untruths.”
  • “I do not think technology can keep up with people’s creativity or appetite for information they find congenial to their pre-existing beliefs.”
  • “As long as people want to believe a lie, the lie will spread.”
  • “From propaganda to humour, the natural drive to share information will overcome any obstacles that hinder it.”
  • “It will be a constant game of whack-a-mole, and polarization has now come to facts. It’s almost like facts are a philosophy class exercise now – what is truth?”

In existing economic, political and social systems, the powerful corporate and government leaders most able to improve the information environment profit most when it is in turmoil

A number of these experts predicted that little will change as long as social media platforms favor content that generates lots of clicks – and therefore ad dollars – whether the information is true or not. A typical version of this view came from Jonathan Brewer, consulting engineer for Telco2. He commented, “The incentives for social media providers are at odds with stemming the spread of misinformation. Outrageous claims and hyperbole will always generate more advertising revenue than measured analysis of an issue.”

Gina Neff, professor at the Oxford Internet Institute, said, “The economic stakes are simply too high to rein in an information ecosystem that allows false information to spread. Without the political commitment of major social media platforms to address the problem, the technical challenges to solving this problem will never be met.”

Ari Ezra Waldman, associate professor of law at the New York Law School, wrote, “The spread of misinformation will only improve if platforms take responsibility for their role in the process. So far, although intermediaries like Facebook have nodded toward doing something about ‘fake news’ and cyberharassment and other forms of misleading or harmful speech, they simultaneously continue to maintain that they are merely neutral conduits and, therefore, uneasy about maintaining any sort of control over information flow. The ‘neutral conduit’ canard is a socio-legal strategy that is little more than a fancy way of absolving themselves of responsibility for their essential role in the spread of misinformation and the decay of discourse.”

Joseph Turow, professor of communication at the University of Pennsylvania, commented, “The issues of ‘fake’ and ‘weaponized’ news are too complex to be dealt with through automated, quantitative or algorithmic means. These activities have always existed under one label or another, and their rapid distribution by activist groups, companies and governments as a result of new technologies will continue. One reason is that the high ambiguity of these terms makes legislating against them difficult without infringing on speech and the press. Another reason is that the people sending out such materials will be at least as creative as those trying to stop them.”

A professor of legal issues and ethics at one of the pre-eminent graduate schools of business in the United States said, “The basic incentive structure that promotes untrustworthy information flow won’t change, and the bad guys will improve their approaches faster than the good guys.”

Dave Burstein, editor of FastNet.news, said, “Speaking of reports on policy and technology, the important, thoroughly misleading information usually comes from the government and especially lobbyists and their shills. All governments lie, I.F. Stone taught us, and I can confirm that’s been true of both Obama’s people and the Republicans I have reported on this century. Corporate advocates with massive budgets – Verizon and AT&T in the hundreds of billions – bamboozle reporters and governments into false claims. The totally outnumbered public-interest advocates sometimes go over the line as well.”

Johanna Drucker, professor of information studies at the University of California, Los Angeles, commented, “The constructedness (sic) of discourse removes news from the frameworks in which verification can occur. Responsible journalism will continue on the basis of ethical accountability, but nothing will prevent other modes of discourse from proliferating. No controls can effectively legislate for accuracy or verity. It is a structural impossibility to suture language and the lived.”

Mercy Mutemi, legislative advisor for the Kenya Private Sector Alliance, commented, “Fake news spreads faster than genuine news. It is more attractive and ‘hot.’ We do not see corresponding efforts from genuine news peddlers to give factual information that is timely and interesting. On the contrary, reporters have become lazy, lifting articles off social media and presenting only obvious facts. Fake news peddlers have invested resources (domains and bots) to propagate their agenda. There isn’t a corresponding effort by genuine news reporters. People will get so used to being ‘duped’ that they will treat everything they read with skepticism, even real news. It will no longer be financially viable to invest in real news as the readership may go down. In such an environment, it is likely fake news will continue to thrive.”

A professor of media and communication based in Europe said, “The online information environment will not improve if its architectural design, operation and control is left to five big companies alone. If they do not open up their algorithms, data governance and business models to allow for democratic and civic participation (in other words, if there is only an economic driver to rule the information environment) the platform ecosystem will not improve its conditions to facilitate an open and democratic online world.”

A leading researcher studying the spread of misinformation observed, “The payoffs for actors who are able to set the agenda in the emerging information environment are rising quickly. Our collective understanding of and ability to monitor these threats and establish ground rules across disparate communities, geographies and end devices will be challenged.”

A research scientist at Oxford University commented, “Misinformation and disinformation and motivated reasoning are integral to platform capitalism’s business model.”

Rick Hasen, professor of law and political science at the University of California, Irvine, said, “By 2027 there will be fewer mediating institutions such as acceptable media to help readers/viewers ferret out truth. And there will be more deliberate disinformation from people in and out of the U.S.”

Raymond Hogler, professor of management at Colorado State University, replied, “Powerful state actors … will continue to disseminate false, misleading and ideologically driven narratives posing as ‘news.’”

A member of the Internet Architecture Board said, “The online advertising ecosystem is very resistant to change, and it powers the fake news ‘industry.’ Parties that could do something about it (e.g., makers of browsers) don’t have a strong incentive to do so.”

An author/editor/journalist wrote, “Confirmation bias, plus corporate manipulation, will not allow an improvement in the information environment.”

An internet pioneer and principal architect in computing science replied, “Clicks will remain paramount, and whether those clicks are on pages containing disinformation or not will be irrelevant.”

Edward Kozel, an entrepreneur and investor, predicted, “Although trusted sources (e.g., The New York Times) will remain or new ones will emerge, the urge for mass audience and advertising revenue will encourage widespread use of untrusted information.”

David Schultz, professor of political science at Hamline University, said, “The social media and political economic forces that are driving the fragmentation of truth will not significantly change in the next 10 years, meaning the forces that drive misinformation will continue.”

Paul Gardner-Stephen, senior lecturer at the College of Science & Engineering at Flinders University, noted, “Increasing technical capability and automation, combined with the demonstrated dividends that can be obtained from targeted fake news, make an arms race inevitable. Governments and political parties are the major players. This is Propaganda 2.0.”

Peter Levine, associate dean and professor at the Tisch College of Civic Life at Tufts University, observed, “I don’t think there is a big enough market for the kinds of institutions, such as high-quality newspapers, that can counter fake news, plus fake news pays.”

A postdoctoral scholar at a major university’s center for science, technology and society predicted, “Some advances will be made in automatically detecting and filtering ‘fake news’ and other misinformation online. However, audience attention and therefore the financial incentives are not aligned to make these benefits widespread. Even if some online services implement robust filtering and detection, others will happily fill the void they leave, pandering to a growing audience willing to go to ‘alternative’ sites to hear what they want to hear.”

David Brake, a researcher and journalist, pointed out, “The production and distribution of inaccurate information has lower cost and higher incentives than its correction does.”

Mark Lemley, a professor of law at Stanford University, wrote, “Technology cannot easily distinguish truth from falsehood, and private technology companies don’t necessarily have the incentive to try.”

Darel Preble, president and executive director at the Space Solar Power Institute, commented, “Even the technical media … is substituting ad hominem attacks (or volume) and repetition for technical accuracy on complex problems. Few people are familiar with or want to risk their paycheck to see these problems fixed, so these problems will continue growing for now.”

Amali De Silva-Mitchell, a futurist, replied, “There is political and commercial value in misinformation. Absolutely ethical societies have never existed. Disclosures are critical and it will be important to state the source of news as being human or machine, with the legal obligation remaining with the human controller of the data.”

Some said the information environment is impossible to fully tame due to the human drive to continually innovate, competing to upgrade, monetize and find new ways to assert power.

Alan D. Mutter, media consultant and faculty at the graduate school of journalism at the University of California, Berkeley, replied, “The internet is, by design, an open and dynamically evolving platform. It’s the Wild West, and no one is in charge.”

Anonymous respondents commented: 

  • “‘Fake news’ is just the latest incarnation of propaganda in late capitalism.”
  • “The profit motive will be put in front of value. The reliance of corporations on algorithms that allow them to do better targeting leads to greater fragmentation and greater possibility for misinformation.”
  • “People have to use platforms for internet communication. The information environment is managed by the owners of these platforms who may not be so interested in ethical issues.”
  • “We cannot undo the technology and economics of the current information environment, nor can we force those who are profiting from misinformation to forego their monetary gains.”

Human tendencies and infoglut drive people apart and make it harder for them to agree on ‘common knowledge.’ That makes healthy debate difficult and destabilizes trust. The fading of news media contributes to the problem

Many of these experts said one of the most serious problems caused by digital misinformation and the disruption of public support for traditional news media models is the shrinkage of the kind of commonly embraced facts that are the foundation of civil debate – a consensus understanding of the world. An anonymous respondent predicted, “The ongoing fragmentation of communities and the lack of a common voice will lead to lower levels of trust.”

A professor of education policy commented, “Since there is no center around which to organize truth claims (fragmented political parties, social groups, identity groups, institutional affiliations, fragmentation of work environments, increasing economic precarity, etc.) … there are likely to be more, not fewer, resources directed at destabilizing truth claims in the next 10 years.”

An historian and former legislative staff person based in North America observed, “A major issue here is that what one side believes is true, is not the same as what the other side believes. Example: What Yankees and Confederates believed about the Civil War has never been the same, and there are differing social and cultural norms in different ages, times, regions and religions that have different ‘takes’ on what is right and proper behavior. We are facing an almost existential question here of ‘what is truth?’”

Philip Rhoades, retired IT consultant and biomedical researcher with the Neural Archives Foundation, said, “The historical trend is for information to be less reliable and for people to care less.”

A professor of rhetoric and communication noted, “People can easily stay in their own media universe and never have to encounter ideas that conflict with their own. Also, the meshing of video and images with text creates powerful effects that appeal to the more rudimentary parts of the brain. It will take a long time for people to adapt to the new media environment.”

A professor of journalism at New York University observed, “The fragmentation of the sources of media – and increasing audience participation – meant that it was no longer just canonical sources that could get their voices amplified.”

A number of respondents challenged the idea that any individuals, groups or technology systems could or should “rate” information as credible or not.

A professor of political economy at a U.S. university wrote, “I don’t think there is a clear, categorical distinction between ‘false’ news and the other kind. Some falsehoods have been deliberately fostered by elites for purposes of political management – the scope has widened dramatically in recent years.”

Greg Shatan, partner at Bortstein Legal Group based in New York, replied, “Unfortunately, the incentives for spreading false information, along with the incentives for destabilizing trust in internet-based information, will continue to incentivize the spread of ‘fake news.’ Perversely, heightened concerns about privacy and anonymity are counterproductive to efforts to increase trust and validation.”

A project manager for the U.S. government responded, “It is going to get much worse before it gets better. There is no sign that people are willing to work at what we agree on; most would prefer to be divisive and focus on differences.”

An anonymous research scientist said, “I do not buy the assumption that information, ‘accurate’ or not, is the basis of political or – in fact – any action. I actually think it never has been. Yes, this is the story we like to tell when justifying actions vis-a-vis everyone else. It helps us present ourselves as rational, educated and considerate human beings. But no, in practice we do and say and write and report whatever seems reasonable in the specific situation for the specific purposes at hand. And that is OK, as long as others have the opportunity to challenge and contest our claims.”

Some respondents noted that trust has to be in place before people can establish any sort of shared knowledge or begin to debate and decide the facts on which decisions can be based.

An anonymous internet activist/user based in Europe commented, “Who can determine what is or is not fake news?”

A principal research scientist based in North America commented, “The trustworthiness of information is a subjective measure as seen by the consumer of that information.”

An anonymous futurist/consultant said, “Technology and platform design is only one part of the problem. Building trust and spreading information-quality skills takes time and coordination.”

A director with a digital learning research unit at a major university on the U.S. West Coast said, “As the technology evolves, we will find ways (technologically) and also culturally to become savvier about the way in which we manage and define ‘trustworthiness.’”

A small segment of society will find, use and perhaps pay a premium for information from reliable, quality sources. Outside of this group ‘chaos will reign’ and a worsening digital divide will develop

A deeper digital divide was predicted by some respondents who said that 10 years from now those who value accurate information and are willing to spend the time and/or money to get it will separate from those who do not. Alex ‘Sandy’ Pentland, member of the U.S. National Academy of Engineering and the World Economic Forum, predicted of the information environment, “Things will improve, but only for the minority willing to pay subscription prices.”

An anonymous journalist observed, “One of today’s most glaring class divides is between those who are internet-savvy and so skilled at evaluating different sources and information critically that it’s almost instinctive/automatic, and those who have very limited skills in that department. This divide is usually glaringly obvious in anyone’s Facebook feed now that such a large portion of the population is on Facebook, and the lack of ability to evaluate sources online critically is most common in older persons with limited education and/or limited internet proficiency – and can sometimes also be observed in young people with the same attributes (limited education/internet proficiency).”

Garland McCoy, president of the Technology Education Institute, predicted, “As most of us know, there is the public internet, which operates as a ‘best effort’ platform, and then there are private internets that command a premium because they offer much more reliable service. So it will be with the ‘news’ and information/content on the internet. Those who have the resources and want fact checking and vetting will pay for news services, which exist today, that charge a subscription and provide, for the most part, vetted/authenticated, factual ‘news.’ Those who do not have the resources or who don’t see the ‘market value’ will take their chances exploring the world of uncensored, unfiltered and uncontrolled human mental exertion.”

Meamya Christie, user-experience designer with Style Maven Linx, replied, “There will be a division in how information is consumed. It will be like a fork in the road. People will have a choice to go through one portal or another based on their own set of core values, beliefs and truths.”

A strategist for an institute replied, “Trust in 2027 will be only for the elites who can pay, or for the most-educated people.”

A fellow at a UK-based university said, “I don’t think a technological or top-down solution can ‘fix’ the information environment without addressing a range of root issues relating to democratic disenfranchisement, deteriorating education and anti-intellectualism.”

A senior research fellow working for the positive evolution of the information environment said, “Only a small fraction of the population (aged, educated, affluent – i.e., ready to pay for news) will have good, balanced, fair, accurate, timely, contextualized information.”
