The Future of Truth and Misinformation Online

Theme 3: The information environment will improve because technology will help label, filter or ban misinformation and thus upgrade the public’s ability to judge the quality and veracity of content

Many respondents who said they hope for or expect an improvement in the information environment 10 years from now mentioned ways in which new technological and verification solutions might be implemented. A number of these proposed solutions rest on the hope that technology will be created to evaluate content – making it “assessable.”

Andrea Forte, associate professor at Drexel University, said, “As mainstream social media take notice of information quality as an important feature of the online environment, there will be a move towards designing for what I call ‘assessability’ – interfaces that help people appropriate assessments of information quality.”

Filippo Menczer, professor of informatics and computing at Indiana University, noted, “Technical solutions can be developed to incorporate journalistic ethics into social media algorithms, in a way similar to email spam filters.”
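
Menczer’s spam-filter analogy maps onto standard text-classification tooling. Below is a minimal sketch, assuming scikit-learn and a handful of invented training examples; a production system would draw on source history, propagation patterns and metadata rather than word counts alone.

```python
# A minimal sketch of the spam-filter analogy: a toy text classifier
# trained on invented examples. Labels: 1 = low-credibility, 0 = credible.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

posts = [
    "SHOCKING miracle cure doctors don't want you to know",
    "You won't BELIEVE what this celebrity just said",
    "City council approves budget after public hearing",
    "Peer-reviewed study finds modest effect of new treatment",
]
labels = [1, 1, 0, 0]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(posts, labels)

# Score a new post; real systems would also weigh source history,
# network signals and metadata, not just the words used.
print(model.predict(["Doctors HATE this one weird miracle trick"]))  # -> [1]
```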

Scott Fahlman, professor emeritus of AI and language technologies at Carnegie Mellon University, commented, “For people who are seriously trying to figure out what to believe, there will be better online tools to see which things are widely regarded as true and which have been debunked.”

Robert Bell, co-founder of the Intelligent Community Forum, commented, “Technology moves fast and humans adapt more slowly, but we have a proven capability to solve problems we create with technology.”

Joanna Bryson, associate professor and reader at University of Bath and affiliate with the Center for Information Technology Policy at Princeton University, responded, “We are in the information age, and I believe good tools are likely to be found in the next few years.”

David J. Krieger, director of the Institute for Communication & Leadership in Lucerne, Switzerland, commented, “The information environment will improve because a data-driven society needs reliable information, and it is possible to weed out the false information.”

Andrew McStay, professor of digital life at Bangor University in Wales, wrote, “Undoubtedly, fake news and weaponised information will increase in sophistication, but so will attempts to combat it. For example, the scope to analyse at the level of metadata is a promising opportunity. While it is an arms race, I do not foresee a dystopian outcome.”

Clifford Lynch, director of the Coalition for Networked Information, noted, “The severity of the problem has now been recognized fairly widely, and while I expect an ongoing ‘arms race’ in the coming decade, I think that we will make some progress on the problem.”

A CEO and research director noted, “There are multiple incentives, economic and political, to solve the problem.”

An anonymous respondent said, “The public will insist that online platforms take more responsibility for their actions and provide more tools to ensure information veracity.”

Likely tech-based solutions include adjustments to algorithmic filters, browsers, apps and plug-ins and the implementation of ‘trust ratings’

Matt Mathis, a research scientist at Google, responded, “The missing concept is an understanding of the concept of ‘an original source.’ For science, this is an experiment; for history (and news), an eyewitness account by somebody who was (verifiably) present. Adding ‘how/why we know this’ to non-original sources will help the understanding that facts are verifiable.”
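
Mathis’s “how/why we know this” annotation can be pictured as a provenance trail attached to every derived report. The sketch below is a hypothetical data structure invented for illustration, not anything Mathis proposes concretely.

```python
from dataclasses import dataclass, field

@dataclass
class Source:
    description: str   # e.g., an experiment or an eyewitness account
    verification: str  # how the source's presence/validity was established

@dataclass
class Report:
    text: str
    original: Source
    chain: list[str] = field(default_factory=list)  # intermediate retellings

    def how_we_know(self) -> str:
        hops = " -> ".join(self.chain + [self.original.description])
        return f"{self.text!r} traces to: {hops} ({self.original.verification})"

report = Report(
    text="Bridge closed after inspection",
    original=Source("city engineer's inspection report", "author verifiably on site"),
    chain=["wire service summary", "local newspaper story"],
)
print(report.how_we_know())
```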

Federico Pistono, entrepreneur, angel investor and researcher with Hyperloop TT, commented, “Algorithms will be tailored to optimize more than clicks – as this will be required by advertisers and consumers alike – and deep learning approaches will improve.”
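
Pistono’s point about optimizing for more than clicks can be illustrated with a toy ranking function that blends engagement with a credibility signal. The weights, and the assumption that a credibility score exists at all, are inventions for the sketch.

```python
# Toy ranking score blending engagement with an assumed credibility signal.
# w_cred sets how much reach is traded for reliability; 0 reproduces
# pure click optimization.
def rank_score(click_rate: float, credibility: float, w_cred: float = 0.6) -> float:
    """Both inputs in [0, 1]; returns a feed-ordering score."""
    return (1 - w_cred) * click_rate + w_cred * credibility

items = {
    "clickbait rumor": rank_score(click_rate=0.9, credibility=0.1),
    "verified wire report": rank_score(click_rate=0.4, credibility=0.9),
}
print(max(items, key=items.get))  # -> 'verified wire report'
```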

Tatiana Tosi, netnographer at Plugged Research, commented, “The information environment will improve due to new artificial-intelligence bots that will verify the information. This should balance privacy and human rights in the automated environment.”

A web producer/developer for a U.S.-funded scientific agency predicted, “The reliance on identity services for real-world, in-person interactions, which start with trust in web-based identification, will force reliability of information environments to improve.”

An associate professor of business at a university in Australia commented, “Artificial intelligence technologies are advancing quickly enough to create an ‘Integrity Index’ for news sources even down to the level of individual commentators. Of course, other AI engines will attempt to game such a system. I can envisage an artificial blogger that achieves high levels of integrity before dropping the big lie just in time for an election. Big lies take a day or more to be disproven so it may just work, but the penalty for a big lie, or any lie, can be severe so everyone who gained from the big lie will be tainted.”
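
One way to picture such an “Integrity Index” is as a recency-weighted share of a source’s fact-checked claims that held up, so a fresh “big lie” drags the score down quickly, matching the respondent’s point about penalties. The half-life parameter and data below are illustrative assumptions, not a published metric.

```python
# Illustrative "Integrity Index": recency-weighted accuracy of checked claims.
import math

def integrity_index(checks: list[tuple[float, bool]], half_life_days: float = 30.0) -> float:
    """checks: (age_in_days, verified_true) pairs; returns weighted accuracy in [0, 1]."""
    decay = math.log(2) / half_life_days
    num = den = 0.0
    for age, ok in checks:
        w = math.exp(-decay * age)  # recent checks count more
        num += w * ok
        den += w
    return num / den if den else 0.0

# A source with a clean record until a fresh falsehood (age: 1 day):
print(round(integrity_index([(90, True), (60, True), (30, True), (1, False)]), 3))
# -> 0.472: one well-timed lie sharply taints an otherwise high score
```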

A distinguished engineer for one of the world’s largest networking technologies companies commented, “Certificate technologies already exist to validate a website’s sources and are in use for financial transactions. These will be used to verify sources for information in the future. Of course, there will always be people who look for information (true or false) that validates their biases.”
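
The engineer’s point can be sketched with public-key signatures, the primitive underneath certificate technologies. The example below uses the third-party Python `cryptography` package; in a real deployment the publisher’s public key would be bound to its identity through an X.509 certificate chain rather than generated in place.

```python
# Sketch of signed content: a publisher signs an article, a reader verifies it.
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

publisher_key = ed25519.Ed25519PrivateKey.generate()
article = b"Officials confirmed the levee held through the storm."
signature = publisher_key.sign(article)

# Reader side: check the article against the publisher's public key.
public_key = publisher_key.public_key()
try:
    public_key.verify(signature, article)
    print("signature valid: content matches what the publisher signed")
except InvalidSignature:
    print("signature invalid: content altered or not from this publisher")
```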

Ayaovi Olevie Kouami, chief technology officer at the Free and Open Source Software Foundation for Africa, said, “The actual framework of the internet ecosystem could have a positive impact on the information environment by setting up all the requisite institutions, beginning with DNSSEC, IXPs, FoE, CIRT/CERT/CSIRT, etc.”

Jean Paul Nkurunziza, a consultant based in Africa, commented, “The expected mass adoption of the IPv6 protocol will allow every device to have a public IP address and then allow the tracking of the origin of any online publication.”
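
On the IPv6 point, Python’s standard `ipaddress` module can at least distinguish globally routable addresses from link-local or reserved ones; whether public addressing would really enable origin tracking is the respondent’s claim, not something the code can settle.

```python
# Classify a few IPv6 addresses by global routability (stdlib only).
import ipaddress

for text in ("2001:4860:4860::8888",  # a public resolver's global address
             "fe80::1",               # link-local, never routed off-subnet
             "2001:db8::1"):          # documentation prefix, never routed
    addr = ipaddress.ip_address(text)
    print(f"{text}: IPv{addr.version}, global={addr.is_global}")
```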

Mark Patenaude, vice president for innovation, cloud and self-service technology at ePRINTit Cloud Technology, replied, “New programming tech and knowledge will create a new language that will teach us to recognize malicious, false, misleading information by gathering all news and content sources and providing us with accurate and true information.”

Hazel Henderson, futurist and CEO of Ethical Markets Media, said, “Global ethical standards and best practices are being developed in the many domains affected. New verification technologies, including blockchain and smart contracts, will help.”
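
The verification role Henderson assigns to blockchain rests on a simple primitive: a hash chain in which each record commits to its predecessor, so altering any earlier entry invalidates everything after it. Below is a toy stdlib sketch of that primitive; real smart-contract platforms add consensus and code execution on top.

```python
# Toy hash chain: tampering with any block breaks verification downstream.
import hashlib, json

def add_block(chain: list[dict], payload: str) -> None:
    prev = chain[-1]["hash"] if chain else "0" * 64
    body = {"payload": payload, "prev": prev}
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append(body)

def verify(chain: list[dict]) -> bool:
    prev = "0" * 64
    for block in chain:
        body = {"payload": block["payload"], "prev": block["prev"]}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if block["prev"] != prev or block["hash"] != expected:
            return False
        prev = block["hash"]
    return True

chain: list[dict] = []
add_block(chain, "source attested 2017-08-01")
add_block(chain, "fact-check passed 2017-08-02")
print(verify(chain))            # True
chain[0]["payload"] = "edited"  # tamper with history
print(verify(chain))            # False
```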

An anonymous respondent based in North America who expressed confidence that things may improve listed a series of technologies likely to be effective, writing: “Artificial intelligence, machine learning, exascale computing from everywhere, quantum computing, the Internet of Things, sensors, big data science and global collaborative NREN (National Research and Education Network) alliances.”

An anonymous respondent based in Europe warned, “Technical tools and shields to filter and recognize manipulations will be more effective than attempts at education in critical thinking for end users.”

Anonymous survey participants also responded:

  • “Relatively simple steps and institutional arrangements can minimize the malign influence of misinformation.”
  • “Machines are going to get increasingly better at validating accuracy of information and will report on it.”
  • “Artificial intelligence technologies will advance a lot, making it easier for fake news to be discovered and identified.”
  • “Technology for mass verification should improve as will the identification of posters. Fakers will still exist but hopefully the half-life of their information will shrink.”
  • “Things will improve due to [better tracking of the] provenance of data and security and privacy laws.”

Regulatory remedies could include software liability law, required identities and the unbundling of social networks like Facebook

A number of respondents said that evidence suggests people and internet content platform providers can’t solve this problem and argued there will be pressure for regulatory reforms that hold consistently bad actors responsible.

An associate professor at a major Canadian university said, “As someone who has followed the information-retrieval community’s development over the past 15 years – dealing with spam, link farms, etc. – given a strong enough incentive, technologies will advance to address the challenge of misinformation. This may, however, be unevenly distributed, and may be more effective in domains such as e-business where there is a financial incentive to combat misinformation.”

An anonymous respondent wrote, “I hope regulators will recognise that social media companies are publishers, not technology companies, and therefore must take responsibility for what they carry. Perhaps then social media companies will limit the publication of false advertising and misinformation.”

A professor of media and communication based in Europe said, “It will be very difficult to assign penalties to culprits when platforms deny responsibility for any wrongdoing by their ‘users.’ Accountability and liability should definitely be assumed by platform operators who spread news and information, regardless of its source and even if unwittingly. Government has very limited power to [counter] ‘fake news’ or ‘misinformation’ but it can definitely help articulate which actors in society are responsible.”

A senior vice president for government relations predicted, “Governments should and will impose additional obligations on platforms to increase their responsibility for content on their services.”

One possibility that a notable share of respondents mentioned is the requirement of an authenticated ID for every user of a platform. An anonymous respondent said, “Bad actors should be banned from access, but this means that a biography or identification of some sort would be necessary for all participants.”

Those in support of requiring internet users to provide a real identity when participating online also mentioned the establishment of a reputation system. A partner in a services and development company based in Switzerland commented, “A bad reputation is the best penalty for a liar. It is the job of society to organize itself in a way to make sure that the bad reputation is easily visible. It should also extend to negligence and any other related behaviour allowing the spread of misinformation. Penal law alone is too blunt a tool and should not be regarded as a solution. Modern reputation tools (similar in approach to what financial audits and ratings have achieved in the 20th century) need to be built and their use must become an expected standard (just like financial audits are now a legal requirement).”
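
The reputation tools this respondent describes could start from something as simple as a running score of how a participant’s claims fare under verification. The update rule below (an exponential moving average) is an invented placeholder for whatever a real audit-like system would use.

```python
# Toy reputation score: an exponential moving average of verification outcomes.
def update_reputation(score: float, claim_verified: bool, weight: float = 0.1) -> float:
    """Score stays in [0, 1]; each checked claim nudges it toward 1 or 0."""
    return (1 - weight) * score + weight * (1.0 if claim_verified else 0.0)

rep = 0.5  # a new participant starts at a neutral score
for outcome in (True, True, False, False, False):
    rep = update_reputation(rep, outcome)
print(round(rep, 3))  # repeated debunkings make the bad reputation visible
```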

An anonymous activist/user wrote, “Loss of anonymity might be a way of ensuring some discipline in the system, yet the institutions which would be deciding such punishments today have no credibility with most of the population.”

An anonymous ICT for development consultant and retired professor commented, “Government best plays a regulating role and laws are punitive; so both regulation and laws should be stringently applied.”

A post-doctoral fellow at a center for governance and innovation replied, “Jail time and civil damages should be applied where injuries are proven. Strictly regulate non-traditional media, especially social media.”

An associate professor at Brown University wrote, “Essentially we are talking about the regulation of information, which is nearly impossible since information can be produced by anyone. Government can establish ethical guidelines, perhaps similar to the institutional review boards that regulate scientific research. Or it can be done outside government, like a better business bureau.”

An anonymous respondent based in Europe wrote, “Publicity, monetary fines and definitely jail terms, depending on the scope and consequences of the spread of false information. In terms of prevention, the government’s role should not be different than in any other area: sound legal regulation, strengthened capacities to identify false information and stop it at an early stage using legal mechanisms, education and awareness-raising for citizens, as well as higher ethical standards (or zero tolerance) for public officials walking on the edge.”

A postdoctoral scholar based in North America wrote, “If we are talking about companies such as Facebook, I do think there is room for discussion on the federal level of their responsibility as, basically, a private utility. Regulation shouldn’t be out of the question.”

A legal researcher based in Asia/Southeast Asia said, “Stop them from using any internet. Government should create regulations for internet companies to prevent the distribution of false information.”

A professor of humanities said, “Penalties are a nice idea, but who will decide which instances of ‘fake news’ require greater penalties than others? The bureaucracy to make these decisions would have to be huge.”
