Numbers, Facts and Trends Shaping Your World

The Future of Digital Spaces and Their Role in Democracy

2. Public digital spaces will be improved: Tech can be fixed, governments and corporations can reorient incentives and people can band together to work for reform

A notable share of the most hopeful respondents to this canvassing declared that in order to serve the public interest and improve digital spaces, the tech industry, government and civil society need to focus on achieving an ethical tech design that values people over profit. They said that this – combined with vastly improved individual digital literacy globally, a much-upgraded investment in accurate, fair journalism and the closing of the digital divide – is crucial to bringing about the change needed for a better future with new, more effective digital-age social norms. Some also said support for accurate journalism and global access to fact-based public information sources is essential to help citizens responsibly participate in democratic self-governance.

While some respondents said the primary responsibility for improvements in the digital public sphere falls solely upon the technology industry or solely upon government or upon civil society, many said that real change requires human leadership across all sectors of society to bring it all together.

David J. Krieger, director of the Institute for Communication and Leadership, based in Lucerne, Switzerland, said, “What is needed in the face of global problems such as climate change, migration, a precarious and uncontrolled international finance system, the ever-present danger of pandemics, not to speak of a Hobbesian ‘state of nature’ or a geopolitical ‘war of all against all’ on the international level, is a viable and inspiring vision of a global future. The global network society is a data-driven society. The most important reforms or initiatives we should expect are those that make available more data of better quality to more people and institutions. Here the primary values and guiding norms are connectivity, flow of information, encouragement of participation in production and use of information, and transparency. Those in business, politics, civil society organizations and the public should focus on practical ways in which to implement these values. To the extent that they are implemented it will become possible to mitigate against the social harms caused by the economy of attention in media (click bait, filter bubbles, fake news, etc.), political opportunism, and the lack of social responsibility by business. Decisions on all levels and in all areas – business, education, health care, science and even politics – should be made on the basis of evidence and not on the basis of status, privilege, gut feelings, bias, personal experience, etc. Data-driven decision-making can, in many situations, be automated. This requires the most complete and reliable data on everything and everyone as possible.”

Mark Surman, executive director of the Mozilla Foundation, a leading advocate for trustworthy artificial intelligence (AI), digital privacy and the open internet, wrote, “It is my optimistic side that says ‘yes,’ we can improve. This is far from a certainty, but right now we have governments and a public who actively want to point internet spaces in a better direction. And you have a generation of young developers and startup founders who want to do something different – something more socially beneficial – than what they see from big tech. If these people can rally around a practical and cohesive vision of what ‘a better internet’ looks like, they have the resources, power and smarts to make it a reality.”

“We are not powerless in the face of our technology. We can choose the tech we find acceptable and we can mandate changes to make it serve us better rather than worse.”


David Weinberger, senior researcher at Harvard’s Berkman Center for Internet and Society

David Weinberger, senior researcher at Harvard’s Berkman Center for Internet and Society, commented, “These technologies are complex dynamic systems embedded in the complex dynamic system that we call life on Earth. I expect to see more concern about how the current systems are tearing us apart, along with a continuation of the underplaying of how they are binding us together. … We are not powerless in the face of our technology. We can choose the tech we find acceptable and we can mandate changes to make it serve us better rather than worse. Of course, complex dynamic systems are often – usually  – unpredictable, nonlinear and chaotic, but because we humans can exert control if we choose to, I have to believe our social tech will get better at serving our human needs and goals. I do want to note that it is entirely possible that our ideas about what constitutes useful and helpful discourse are being changed by our years on social media. Because this is a change in values, what looks like negative behavior now may start to look positive. By this I mean that the social media ways of collaboratively making sense of our world may start to look essential and look like the first time we humans have had a scalable way of building meaning. If we are able to get past the existential threat posed by the ways our online social engagements can reinforce deeply dangerous beliefs, then I have hope that – with the aid of 2035’s tech – we’ll be able to make even more sense of our world together in diverse ways that have well-traveled links to other viewpoints.”

Rob Reich, a professor focused on public policy, ethics and technology who also serves as associate director of the Human-Centered Artificial Intelligence initiative at Stanford University, predicted, “In the absence of a significant change in the ethos of Silicon Valley, the regulatory indifference/logjam of Washington, D.C., and the success of the current venture capital funding model, we should not expect any significant change to digital spaces, and in 2035 our world will be worse. Our collective and urgent task is to address problems and make interventions in all three of these core elements: the ethos of Silicon Valley, the regulatory savvy and willpower of D.C. and the European Union, and the funding model of venture capitalists.”

A selection of respondents’ comments on the broad topic of the people-driven change needed is organized over the next section under these themed subheadings: 1) Some tech design will focus on pro-social and pro-civic digital spaces; 2) Government regulation plus less-direct “soft” pressure by government will help share corporations’ adoption of more ethical behavior; 3) The general public’s digital literacy will improve and people’s growing familiarity with technology’s dark sides will force change; 4) People will evolve and improve their use of digital spaces and make them better; 5) New internet governance structures will appear that draw on collaborations among citizens, businesses and governments: 6) Better civic life online will arise as communities find ways to underwrite accurate, trustworthy public information – including journalism.

Social media algorithms are the first thing to fix

A large share of respondents singled out algorithmic intermediaries used by big tech firms as the most significant problem to overcome. They note that algorithms privilege user engagement with social media and profit over the quality of content that social media users see. They argue that those incentives for user engagement have replaced journalism and other traditional democratic intermediaries in shaping the character of knowledge-sharing in digital spaces and discourse.

They point out that the surveillance-based business model of digital capitalism enabled a new class of mega-rich individuals and corporations to control the primary infrastructures of the public sphere and wield enormous lobbying power over government. These experts urge that big tech should focus on solving emerging problems by implementing more ethical applications of artificial intelligence (AI) to improve online spaces that are important to democracy and the public good.

Don Heider, executive director of the Markkula Center for Applied Ethics at Santa Clara University, wrote, “Technology could be designed to promote the common good and human well-being. This is a decision each organization must make in regard to what it produces. Whether or not to promote the common good and human well-being is also a decision each citizen must make each time they use any technology. Human designers and engineers make a series of choices about how technology will work, what behaviors will be allowed, what behaviors will not be allowed and hundreds of other basic decisions which are baked into technology and are often opaque to users. Then human users take that technology and use it in myriad ways, some helpful, some harmful, some neutral.

“Governments and regulatory groups can require certain features in technology, but ultimately have great difficulty in controlling technology. That’s why we spend time thinking about ethical decisions and teaching folks how to incorporate ethics into decision making, so individuals and companies and governments can consider more carefully the effect of technology on humans.”

Chris Arkenberg, research manager at Deloitte’s Center for Technology Media and Communications, predicted, “I do believe the largest social media services will continue spending to make their services more appealing to the masses and to avoid regulatory responses that could curb their growth and profitability. They will look for ways to support public initiatives toward confronting global warming, advocating for diversity and equality and optimizing our civic infrastructure while supporting innovators of many stripes. To serve the public good, social media services will likely need to reevaluate their business models, innovate on identity and some degree of digital embodiment, and scale up automating content moderation in ways that may challenge their business models. Regulators will likely need to be involved to require more guardrails against misinformation/disinformation, memetic ideologies, and exploitation of the ad model for microtargeted persuasion. However, this discussion often overlooks the reality that people have flocked to social media and continue to use it. Surveys continue to show that most users don’t change their behaviors, and when things become problematic they often want regulators to hold the companies accountable rather than taking responsibility themselves. So, part of this may simply be about maturing digital literacy.”

Amy Zalman, futures strategist and founder of Prescient Foresight, wrote, “Positive change could come from: 1) Engineering/programming options and choice into designing digital spaces differently so that those that work according to recommender systems or predictive algorithms open new spaces up for people rather than closing them into their preferences and biases. 2) Voluntary accountability by technology platform CEOs and others who profit from the internet/digital spaces. This accountability will come about, if it does, from consistent nudging by government leaders, other business leaders and the public. I do not believe that the public sector can impose these options through law or regulation very effectively right now, except at blunt levels. 3) Literacy training.”

Eric Goldman, co-director of the High-Tech Law Institute at Santa Clara University School of Law, observed, “In 15 years, I expect many user-generated content services will have figured out ways to mediate conversations to encourage more pro-social behavior than we experience in the offline world.”

Jenny L. Davis, a senior lecturer in sociology at the Australian National University, said, “Although any good/bad question obscures the complex dynamics of evolving sociotechnical systems, it is true that the speed of technological development fundamentally outpaces policies and regulations. By 2035, I expect platforms themselves to be better regulated internally. This will be motivated, indeed necessary, to sustain public support, commercial sponsorships and a degree of regulatory autonomy. I also expect tighter policies and regulations to be imposed upon tech companies and the platforms they host.”

An expert on media and information policy commented, “Several forces and initiatives will start to mitigate the problem thanks to an increasing awareness of the heterogeneous positive and negative impacts of digital spaces and digital life on individuals, communities and society. For one, technology designers will increasingly reconsider the behavioral and social effects of their choices and stronger ethical considerations will start to change the technological architectures of digital spaces. An increasing number of individuals will argue for the need of a technology ethics that can govern digital spaces and digital life. New initiatives and businesses will emerge that use ethics-informed design, creating alternative digital spaces in which individuals and groups can interact. We will increasingly realize that the effects of digital technology are heterogeneous and context-specific. Hence questions such as ‘does the internet increase or reduce depression?’ will be recognized as overly simple, as average statistics do not reveal much about a heterogeneous population. Once this is recognized, it is possible to advance technology designs and user conventions in ways that mitigate undesirable effects.”

A professor and researcher who studies the media’s role in shaping people’s political attitudes and behaviors said, “By 2035 tech leaders will be more aware of the problematic aspects of the digital sphere and design systems to work against them. There will be greater government regulation and more public awareness of the problematic aspects of digital life. There will be more choice in digital spaces. There will be less incivility and mis- and disinformation. There will still be problems with bringing diverse people together to cooperate.”

A leading expert in human-computer interfaces at a major global tech company urged, “Ethicists at large tech companies need to have actual power, not symbolic power. They can advise, but rarely (ever?) actually stop anything or cause real practices to change.”

Alf Rehn, a professor of innovation, design and management at the University of Southern Denmark, wrote, “We need to assume that in the coming 10-15 years, we will learn to harness digital spaces in better, less polarizing manners. In part, this will be due to the ability to use better AI driven for filtering and thus developing more-robust digital governance. … There will of course always be those who would weaponize digital spaces, and the need to be vigilant isn’t going to go away for a long while. Better filtering tools will be met by more-advanced forms of cyberbullying and digital malfeasance, and better media literacy will be met by more elaborate fabrications – so all we can do is hope that we can keep accentuating the positive.”

“We need to assume that in the coming 10-15 years, we will learn to harness digital spaces in better, less polarizing manners. In part, this will be due to the ability to use better AI driven for filtering and thus developing more-robust digital governance.”


Alf Rehn, a professor of innovation, design and management at the University of Southern Denmark

Kate Klonick, a law professor at St. John’s University whose research has focused on private internet platforms’ policies and social responsibilities, responded, “Norms will coalesce around speech and harms on platforms. I think political leaders will have little role in this happening. I see tech leaders and academics playing a role in shaping and identifying where the norms come out and where effective policy can land. I think that users in 2035 will have more control over what they see, hear and read online, and, also, in some ways there will be less control by consolidation of major technologies.”

A scientist and expert at data management who works at Microsoft said, “Facebook, Twitter and other social media companies are investing heavily in flagging hate speech and disinformation. Hopefully, they’ll have the legal option to act on them.”

A professor emeritus of engineering predicted, “Responsible internet companies will rise. Irresponsible internet companies will become the home to a small number of dangerous organizations.”

Some tech design will focus on pro-social and pro-civic digital spaces

A number of respondents made specific suggestions about what the tech sector can do to improve the digital public sphere. They urged that tech business methods and technology design be changed and oriented toward public good – seeing people as more than mere “users.” Some noted that well-applied artificial intelligence (AI) and the implementation of decentralized and distributed technologies may help achieve better content moderation or help create decommodified social spaces. They encourage the creation of digital spaces where rage-inducing or manipulative engagement is deemphasized and civil discourse is encouraged. For instance, they see advances leading to ad hoc birds-of-a-feather networks where like-minded people join together to discuss issues and solve problems.

Henning Schulzrinne, an Internet Hall of Fame member and former CTO for the Federal Communications Commission, wrote, “Some subset of people will choose fact-based, civil and constructive spaces, others will be attracted to or guided to conspiratorial, hostile and destructive spaces. For quite a few people, Facebook is a perfectly nice way to discuss culture, hobbies, family events or ask questions about travel – and even to, politely, disagree on matter politic. Other people are drawn to darker spaces defined by misinformation, hate and fear. All major platforms could make the ‘nicer’ version the easier choice. For example, I should be able to choose to see only publications or social media posts that rely on fact-checked, responsible publications. I should be able to avoid posts by people who have earned a reputation of offering low-quality contributions, i.e., trolls, without having to block each person individually. This might also function as the equivalent of self-exclusion in gambling establishments. (I suspect grown children or spouses of people falling into the vortex of conspiracy theories would want such an option, but that appears likely to be difficult to implement short of having power of attorney.) Social media platforms have options other than a binary block-or-distribute, such as limiting distribution or forwarding. This might, in particular, be applied to accounts that are unverified. There are now numerous address-verification systems that could be used to ensure that a person is indeed, say, living in the United States rather than at a troll farm in the Ukraine.”

Stephen Downes, an expert with the Digital Technologies Research Centre of the National Research Council of Canada, wrote, “The biggest change by 2035 will be the introduction of measures that allow for the creation of contexts. In an environment where every message can potentially be seen by everyone (this is known as ‘context collapse’) we’ve seen a trend toward negative and hostile messaging, as it is a reliable way to gain attention and followers. This has created a need, now being filled, for communication spaces that allow for the creation of local communities. Measuring online impact by high follower counts, which leads to the proliferation of negative impacts, will become a thing of the past. It should be noted that this impact is being created not by content moderation algorithms, which has been the characteristic response by social media (Facebook, Twitter, TikTok, etc.) but by changes in network topology. These changes can be hard-coded into the design of the system, as they are for example in platforms like Slack and Microsoft Teams. They can be a natural outcome of resource limitations and gateways, for example in platforms like Zoom.

“I think we may see algorithmically generated network topologies in the near future, perhaps similar to Google’s Federated Learning of Cohorts (FLoC) but with a more benign intention than the targeting of advertising. Making such a system work will require more than simply placing login or subscription barriers at the entrance to online communities; today’s social networks emerged as a response to the practice in the early 2000s, and trying it again is unlikely to be successful. A more promising approach may be found in a decentralized approach to online social networks, as found in (say) Mastodon or Diaspora. Protocols, such as ActivityPub and Webmention, have been designed around a system of federated social networks. However, the adoption barrier remains high and they’re too technical to reach widespread adoption.

“There needs to be a concerted effort to, first, embrace the idea of decentralized social networking, and second, ease the transition from toxic social media platforms to more-personable community networks. This will require that social and technology leaders embrace a certain level of standardization and interoperability that is not owned by any particular company (I recognize that this will be a challenge for the tech community). In particular, a mechanism for decentralized and (in some way) self-sovereign identity will be required, to on the one hand, enable portability across platforms, but on the other hand, ensure account security. Government can, and may be required to, play a role in such a mechanism. We are seeing signs that we’re moving toward such an approach.

“We can draw perhaps a parallel between what we might call ‘cognitive networking’ with what we already see in financial networking. A person can have a single authenticated identity, guaranteed by government, that moves across financial platforms. Their assets are mostly fluid with the system; they can move them from one platform to another and exchange them for goods and services. In cognitive networking, we see a similar design, however a person’s cognitive assets consist of activity data, content created by the person, lists and graphs, nonfungible tokens and other digital assets. The value of such assets is not measured financially but rather directly by the interactions generated in decentralized communities. In essence, the positive outcome from such a development is a transition from an economy based on mass to an economy based on connection and interactivity. This, if well executed, has the potential to address wealth inequality directly by limiting the utility of the accumulation of wealth, just as decentralized communities limit the utility of the accumulation of large numbers of followers, by making it too expensive to be able to extract value from low-return practices such as mass advertising and propaganda.

“Needless to say, there’s a lot that could go wrong. Probably the major risk is the concentration of platform ownership. Even if we achieve decentralized communities, if they depend on a given technology provider (for example, Slack or Microsoft) then there is a danger that this centralization will be monetized, creating again inequality and a concentration of wealth, and undermining the utility of cognitive networking. There needs to be a public infrastructure layer underpinning such a system, and the danger of public infrastructure being privatized is ongoing and significant.

“We might also get identity wrong. For example, how do we distinguish between individual actions and actions taken by a proxy, such as an AI agent? Failure to draw that distinction creates an advantage for individuals with access to masses of AI proxies, as they would be able to be simultaneously in every community. The impact would be very similar to the impact of targeted advertising in social network platforms such as Facebook, where it’s not possible to know what messages a given entity is targeting to different individuals and different communities, because each message is unique, and each message may be delivered by proxies whose origins cannot be detected or challenged by the recipient. These risks are significant because unless individuals are able to attain an equitable standing in a cognitive network, they are unable to participate in community decision-making, with the result that social decision-making will be conducted to the advantage of those with greater standing, just as occurs in financial networks today.”

John Battelle, co-founder and CEO of Recount Media, said, “Within 15 years, I believe the changes wrought by significantly misunderstood technologies – 5G and blockchain among them – will wrest control of the public dialogue away from our current platforms, which are mainly advertising-based business models.”

Heather D. Benoit, a senior managing director of strategic foresight, responded, “Digital life will (hopefully) be improved by a number of initiatives aimed at reducing the proliferation of misinformation and conspiracy theories online. Blockchain systems can help trace digital content to its source. Detection algorithms can identify and catalog deepfakes. Sentiment and bias analysis tools allow readers to better understand online content. A number of digital literacy programs are aiming to help educate the general public in online safety and critical thinking. One of the more interesting solutions I’ve seen are AIs built to break down echo chambers by exposing users to alternative viewpoints. There are a number of challenges to overcome – misinformation may just be one. But the fact that questions are being asked and solutions devised is a sign that digital life is maturing and that it should improve given enough time.”

Gus Hosein, executive director of Privacy International, commented, “Digital spaces are messy. They were supposed to be diverse, but to exist, the platforms work to gamify behaviour, promote consumption and ensure that people continue to participate. While much could be said of ‘old media,’ they weren’t capable of making people behave differently in order to consume. And so, we have small numbers of fora where this is taking place, and they dominate and shape behaviour. To minimise this … we would have to promote diversity of experience. Yes, we could promote alternative platforms but that hardly ever works. We could open infrastructure, but someone would still have to build and take responsibility for and secure it. The fact that alternative fora have all failed is possibly because singular fora weren’t ever supposed to be a thing in a diverse digital world that was to reflect the diversity in the world. The platforms need users to justify their financial existence, so that’s why they shape behaviour, promote engagement, ensure consumption. If they didn’t, then they wouldn’t exist. So, maybe the objective should be a promotion of diversity of experience that isn’t mediated by companies that need to benefit from human interaction. If so, that means we will have to be OK that there are fora where people are nearly solely up to ‘bad things’ because the alternative is fewer fora that replicate the uniformity of the current platforms.”

A tech CEO, founder and digital strategist said, “A positive transformation could occur if the large tech platforms can find ways to mitigate effects of propaganda and disinformation campaigns. How well can they manage the problem of disinformation while honoring the principle of free speech? Legislation could help, but much depends on the will and capabilities of the platform operators. Possible solutions might be to restrict the uses of data and enforce interoperability. Tech monopolies have evolved partly due to network effects, and these are widely held to be a substantial part of the problem. Addressing monopoly is partly a legal issue, partly a business issue and partly (in this case) an issue of technology.”

A strategy and research director wrote, “For a positive scenario to play out, wealth must be more evenly distributed. Because so much of today’s wealth is tied up in digital spaces and assets, how they evolve must include a redistribution. Initiatives could shift the value equation to cooperative/community-based rewards systems for information at the personal level. This is more likely to happen outside the current financial/reward system. So, cryptocurrency would likely play a role, and digital assets and exchanges would aggregate P2P [peer-to-peer]. A large trigger would be the open-source developments of biochemistry (CRISPR technologies) that enable gene-editing to sharply address the increasing tyranny of health care costs. By working to eliminate disease, cancers, etc., people will come to understand the value of sharing their genetic code despite the risks – pooling information for the common good means we learn faster than the government and the providers. When this trigger brings people back into learning, science may again have a role to play. Making positive change also requires a rethinking of educational access and some return to meritocracy for accelerated access so a broader swath of the population can again prosper.”

“People are taking more and more notice of the ways social media (in particular) has systematically disempowered them, and they are inventing and popularizing new ways to interact and publish content while exercising more control over their time, privacy, content data and content feeds.”

Susan Price, human-centered design innovator at Firecat Studio

Susan Price, human-centered design innovator at Firecat Studio, observed, “People are taking more and more notice of the ways social media (in particular) has systematically disempowered them, and they are inventing and popularizing new ways to interact and publish content while exercising more control over their time, privacy, content data and content feeds. An example is Clubhouse – a live-audio platform with features such as micropayments to content and value creators and a lively co-creation community that is pushing for accessibility features and etiquette mores of respect and inclusion. Another signal for change is the popularity of the documentary ‘The Social Dilemma,’ and the way its core ideas have been adopted in popular vernacular. The average internet user in 2035 will be more aware of the value of their attention and their content contributions due to platforms like Clubhouse and Twitter Spaces that monetarily reward users for participation. Emerging platforms, apps and communities will use fairer value propositions to differentiate and attract a user base. Current problems such as the commercial exploitation of users’ reluctance to read and understand terms of service will be solved by the arrival of competing products and services that strike a fairer bargain with users for their attention, data and time. Privacy, malware and trolls will remain an ongoing battleground; human ingenuity and lack of coordination between nations suggests that these larger issues will be with us for a long time.”

Brent Shambaugh, developer, researcher and consultant, predicted, “Decentralized and distributed technologies will challenge the monopolies. Many current tech leaders and politicians will become less relevant as they drown in their own hubris. The next 14 years will be turbulent in both the physical and digital worlds, but the average user will come out on top. Tech leaders and politicians who follow this trend will survive. I could believe the opposite, but I choose to be an optimist.”

Counterpoint: A portion of these experts believe the tech sector alone is not likely to lead the way to significant change

A share of respondents said they do not expect that people in the technology sector will play a leading role in helping to better the digital public sphere. Following is a selection of representative comments. Many more statements along these lines are included in a later chapter dealing with surveillance capitalism, datafication and manipulation.

Greg Sherwin, a leader in digital experimentation with Singularity University, said, “As long as humans are treated as round pegs forced to fit into the square holes in the mental models of the greatest technological influencers of digital spaces, negative side effects will accumulate with scale, and users who are forced into binary states will react in binary conflicts by design. As it is now, most of the leadership behind the evolution of digital spaces is weighted heavily toward those with a reductionist, linear view of humans and society. Technology cannot remove the human from the human. And while the higher bandwidth capabilities of some digital spaces stand to improve empathy and connection, these can be just as easily employed for negative social outcomes.”

Christopher Richter, a professor at Hollins University whose research focuses on communications processes in democracies, predicted, “I am confident that the interacting systems of design processes, market processes and user behaviors are so complex and so motivated by wealth concentration that they cannot and will not improve significantly in the next 14 years. Diagnosis, reform and regulation are all reactive processes. They are slow, and they don’t generate profit, while new-tech development in irrational market environments can be compared to a juggernaut, leading to rapid accumulation of huge amounts of wealth, the beneficiaries of which in turn rapidly become entrenched and devote considerable resources to actively resisting diagnosis, reform and regulation that could impact their wealth accumulation. Social media and other digital technologies theoretically and potentially could support a more-healthy public sphere by channeling information, providing neutral platforms for reasoned debate, etc. But they have to be designed and programmed to do so, and people have to value those functions. Instead, they are designed to generate profit by garnering hits, likes, whatever, and people prefer or are more vulnerable to having their emotions tweaked than to actually cultivating critical thinking and recognizing prejudice. Thus, emotional provocation is foregrounded. Even if there is a weak will to design more-equitable applications, recent research demonstrates that even AI/machine learning can reflect deep-seated biases of humans, and the new apps will be employed in ways that reflect the biases of the users – facial-recognition software illustrates both trends. And even as the problems with something like facial recognition may get recognized and eventually repaired, there are many, many more new apps being rapidly developed, the negative effects of which won’t be recognized for some time.”

Alexa Raad, chief purpose and policy officer at Human Security wrote, “Business models drive innovation. The quest for advertising revenue has driven innovations in the design of digital spaces as well as innovations in machine learning. Advertising – a primary profit center for tech behemoths like Facebook, Google, Twitter and TikTok – relies upon algorithms that engage and elicit an emotional response and an action (be it to buy a product or buy into a system of beliefs). It is hard to see new business models emerging that have the same economic return.”

Ian Peter, Australian internet pioneer, futurist and consultant, commented, “Monetisation of the digital space seems to be a permanent feature and there seems to be no mechanism via which concerned entities can address this.”

A leading internet infrastructure architect who has worked at major technology companies for more than 20 years, responded, “From the perspective of the designers and operators of these digital spaces, individual users are ‘shapeable’ toward an idealistic set of ends (users are the means toward the end of an ideal world) rather than being ends in themselves who should be treated with dignity and respect. This means the designers and operators of these digital spaces truly believe they are ‘doing good’ by creating systems that can be used to modify human behavior at large scale. Although this power has largely been used to increase revenue in the past, as the companies move more strongly into the political realm and as governments realize the power of these systems to shape behavior, there will be ever-greater collusion between the operators of these digital spaces and governments to shape societies toward ends that the progressive elements of governments believe will move societies toward their version of an ‘ideal future.’ There is little any individual can do to combat this movement, as each individual voice is being drowned in an overwhelming sea of information, and individual voices that do not agree with the vision of the progressive idealists are being depromoted, flatly filtered and – in many cases – completely deplatformed. The problem is one of human nature and our beliefs about human nature.”

A Chinese social media researcher said he doubts any sort of beneficial redesign will emerge, writing, “We need to redesign the internet, but many incumbents won’t yield, or – based on the same reasons – they won’t let it happen. Whether or not we can tame big tech politically, there are so many other challenges to the architecture of internet that everything is leaning toward being controlled and centralized, eventually becoming fragile enough to be further abused or fall into worse perils.”

Russell Newman, associate professor of digital media and culture at Emerson College, observed, “Assuming we remain in a moment of unabated present forward movement, what prevails is a set of business models that continue to rely heavily on intensified tracking with an assist from artificial intelligence and machine learning, all of which we now know bake in societal inequities rather than alleviating them and point systems far away from any democratic outcome. Many of the debates about misinformation occurring now are in fact epiphenomena of several trends as parties harness them toward various ends. Several trends worry me in particular:

  1. While the largest tech companies receive the largest share of attention, the conduit providers themselves – AT&T, Comcast, Spectrum, Verizon – have been constructing their own abilities to track their users for the purpose of selling data about them to advertisers or other comers, and/or to strengthen their ability to provide intermediary access to such information across supply chains and more. Verizon’s recent handover of its Verizon Media unit to Apollo only means that one of the largest tracking entities in existence has been transferred to a sector that cares even less about the quality of democratic communications, seeking instead deeper returns. Clampdowns by tech giants on third-party tracking is similarly likely only serving to consolidate the source of tracking information on users with fewer, larger players. This is to leave aside that we are nowhere close to serious privacy legislation at the federal level.
  2.  Adding to this, the elimination of network neutrality rules by the FCC is devastating for future democratic access to communications. In fact, the Trump [administration’s] FCC did not just remove network neutrality rules but took the agency itself out of even overseeing broadband communications overall. The resultant shift from common carriage communications, which required providers to take all paying comers, to private carriage portends all sorts of new inequities and roadblocks to democratic discourse while also potentially intensifying tracking (blocking the ability to use VPNs, perhaps). Maddeningly, the Biden administration shows little serious interest in fixing this; the fact it has yet to even hint at appointing a tiebreaking Democratic FCC commissioner with dwindling time remaining in this Congress is a disaster.
  3. Our tech giants are not just advertising behemoths but are also effectively and increasingly military contractors in their own right, with massive contracts with the intelligence and defense arms of the government. This instills troubling incentives that do not point toward increased democratic accountability. Facial-recognition initiatives in collaboration with police departments similarly portend intensifications of existing inequities and power imbalances.
  4. Traditional media upon which democratic discourse depends is continuing to consolidate; to add insult to injury, it is becoming financialized. Newspapers in particular are doing so under the thumb of hedge funds with no commitment to democratic values, instead seeing these important enterprises as revenue centers to wring dry and discard. ‘Citizen journalism’ is not a foundation for a democracy; a well-resourced sector prepared and incentivized to do deep investigative reporting about crucial issues of our time is. Emergent entities like Vox, Buzzfeed and Axios themselves received early support from the usual giants in tech and traditional media; and their own logics don’t necessarily lean toward optimally democratic ends, with Axios as recently as late 2020 telling the Wall Street Journal it saw itself as a software-as-a-service provider for other corporations.”

Erhardt Graeff, assistant professor of social and computer science at Olin College of Engineering, commented, “The only way we will push our digital spaces in the right direction will be through deliberation, collective action and some form of shared governance. I am encouraged by the growing number of intellectuals, technologists and public servants now advocating for better digital spaces, realizing that these represent critical public infrastructure that ought to be designed for the public good. Most important, I think, are initiatives that bring technologists together to realize the public purpose of their work, such as the Design Justice Network, public interest technology and the tech worker movement. We need to continue strengthening our public conversation about what values we want in our technology, honoring the expertise and voices of non-technologists and non-elites; use regulation to address problems such as monopoly and surveillance capitalism; and, when we can, refuse to design or be subject to antidemocratic and oppressive digital spaces.”

Marcus Foth, professor of informatics at Queensland University of Technology, exclaimed, “Issues of privacy, autonomy, net neutrality, surveillance, sovereignty, etc., will continue to mark the lines on the battlefield between community advocates and academics on the one hand, and corporations wanting to make money on the other hand. Things could change for the better if we imagine new economic models that replace the old and tired neoliberal market logic that the internet is firmly embedded in. There are glimpses of hope with some progressive new economic models (steady state, degrowth, doughnut, and lots of blockchain fantasies, etc.) being proposed and explored. However, I am doubtful that the vested interests holding humankind in a firm grip will allow for any substantial reform work to proceed. These digital spaces are largely hosted by digital platform corporations operating globally. In the early days of the internet, the governance of digital spaces on the top ‘applications’ layer of the OSI (Open Systems Interconnection) model comprised simple and often organically grown community websites and Usenet groups. Today, this application layer is far more complex, as the commercial frameworks, business plans and associated governance arrangements – including policies and regulations – have all become far more sophisticated. While the pace with which this progression advances and seems to accelerate, the direction since the World Wide Web went live in 1993 has not changed much. The underlying big platform corporations that have emerged are strongly embedded in a capitalist market logic, set to be profitable following outdated neoliberal growth key performance indicators (KPIs). What they understand to be ‘better’ is based on commercial concerns and not necessarily on social or community concerns.”

Dan Pelegero, a consultant based in California, responded, “If the approach toward making our digital spaces better is either profit-driven or compliance-driven, without any other motivators, then the economics of our digital spaces will only make life better for the owners of platforms and not the users. The issues around the governance of our digital spaces do not have to do with technology, they have to do with policy and how we, as people, interact. Our bureaucracies have moved too slowly to keep up with the pace of communication changes. Regulation of these spaces is predominantly a volunteer-led effort or still remains a low-compensation, high-labor activity.”

Liza Potts, professor of writing, rhetoric and American cultures at Michigan State University, wrote, “The lack of action on the part of platform leaders has created an environment where our democracy, safety and security are all at risk. At this point, the only solution seems to be to break apart the major platforms, standardize governance, implement effective and active moderation and hold people accountable for their actions. Without these moves, I do not see anything changing.”

Jeremy West, senior digital policy analyst at the Organization for Economic Cooperation and Development (OECD), said there are ways to make a difference outside of a full tech and government commitment to flipping the script entirely, writing, “Neither tech leaders nor politicians (with some scattered exceptions) have been especially helpful, and I don’t have much hope for improvement there. However, by 2035 I expect to see users having substantially greater control over the data they wish to share, and more options for accessing formerly ‘free’ services by choosing to pay a pecuniary fee rather than sharing their data. Greater transparency from online service providers about harmful content, including mis/disinformation, is on the way. That will improve the evidence base and facilitate better policymaking in ways that are not currently possible. I expect to see terrorist and violent extremist content, child sexual abuse material and the like pushed into ever-smaller and more-remote corners of the internet. That is not to say that it will be eradicated, though.”

A retired U.S. military strategist commented, “The financial power of the major social media platforms, enabling technology providers and competing macro political interests, will act in ways that enable maximum benefit for them and their financial interests. We need look no further than capitalist experience in other economic sectors, in which the industries of digital spaces have thus far not demonstrated a singleness or distinctive separateness from the type of economic power exercise and consolidation quite familiar to us in U.S. industry.”

A leader of a center for society, science, technology and medicine responded, “Without a major restructuring of capitalist incentives or other substantial regulatory action – neither of which I think are likely unless climate change makes it all a moot point – digital spaces and digital life will continue to be ‘business as usual,’ emphasis on the business. While my teaching in technology ethics writ broadly betrays at least some optimism that things *could* change, I think it is unlikely that they will.”

A 30-year veteran of internet and web development said, “Maybe – if we are lucky – over the next decade or two various digital spaces and people’s use of them will change in ways that serve (or seem to serve) the public good (within an evolving definition of that term) to an extent greater than they do today. It is likely that the digital oligarchy, as well as Wall Street, are going to fight tooth-and-nail to maintain the status quo. In the meantime, we are barreling headlong toward a country that is isomorphic, with Huxley’s ‘Brave New World,’ Collins’ ‘The Hunger Games,’ Atwood’s ‘The Handmaid’s Tale,’ etc. (Cf. this quote from Chris Hedges‘ article titled ‘American Requiem’: ‘An American tyranny, dressed up with the ideological veneer of a Christianized fascism, will, it appears, define the empire’s epochal descent into irrelevance.’)” 

An internet pioneer wrote, “The major changes in society point to greater stratification in its wealth. So, for-fee subscription services will do a better and better job of serving public good while only serving the wealthy. Free services that compete will continue to profit from manipulation by advertisers and other exploitive actors. Thus, community spaces will get better and worse depending on their revenue models, and social problems will not be addressed. (Black swan events like a change in our economic system might change things. Don’t bet on it.)”

A vice president for learning technologies predicted, “Tech leaders will help achieve improvements through their personal guidance (public and private) of their concerns to recognize the larger missions/aims that exist beyond corporate growth and personal power. Improvements in the digital lives of the average users will come through increasing the transparency of sources of information. Tech reforms I foresee include filtering mechanisms that recognize a filter’s origins – such as gatekeepers recognized for point of view, methods, etc. Persistent concerns will remain, especially the emerging approaches we see today in which players are gaming the system to harmful ends, including various forms of warfare.”

The leader of a well-known global consulting firm commented, “The emergence of new business and economic models, and a new and updated view of what public commons are in the digital age might possibly help. Digital spaces suffer from the business models that underly them, those that encourage and amplify the most negative behaviors and activities.”

A policy entrepreneur said, “Some corporations will successfully market their differentiation as leaders in trust-building and proactive ethical behaviors. However, there will be some holdouts continuing to exploit surveillance capitalism and providing platforms for misinformation that serves social division. All wealthy Western countries are going to surpass the U.S. in responsible digital technology regulations before 2030. Between compliance with the non-U.S. standards and the example provided by Engine No. 1 to shake up boards of directors, multinational corporations will choose the lowest-cost compliance strategies and will be swayed not to be on dual tracks.”

An accomplished programmer and innovator based in Berkeley, California, wrote, “Simply put, digital spaces are driven by monetary profit, and I don’t see that changing, even by 2035. The profit motive means that providers will continue to do the least amount of work necessary to maximize profit. For example, software today is insecure and unreliable, but the cost of making it secure and reliable is higher than providers want to pay; it would cut into their profits. In a slightly different but still related vein, the ‘always on’ aspects of digital spaces discourage people from human things like inner contemplation or even just reading a book. The providers don’t make money if you are just meditating on inner peace, so they make their platforms as addictive as possible. There is no incentive for them to do otherwise.”

An editorial manager for a high-tech market research firm said, “Elites are now firmly in control of emerging digital technology. The ‘democratization’ of internet resources has run its course. I don’t see these trends changing over the next few decades.”

A professor of sociology at an American Ivy League university responded, “Unless we re-educate engineers and tech-sector workers away from their insane notions of technology that can change society in ways in line with their ideologies and toward a more nuanced and grounded understanding of the intersection of technology and social life, we’ll continue to have sociopathic technologies foisted upon us. Unless we can dismantle the damaging personal data economy and disincentivize private data capture and the exchange of database information for profit, we will continue to see the kinds of damage through personalization algorithms, leaks, and the very real possibilities that such information is used to nefarious ends by governments. Until Jack Dorsey pulls the plug on Twitter and Mark Zuckerberg admits that Facebook has been a terrible mistake, and Google steps away from personal data tracking, we are not headed anywhere better by 2035.”

A professor emeritus of social sciences commented, “The tremendous clout of advertisers makes it extremely difficult to restrict corporate surveillance, which often is done insecurely, leaving everyone vulnerable to hackers and malware. The struggle for security in online communications and transactions from attempts to mandate backdoors everywhere makes it difficult for device and system developers to make a secure computer or phone. Another challenge is finding ways to reduce hate and dangerous misinformation while preserving civil liberties and free speech. But I do believe that that the continuation of the information commons in the form of open courseware, Wikipedia, the Internet Archive, fair-use provisions in intellectual property laws, open university scientific papers, all of the current and future online collaborations to address environmental problems, and open access to government will provide support to all of our efforts to make 2035 a better world than it seems to be heading toward at the moment.”

Government regulation plus less-direct ‘soft’ pressure by government will help encourage corporations’ adoption of more-ethical behavior

A share of these experts, whether they are hopeful or not for significant improvement of the digital public sphere, argued that regulation is necessary. They expect that legislation and regulation of digital spaces will expand, nudging the profit-focused firms in the digital economy to focus on issues of privacy, surveillance and data rights and finally rein in misinformation and disinformation to some degree. While some see legislation as a remedy, some do not agree, noting that regulation could lead to unwanted negative outcomes – among them the stifling of innovation and free speech and the further empowerment of authoritarian governments. Thus, a share of these experts suggest that a combination of carefully directed regulation and “soft” public and political pressure on big tech will lead its leaders to be more responsive and attuned to ethical design aimed at better serving the public interest.

Andrew Wyckoff, director of the OECD’s Directorate for Science, Technology and Innovation, predicted, “The twin forces of innovation and heightened recognition that the digital infrastructure is essential to the economy and society will have the biggest impact. As for innovation we will witness a profound change as ubiquitous computing enabled by fibre, 5G and embedded sensors and linked equipment and devices (the Internet of Things) augmented by AI becomes a reality. This new platform will unleash another innovation cycle. The pandemic has made it clear to all policymakers that the digital infrastructure – from the cables to widely used applications and platforms – are essential public services and the light-touch regulation of the internet’s first few decades will fade.

“Governments are slowly developing the capacity and know-how to govern the digital economy and society. This new cadre of policymakers will assert ‘sovereignty’ over what was ungoverned and will seek to promote digital spaces as useful, safe places, just as they did for automobiles and roads in the 20th century. What will be noticeably improved about digital life for the average user 2035? Key initiatives will be digital identities, control over personal data, protection of vulnerable populations (e.g., children) and measures to improve security.

“What current problems will persist and continue to raise major concerns? The end-to-end property of the internet, which is its ‘democratising’ feature has led to an inevitable decentralisation and recentralization, altering power dynamics. This shift is destabilising and naturally resisted by incumbents, causing strife and calls to reassert control.”

Peng Hwa Ang, professor of media law and policy at Nanyang Technological University, Singapore, commented, “What we are seeing is friction arising from the early days of use of disruptive technologies. We need law to lubricate these social frictions. Yes, I know Americans tend to see laws as stopping action. But consider a town where all the traffic lights are green. If laws, judiciously formulated, passed and enforced, are social lubricants, these frictions will be minimised. I expect therefore that people will appreciate the need for such social lubrication. John Perry Barlow’s Declaration of the Independence of Cyberspace is not an ideal. It was obvious to me when it was published that it was not realistic. It has taken many people some 20 years to realise that. The laws need to catch up with the technology. Facebook for example is now aware that it needs some regulation (internal rules short of hard government laws) in order to actually help its own business. Without some restraint, it is blamed for, and thus associated with, bad and criminal action. In short, I am optimistic because I think:

  1. We are realising the futility of [Barlow’s] declaration.
  2. The problems we face and will face highlight the need for social lubrication at different levels.
  3. These regulations will come to pass.”

Stephan G. Humer, internet sociologist and computer scientist at Fresenius University of Applied Sciences in Berlin, said, “Initiatives aimed at empowerment and digital culture will probably have the greatest impact because this is where we continue to see the greatest deficits and therefore the greatest need for change. People need to be able to understand, master and individually shape digitization, and this requires appropriate initiatives. A diverse mix of initiatives – some governmental, some nongovernmental – will play a crucial role here! The result will be that digitization can be better appreciated and individually shaped. Increasingly, the effects that were expected from digitization at the beginning will come to pass: a better life for everyone, thanks to digital technology.

“The more digital technology becomes a self-evident and controlled part of our lives, the better. People will not let this aspect of control be taken away from them. The dystopias presented so far, some of which have prophesied a life ‘in the matrix,’ will not become an issue. So far, sanity has almost always triumphed, and that will not change. The more people in the world can use the internet sensibly, the more difficult it will be for despots and dictators.”

Willie Currie, who became known globally as an active leader with the Independent Communications Authority of South Africa, predicted, “The combination of antitrust interventions in the U.S. and algorithm regulation in the European Union will rein in the tech companies by 2035. Organised civil society in both territories as well as increasing digital literacy will drive the demand for antitrust and regulatory action. This is the way it always is with technological development. If one regards the internet as a hyper-object similar to global warming, regulating its problematic aspects will require considerable global coordination. Whether this will be possible in the current global political space is unclear. Legislation to fix problems arises after the implementation of new technologies. During the 2000s there was an opportunity to introduce global internet regulation through a treaty process, but the process broke down into two global blocs with different views on the matter. So global regulation is unlikely before 2035.

“What is most likely to happen is that the European Union will be the main regulatory reference point for many countries, with the U.S. following an antitrust approach and the authoritarian countries seeking greater control over their citizens’ use of the internet. As the lack of accountability and the political and psychological abuse perpetrated by tech leaders in social media continue to multiply, the backlash against them will grow. The damage to democracy and the social fabric caused by unregulated tech companies in the West will continue to become more visible, and we will reach a point where the regulation of algorithms will become a key demand.”

Evan Leibovitch, director of community development at Linux Professional Institute, commented, “The extent to which governments can act – and preferably collaborate – on these issues will determine everything. This can go either way. The internet can be used as a tool for elite control of the masses – China is already providing a blueprint – or for massive social progress. Whether the transformation of digital spaces becomes a net positive or negative is dependent upon political and economic factors that are too volatile to predict. Much depends upon the level of regulation that governments choose to impose, both on the concentration of monopoly power and on the necessity of making computer users responsible for what they say. This will impact laws and regulations on monopoly concentration, libel/slander and intellectual property.”

A futurist and consultant based in Europe urged, “We need radical regulation, transparency and policy changes enacted at scale. This has to happen. Movement toward regulation feels like pushing one very small boulder up a very big hill. We need more regulation, more dissent within platforms, more whistleblowers, more deplatforming of hate speech/harmful content, more-aggressive moderation of misinformation/disinformation on platforms, etc. We just need more.”

The founder and director of a digital consultancy observed, “The last 20 years of the internet have very effectively answered the question, ‘How do we profit materially from the online world?’ The next 20 years need to answer the question, ‘How do we profit humanely and humanly from the online world?’ Government initiatives that target algorithms are an excellent start in protecting citizens. If we are to survive the coming decade, change is essential. The platforms we all use to communicate online must be reoriented toward the good of their users rather than only toward financial success. Classifying some entities as ‘information utilities’ might be a good first step. Legislation around technology has to be pitched at a truly effective level. Tackling the rules governing algorithms is a good meta-level for this. Applications or platforms, like Facebook, may change or even vanish, but the rule set will remain. This kind of thinking is not common in the world of government, so technologists and designers need to be engaged to help guide the legislation conversation and ensure it’s happening at an effective level. Arguably, a lot of the social progress (e.g., LGBTQ+ rights) that’s been made in recent decades can be credited to the access the internet has given us to other ways of thinking and to other ways of life and the tolerance and understanding this access has bred. If we can reorient technology to serve its users rather than its oligarchs, perhaps that path of progress can be resumed.”

Hume Winzar, a professor and director based at Australia’s Macquarie University with expertise in econometrics and marketing theory, commented, “A series of crises like those occurring now regarding election rigging and related conspiracy theories will force changes to publishing laws, so that posters must be identified personally and take personal responsibility for their actions. AI-based fact-checkers and evidence collection will be automatic, delivering a validity score on each post and on each poster. Of course, that also means we will see more-sophisticated attempts at ‘gaming’ such systems.”
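
Winzar’s proposed ‘validity score’ is, at bottom, a weighting exercise. The sketch below renders it in Python purely as an illustration: the stand-in scoring functions, the 0-1 scale and the 70/30 weighting are all assumptions of this sketch, not a description of any deployed fact-checking system, and the inputs it relies on are exactly where the ‘gaming’ he anticipates would occur.

```python
# Minimal sketch of a post "validity score" of the kind Winzar describes.
# Everything here is hypothetical: the toy scoring stubs, the 0-1 scale and
# the 70/30 weighting are illustrative assumptions, not a real system.

from dataclasses import dataclass


@dataclass
class Post:
    text: str
    author_id: str


def claim_support(text: str) -> float:
    """Stand-in for an AI fact-checker: the fraction of checkable claims in
    the text that collected evidence supports. A real system would call a
    verification model; this toy version just penalizes one suspect phrase."""
    return 0.2 if "they don't want you to know" in text.lower() else 0.9


def author_reputation(author_id: str, history: dict[str, float]) -> float:
    """Stand-in for a poster-level track record: the historical validity of
    the author's previous posts, defaulting to neutral for unknown authors."""
    return history.get(author_id, 0.5)


def validity_score(post: Post, history: dict[str, float],
                   claim_weight: float = 0.7) -> float:
    """Blend claim-level evidence with the poster's track record. The
    weighting is arbitrary, and, as Winzar notes, both inputs are exactly
    where adversaries would try to game such a system."""
    support = claim_support(post.text)
    reputation = author_reputation(post.author_id, history)
    return claim_weight * support + (1 - claim_weight) * reputation


history = {"user42": 0.9}
post = Post("They don't want you to know this one trick!", "user42")
print(round(validity_score(post, history), 2))  # 0.41: dubious claim, good record
```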

Francine Berman, distinguished professor of computer science at Rensselaer Polytechnic Institute, said, “There are a lot of horror stories – false arrests based on bad facial recognition, data-brokered lists of rape victims, intruders screaming at babies from connected baby monitors – but there is surprisingly little consensus about what digital protections – specific expectations for privacy, security, safety and the like – U.S. citizens should have. We need to fix that. Europe’s General Data Protection Regulation (GDPR) is based on a well-articulated set of digital rights of European Union citizens. In the U.S. we have some specific digital rights – privacy of health and financial data, privacy of children’s online data – but these rights are largely piecemeal.

“What are the digital privacy rights of consumers? What are the expectations for the security and safety of digital systems and devices used as critical infrastructure? Specificity is important here because to be effective, social protections must be embedded in technical architectures. If a federal law were passed tomorrow that said that consumers must ‘opt in’ to personal data collection by digital consumer services, Google and Netflix would have to change their systems (and their business models) to allow users this kind of discretion. There would be trade-offs for consumers who did not opt in: Google’s search would become more generic, and Netflix’s recommendations wouldn’t be well-tailored to your interests. But there would also be upsides – opt-in rules put consumers in the driver’s seat and give them greater control over the privacy of their information.

“Once a base set of digital rights for citizens is specified, a federal agency should be created with regulatory and enforcement power to protect those rights. The FDA was created to promote the safety of our food and drugs. OSHA [Occupational Safety and Health Administration] was created to promote the safety of our workplaces. Today, there is more public scrutiny about the safety of the lettuce you buy at the grocery store than there is about the security of the software you download from the internet. Current bills in Congress that call for a Data Protection Agency, similar to the Data Protection Authorities required by the GDPR, could create needed oversight and enforcement of digital protections in cyberspace.

“Additional legislation that penalizes companies, rather than consumers, for failure to protect consumer digital rights could also do more to incentivize the private sector to promote the public interest. If your credit card is stolen, the company, not the cardholder, largely pays the price. Penalizing companies with meaningful fines and holding company personnel legally accountable – particularly those in the C suite – provides strong incentives for them to strengthen consumer protections. Refocusing company priorities would positively contribute to shifting us from a culture of tech opportunism to a culture of tech in the public interest.”
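
Berman’s point that social protections must be embedded in technical architectures can be made concrete: an opt-in rule ultimately becomes a conditional in a service’s data path. The minimal sketch below, with entirely hypothetical names and behavior, illustrates the trade-off she describes; it is not any company’s actual system.

```python
# Minimal sketch of an opt-in gate of the kind Berman describes: personal
# data is only retained or used when the user has affirmatively consented.
# Class and field names are hypothetical, not any company's real API.

from dataclasses import dataclass, field


@dataclass
class UserProfile:
    user_id: str
    opted_in: bool = False          # default is NO collection (opt-in, not opt-out)
    history: list[str] = field(default_factory=list)


def record_search(profile: UserProfile, query: str) -> None:
    """Store the query for personalization only if the user opted in."""
    if profile.opted_in:
        profile.history.append(query)
    # Without consent the query is answered but never retained.


def results_for(profile: UserProfile, query: str) -> str:
    """Generic results for non-consenting users; tailored ones otherwise.

    This is the trade-off Berman names: opting out yields more generic
    results, opting in yields tailored ones, and the choice sits with the
    consumer rather than the platform."""
    record_search(profile, query)
    mode = "personalized" if profile.opted_in else "generic"
    return f"{mode} results for {query!r}"


alice = UserProfile("alice", opted_in=False)
print(results_for(alice, "running shoes"))  # generic results; nothing retained
```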

Theresa Pardo, senior fellow at the Center for Technology in Government at University at Albany-SUNY, commented, “There is an increasing appreciation of the potential of technology to create value and, more importantly, an increasing recognition of the risk to society from a lack of deep understanding of the potential unintended consequences of new and emerging technologies. It is this recognition, among both leadership and the public, that will drive tech leaders and politicians to fulfill their unique roles and responsibilities: addressing the need for governance and creating the governance required to ensure that the necessary understanding is built among those leaders and the public. The lack of understanding of the need to govern new and emerging technologies – governance that requires trustworthy AI, for example – is a problem that is only beginning to be diminished.”

A professor of information science based in California said, “Regulation is always a balance between competing values. It seems that it is time for the pendulum to swing toward more restrictions for both social media and other forms of media (broadcast, cable, etc.) in terms of consolidation of ownership and the way content is distributed. It is important to remember that the technical systems we have are often a series of accidental or almost arbitrary choices that then become inevitable. But we can rethink these choices. For instance, video sites do not have to allow anyone to upload anything for instant viewing. Live streaming does not have to be available for for-profit reasons. Shares and likes and followers do not have to be part of an online system. These choices allow one or a few companies to make use of network externalities and become the largest, but not necessarily the best for individuals or society.”

A portion of respondents were confident that regulation will emerge soon.

Ed Terpening, industry analyst with the Altimeter Group, predicted, “Increased regulatory oversight will result in uniform rules that ensure digital privacy, equity and security. Tech markets – such as those involved in development of the Internet of Things (IoT) – have shown that they aren’t capable of self-regulation and the harms they have caused seldom have consequences that change their behavior. Legislative action will succeed through a combination of consumer groundswell as well as political input from business leaders whose operations have been impacted by digital crimes such as ransomware attacks and intellectual property theft. Still, while the scope and value of digitally connected devices will help consumers save time and money in their daily lives, in future the threat of bad international state actors who target those systems will increase the risk of disruption and economic harm for consumers.”

Tim Bray, founder and principal at Textuality Services, previously a vice president in the cloud computing division at Amazon, wrote, “There’s a surge of antitrust energy building. Breaking up a few of the big techs is very likely to improve the tenor of public digital conversation. There is an increasing awareness that social media that is programmed for engagement in a way that’s oblivious to truth and falsehood is damaging and really unacceptable.”

Jan Schaffer, director of J-Lab, said, “I believe digital spaces will transform for the public good by 2035. I expect it to happen due to government, and perhaps economic, intervention. I expect there will be legislation requiring internet platforms to take more responsibility for postings on their sites, particularly those that involve falsehoods, misinformation or fraudulent fundraising. And I suspect that the social media companies themselves will bow to public pressure and implement their own reforms.”

An information science professional based in Europe responded, “Before 2035 we shall see improved mechanisms for recognizing, identifying and then following up on each and every discriminatory or otherwise improper action by the public, politicians or any group that does harm. Digital spaces and digital life will be transformed due to more and better regulation and the education of public audiences along with the setting of explicit rules of acceptable use and clear consequences for abuse. Serious research and analysis are needed in order to increase our understanding of the situation before establishing new rules and regulation.”

The director of a cognitive neuroscience group predicted, “There will be regulatory reform with two goals: increased competition and public accountability. This has to be developed and led by political leaders at all levels and it will require active engagement by technology companies.”

Daniel S. Schiff, a Ph.D. student at Georgia Tech’s School of Public Policy, where he is researching artificial intelligence and the social implications of technology, commented, “In the near-term future (next 10 to 15 years), I expect that top-down regulation will have the biggest impacts on digital environments, particularly through safeguarding privacy and combating some of the worst cases of misinformation, hate speech, and incitements to violence. Regulations shaping data governance and protecting privacy rights like GDPR and CCPA [California Consumer Privacy Act] are well suited to tackle a subset of current problems with digital spaces and can do so in a relatively straightforward fashion. Privacy by design, opt-in consent, purpose limitation for data collection and other advances are likely to accelerate through diffusion of regulatory policy, buttressed by the Brussels and California Effects, and the pressure applied to technology companies by governments and the public. For example, there may be enough policy pressure along the lines of the EU’s Digital Services Act and Digital Markets Act to limit the use of micro-targeted advertising, perhaps for vulnerable populations and sensitive issues (e.g., politics) especially. A rare consensus in U.S. politics also suggests that federal action is likely there as well. These would no doubt constitute improvements in digital life.”

A number of respondents noted that the largest amount of democratic regulation of digital technology has been emerging in Europe first and said they expect this trend to continue.

Christopher Yoo, founding director of the Center for Technology, Innovation and Competition at the University of Pennsylvania, said, “Digital spaces have become increasingly self-aware of the impact that they have on society and the responsibility that goes along with it. The government interventions that have gained the most traction have been in the area of economic power, highlighted by the EU and U.S. cases against Google and Facebook and proposed legislation, such as the EU’s Digital Markets Act and the bloc of bills recently reported by the House Judiciary Committee. Interestingly, the practices that are the focus of these interventions are the most ambiguous. Digital platforms now generate trillions of U.S. dollars in economic value each year, with many of the practices playing essential roles, and many of the supposed harms are backed more by theory than by empirical evidence. Any economic interventions that are justified must be carefully targeted to curb abuses proven by evidence rather than conjecture, in ways that do not curtail the benefits on which consumers now depend. More important, though, is the impact of digital platforms on political discourse. In the U.S., the First Amendment limits the government’s ability to intervene. Any reforms must come from the digital platforms themselves. Fortunately, they are showing signs of greater conscientiousness on that front.”

Rick Lane, founder and CEO of Iggy Ventures, wrote, “I believe that policy makers around the world, the general public and tech companies are coming to the realization that the status quo around tech public policy that was created during the 1990s is no longer acceptable or justified. The almost unanimous passage of FOSTA/SESTA, the EU’s NIS2, the UK’s recent child safety legislation, Australia’s encryption law, and the continued discussions around modifying Section 230 of the U.S. 1996 Communications Decency Act and privacy laws here in the U.S. highlight how views have drastically changed since the SOPA/PIPA fights.”

A futurist and consultant based in Europe predicted, “Regulation will significantly impact the evolution of digital spaces, tackling some of the more egregious harms they are currently causing. The draft UK ‘online safety’ legislation – in particular the proposed duty of care for platforms – is an example of a development that may help here, together with measures to remove some of the anonymity that users currently exploit. A move away from the current, largely U.S.-centric model of internet governance will enable the current decline to be reversed. The current ‘digital sovereignty’ focus of the European Commission will be helpful in this regard, given that progress only seems to be made when tech companies are faced with the threat or actual imposition of controls backed by significant financial penalties, potentially with loss of access to key markets.”

A foresight strategist based in Washington, D.C., wrote, “I believe interventions such as enforceable data-privacy regulations, antitrust enforcement against ‘big tech,’ better integration of humanities and computer science education and continued investment in internet-freedom initiatives around the globe may help create conditions that improve digital life for people everywhere. This is necessary. By 2035, exogenous factors such as climate change and authoritarianism will play even more significant roles in shaping global society at large and social adoption of digital spaces in particular. The net results will be both the increased use of pervasive digital surveillance/algorithmic governance by large state and commercial actors and increased grassroots techno-social liberatory activity.”

Thomas Streeter, a professor of media, law, technology and culture at Western University, Ontario, Canada, commented, “The character of digital life will largely be determined by nondigital issues like global warming, the state of democracy and globalization, etc. That said, if an international coalition of liberal social democracies is able to dramatically reorganize digital technologies, perhaps through first breaking up the big companies with antitrust law and then regulating the pieces according to a mixture of common carrier and public media principles, while replacing advertising with subscriptions and public subsidies, that will help. There is no way to know if such efforts would succeed, but stranger things have happened in the past, and if we don’t try, we will guarantee failure.”

Several respondents specified particular approaches they expect might be most effective.

The co-founder of a global association for digital analytics responded, “What reforms or initiatives may have the biggest impact by 2035? I expect:

  • Effective regulation of social media companies and major service providers, such as Amazon and Google. These monopolies will be broken up.
  • The rise of better citizen awareness and better digital skills.
  • The rise of indie resistance – anti-surveillance apps, small-scale defensive AI, personal servers, cookie blocking, etc.
  • The for-profit tech leaders will not be a source of positive contribution toward change. Some politicians will continue to seek regulation of abusive monopolies, but others may have an equally negative effect. I think the most influence will come via demands for social/cultural change arising from the general public.
  • Monopoly domination by current leaders may be removed or reduced; however, emergent technology will drive new monopoly domination by large corporations in aspects of tech and society that are currently unpredictable.
  • Common, cheap and widespread AI applications will dominate concerns and create the most challenges in 2035.”

Jonathan Taplin, director emeritus at the University of Southern California’s Annenberg Innovation Lab and a member of the advisory board of the Democracy Collaborative at the University of Maryland, commented, “In the face of a federal judge’s recent dismissal of the FTC’s monopoly complaint against Facebook, it is clear that breaking up big tech may be a long, drawn-out battle. Better to focus now on two fairly simple remedies. First, remove all ‘safe harbor’ liability shields from Facebook, YouTube, Twitter and Google. There are currently nine announced bills in Congress to address this issue. The sooner these services acknowledge that they are the largest publishers in the world, the sooner they will have to take on the responsibilities that all publishers have taken since the invention of the printing press. Second, Facebook, Google, YouTube, Instagram and Twitter have to start paying for the content that allows them to earn billions in ad revenues. The Australian government has passed a new law requiring Google and Facebook to negotiate with news outlets to pay for their content or face arbitration. As the passage of the law approached, both Facebook and Google threatened to withdraw their services from Australia. But Australia called their bluff and they withdrew their threats, proving that they can still operate profitably while paying content creators. The Journalism Competition and Preservation Act of 2021 that is currently before the Judiciary Committee in both House and Senate would bring a similar policy to the United States. There is no reason Congress couldn’t fix these two problems before the end of 2021.”

Robin Brewer, professor of information, electrical engineering and computer science at the University of Michigan, said, “As AI is woven into every aspect of digital life, we must be careful to protect digital spaces while mitigating harms that affect marginalized communities (e.g., age, disability, race, gender). Reforms with the biggest impact will be those that enforce regulation of AI-based technologies with routine audits for potential bias or errors. The most noticeable improvements about digital life by 2035 will likely be better ways for digital residents/users to report AI-related harms, more accountability for such harms, and as such, more trust in using digital spaces for every aspect of our lives (e.g., communication, driving, health) across age groups.”
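
One common quantitative core of the ‘routine audits for potential bias’ Brewer calls for is a demographic-parity check, which compares a system’s positive-decision rates across groups. The sketch below is illustrative only: real audits combine many complementary metrics, and the 0.8 threshold borrows the familiar ‘four-fifths rule’ heuristic rather than anything Brewer specifies.

```python
# Minimal sketch of one routine bias check of the kind Brewer describes:
# demographic parity, i.e., comparing a system's positive-decision rate
# across demographic groups. Real audits use many complementary metrics;
# the 0.8 threshold below mirrors the common "four-fifths rule" heuristic.

from collections import defaultdict


def positive_rates(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """decisions: (group_label, got_positive_outcome) pairs."""
    totals: dict[str, int] = defaultdict(int)
    positives: dict[str, int] = defaultdict(int)
    for group, positive in decisions:
        totals[group] += 1
        if positive:
            positives[group] += 1
    return {g: positives[g] / totals[g] for g in totals}


def parity_audit(decisions: list[tuple[str, bool]], threshold: float = 0.8) -> bool:
    """Flag the system if any group's rate falls below `threshold` times
    the most-favored group's rate."""
    rates = positive_rates(decisions)
    best = max(rates.values())
    return all(rate >= threshold * best for rate in rates.values())


# Example: a 0.6 vs. 0.3 approval-rate gap fails the four-fifths check.
sample = [("A", True)] * 6 + [("A", False)] * 4 + [("B", True)] * 3 + [("B", False)] * 7
assert positive_rates(sample) == {"A": 0.6, "B": 0.3}
assert parity_audit(sample) is False
```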

A machine learning expert predicted, “Regulations associated with privacy, reporting, auditing and access to data will have the largest impact. Uprooting the deep web and dark web to remove malicious, illicit and illegal activity will eventually be done for the public good. There will also be more research and understanding associated with challenges to individuals’ digital/physical balance as more-immersive technology becomes mainstream (e.g., virtual reality). There will be limits imposed and technology enablers will work to ensure that individuals still also get together IRL [in real life].”

Richard H. Miller, CEO and managing director at Telematica and executive chairman at Provenant Data, wrote, “What reforms or initiatives may have the biggest impact?

  1. Those that revolve around data sovereignty, the capture of personal data, rights to use, and the ability of individuals and corporate entities to delegate or license rights to use by third parties. Accompanying the reforms will be technical solutions involving fairly radical approaches to transparency through the use of zero-knowledge data storage and retrieval. By these means, clarity in the use (or definitive indication of misuse) of personal data is accomplished with reasonably strong privacy protections. And technologies that retain tamper-proof/tamper-evident data along with the provenance and lineage of data will result in provable chains of data responsibility.
  2. Telecommunication/Data Services reform that establishes principles of fairness in access, responsibility and liability for transgressions, establishment of common carriage principles to be applied by law and the willingness of governments (federal, state, regional and so on) to clearly identify, call out and appropriately penalize cartel or monopolistic business practice.

“What beneficial role can tech leaders or politicians or public audiences play in this evolution? In both cases above, technology leaders are capable of clearly describing the risks of not addressing the issues and can present them in an understandable fashion to legislative bodies and to the populace so there is an informed public. Politicians, insofar as they are responsible for the establishment and enforcement of law, are potentially the most important contributors. But should they continue (as they have in the past 20 years) to abrogate responsibility for the modernization and enforcement of regulation, they also represent the most impactful threat.

“What will be noticeably improved about digital life for the average user in 2035? Trust in the knowledge that there is greater transparency and control over the use of personal data. Trust in identification of the source of information. Legal recourse and enforcement regarding data usage, information used for manipulation, and active pursuit of cartel and monopolistic behavior by technology, telecom and media hyperscalers.”
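
The ‘tamper-proof/tamper-evident data along with the provenance and lineage’ that Miller describes is typically built on hash chaining, in which each record commits to its predecessor so that altering any entry breaks every later link. The sketch below shows only that generic pattern, under the assumptions of this illustration; it is not Provenant Data’s or anyone else’s actual design.

```python
# Minimal sketch of a tamper-evident provenance chain of the general kind
# Miller points to: each entry hashes its content together with the previous
# entry's hash, so modifying any record invalidates all later hashes.
# This is a generic pattern, not any particular product's design.

import hashlib
import json


def entry_hash(record: dict, prev_hash: str) -> str:
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()


def append(chain: list[dict], record: dict) -> None:
    prev = chain[-1]["hash"] if chain else "genesis"
    chain.append({"record": record, "prev": prev,
                  "hash": entry_hash(record, prev)})


def verify(chain: list[dict]) -> bool:
    """Recompute every link; any edited record breaks the chain."""
    prev = "genesis"
    for entry in chain:
        if entry["prev"] != prev or entry["hash"] != entry_hash(entry["record"], prev):
            return False
        prev = entry["hash"]
    return True


chain: list[dict] = []
append(chain, {"actor": "alice", "action": "collected", "data": "email"})
append(chain, {"actor": "bob", "action": "shared", "data": "email"})
assert verify(chain)
chain[0]["record"]["action"] = "deleted"   # tamper with history...
assert not verify(chain)                   # ...and verification fails
```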

Christina J. Colclough, founder of the Why Not Lab, commented, “Where I expect governments to act is on the requirement for all fake news, fake artefacts, fake videos/texts, etc., to be labelled as such. I expect also we will see advancements in the labelling of ‘bots’ so we know what we are interacting with. I also believe we will see advancements in data rights – both for workers and citizens, including much stronger collective rights to the data extracted and generated (including inferences) and stricter regulations on what Shoshana Zuboff calls ‘Markets in Human Futures.’”

Tom Wolzien, inventor, analyst and media executive, suggested the following:

  1. “Civil accountability for all platforms as publishers for what appears on/in them by any contributor, similar to the established regulation for legacy media publishers (broadcast and print).
  2. Appropriate legislation by politicians and acceptance by tech leaders.
  3. Platforms must not allow anonymity of contributors or of persons retransmitting the messages of others. Persons retransmitting should be held accountable for the material they retransmit, both by the platform and in litigation. This will force individual contributors to accept personal accountability as enforced by the platforms, which should fear civil liability. This will diminish, but not eliminate, a lot of the current issues.”

Rich Salz, a senior director of security services at Akamai Technologies, responded, “I hope that large social media companies will be broken up and forced to ‘federate’ without instances, so that global interaction is still possible but it’s not all under the control of a few players. This can be done, although some tricky (not hard) problems have to be solved. In spite of recent failed court actions tied to suits against Facebook, I maintain that the European Union and perhaps the U.S. Congress will do something.”

Valerie Bock, principal at VCB Consulting, wrote, “It has taken a very long time for the digital cheerleaders to understand how seriously destructive the use of online spaces could become. The Jan. 6, 2021, insurrection at the U.S. Capitol served as a wake-up call not only to the digerati, but to our lawmakers. I expect that the future will see creation of legislation that will hold platforms liable for providing space for the promulgation of lies and the planning of illegal activities. There will be actual, meaningful enforcement of such legislation. Of course, if such efforts are successful, they will drive a great deal of activity ‘underground,’ but the upside of that is that casual users will no longer be exposed to casual conspirators. Once the price of malfeasance goes up, it will concentrate the hardcore, who are willing to pay up to finance the fines, legal fees, etc., incurred by their hosts.”

Eileen Rudden, co-founder of LearnLaunch, commented, “Pressure from the public, governments and tech players will push for change, which is why I believe the future will be more positive than today. Internet spaces will evolve in a positive direction with the help of new legislation (or the threat of new legislation) that will cause tech spaces to modify what is considered acceptable behavior. External forces such as governments are being forced to act because the business model of the internet spaces is based on targeted advertising and the attention economy, and the tech industry will not respond without governments getting involved. Tech players’ rules for what is acceptable content will become subject to norms that have developed over time, such as those already in place offline for libel. Whether a rating system to identify reliable information can be developed is open to question. Laws were created to address shared views of what is acceptable human behavior.”

Meredith P. Goins, a group manager connecting researchers to research and opportunities, said, “The internet is being used to track people’s every waking moment so that they can either be found or be advertised to. Tech leaders will continue to make billions from reselling content the general public produces while the middle class goes extinct. This will continue until broadband and internet service become regulated like telephone, TV, etc. If not, Facebook, Twitter and all social media will continue to devolve into a screaming match with advertising.”

Sean Mead, strategic lead at Ansuz Strategy, responded, “Twitter exists on and is programmed to reward hate, intolerance, dehumanization, libel and performative outrage. It is the cesspool that most clearly demonstrates the monetization of corruption. Many people have sought out addiction to strawman mischaracterizations of other people who hold any beliefs that are in any way different from their own. Why have a ‘two-minute hate’ when you can have a full day of hating and self-righteousness every day, whether its justifications have a basis in reality or not? Algorithms are encouraging indulgence of these hate trips since doing so creates more time for the participants to be exposed to advertising. The social media oligarchy have been behaving not like platforms but – in violation of the intent of Section 230 – like publishers, promoting some views and disappearing others. If they were treated as publishers, since they are behaving as publishers, this would force quite an improvement in community behavior, particularly in regard to libel. Many businesses may choose to move to a more-controlled network where participants are tied to a verified ID and anonymity is removed. That would not remove all issues, but it would dampen much problematic behavior.”

The founder and leader of a global futures research organization wrote, “Information warfare manipulates information channels trusted by a target without the target’s awareness, so that the target will make decisions against their interest but in the interest of the entity conducting the attack. This will get worse unless we anticipate and counter, rather than just identify and delete. We could reduce this problem if we use infowarfare-related data to develop an AI model to predict future actions, to identify the characteristics needed to counter/prevent them, and to match social media users with those characteristics and invite their actions. Since nation-states are waking up to these possibilities, I think they will clearly do this or come up with even better prevention strategies.”
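
The anticipate-and-counter approach this respondent describes would, in its simplest form, begin with a supervised model trained on known influence-operation content. The toy sketch below, using scikit-learn, is offered under that assumption: the training data is invented, and a real counter-infowarfare system would model actors, networks and timing rather than text alone.

```python
# Toy sketch of the kind of model the respondent describes: a classifier
# trained on labeled influence-operation posts so new content can be flagged
# *before* it spreads, rather than identified and deleted afterward.
# The data here is an invented stand-in for real, labeled infowarfare corpora.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# (text, 1 = known influence-operation content, 0 = ordinary post)
training = [
    ("the election was stolen share before they delete this", 1),
    ("secret lab proof they are hiding the cure", 1),
    ("great turnout at the farmers market this weekend", 0),
    ("city council votes tuesday on the new bike lanes", 0),
]

texts, labels = zip(*training)
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Score incoming posts; high probabilities get human review, not auto-deletion.
incoming = ["they are hiding the proof, share this before it is deleted"]
print(model.predict_proba(incoming)[0][1])  # probability of being infowarfare
```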

Counterpoint 1: Some doubt that governance by nation-states will lead the way to significant, effective change

Some experts said they do not expect that people in the government sector will play a key role in helping to better the digital public sphere. A share of the respondents who do not expect significant improvement of the digital public sphere put the blame mainly on tech companies’ highly effective lobbying of and deep-pocketed influence over government actors. Following is a selection of representative comments from those who were less optimistic about the near future of government influence.

Alexa Raad, chief purpose and policy officer at Human Security, said, “Unfortunately, without significant and fundamental reforms in our system of government, the incentive for politicians is less about public service and transparency and more about holding onto power and reelection. So long as the incentives for government representatives are misaligned with the public interest, we can expect little in the way of meaningful reform. So long as internet services and their delivery continue to get consolidated (think more and more content being pushed into content delivery networks and managed by large infrastructure plays like Amazon Web Services), tech leaders will have greater power to push their own agenda and/or influence public opinion. The incentives for our elected officials are not aligned with public good. There will likely be some regulatory reform, but it will likely not address the root cause.”

Ian Peter, Australian internet pioneer, futurist and consultant, noted, “The reality is that most nation-states are far less powerful than the digital giants, and their efforts to control them have to be watered down to a point where they are often ineffective. There is no easy answer to this problem with the existing world order.”

Miguel Moreno, director of the department of philosophy at the University of Granada, commented, “Major changes will be needed in regulatory frameworks, in antitrust laws, in privacy cultures and in the standardization of guarantees for users and consumers in different countries. But the major platforms’ experience in disseminating services on a global scale does not seem, for now or in the near future, replaceable by any other scheme of activity managed by state institutions.”

Peter Rothman, lecturer in computational futurology at the University of California-Santa Cruz, wrote, “A change of direction would require a significant change of law and it can’t happen in the current political environment. As long as digital spaces and social media are controlled by for-profit corporations, they will be dominated by things that make profits and those things are outrage, anger, bad news and polarized politics. I see nothing happening on any service to change this trajectory.”

An expert on media and information policy responded, “I do, in principle, trust in government and believe in the importance of good government solutions, however, I am concerned that the lack of ability of government to solve important problems will limit its ability to find meaningful solutions that are appropriate to meet the challenges we face. Digital spaces and digital life will continue to be shaped by existing social and economic inequalities, which are at the heart of many of the current challenges and will, for a long time, continue to burden the ability to engage in productive dialogue in digital spaces.”

An AI scientist at a major global technology company said, “I would love to believe in the utopian possibility laid out in the article ‘How to Put Out Democracy’s Dumpster Fire,’ where the equivalent of online town halls and civic societies bring people closer together to resolve our toughest challenges, but I cannot. It’s not just the slow pace of bureaucracy that is to blame; graft and self-interest are largely at play. Historically, the most egregious violators of societal good in their own pursuit of wealth and power have only been curbed once significant regulation has been enacted and government agents then enforced those regulations. Unfortunately, Congress and local governments are run by people who must raise hundreds of thousands to millions of dollars to run for office, be elected, and then stay in office. Lobbyists are allowed to protect the interests of the most-powerful companies, organizations, unions and private individuals because the Supreme Court ruled in favor of Citizens United.

“Money and power protect those with the most to gain. The global wealth gap is the largest in history, and it has only increased during the pandemic, rather than bringing citizens closer to each other’s realities. The U.S. is battered by historic heat waves and storms, and states with low vaccination rates are seeing new waves of COVID-19 outbreaks, yet a significant portion of Americans still deny science. The richest men in the world are using their wealth to send themselves into space for their own amusement while blindly ignoring nations unable to afford vaccines, food and water. Instead of vilifying these men for dodging taxes and shirking any societal responsibility to the people they made their fortunes off of, the media covers their exploits with awe and the government is either incapable or unwilling to get any of the money back that should be going into public infrastructure. How can digital spaces improve when there is so much benefit for those who cause the greatest societal harm while neither government nor society seem capable or willing to stop them?

“Whistleblowers inside powerful companies are not protected. Sexual predators get golden parachutes and move on to cause harm at the next big tech company, startup or university. The evidence that [uses of social media] were at the heart of the two greatest threats to our democracy – the 2016 election … and the Jan. 6 Capitol riot – is overwhelming, but there have been no consequences. Congress puts on a bit of a show and yells at Mark Zuckerberg on TV, but he doesn’t have to worry because no real action will ever be taken. As long as Google and Facebook pay enough, they will continue to recruit the best and brightest minds to ensure that a tiny fraction of white men keep their wealth and power.”

A consultant whose research is focused on youth, families and media wrote, “Without strong governmental regulation, which will not occur, there is no stopping political actors from using any and all possible tools they can to gain advantage and sow division. The drive for maximum private profit on the part of tech industries will prevent them from taking significant action. Foreign entities seek to sow division, create chaos and profit from online disruptions. Diplomacy will not be able to address this sufficiently, and U.S. technological innovation will lag behind.”

An expert in organizational communication commented, “Corporations have taken over the internet. Governments serve corporations and will allow them to do as they wish to profit. Nothing really can be done. Money speaks and the people don’t have the money. The marketplace is biased in favor of profit-making companies.”

A researcher based in Ireland predicted, “Increasing corporate concentration, courts that favor private-sector rights and data use and politicians in the pockets of platforms will make things worse. People who are most made vulnerable in digital spaces will have decreasing power.”

An anonymous activist wrote, “There are too many very powerful public and private interests who control outcomes who have no incentive to make significant changes.”

Counterpoint 2: A portion of experts doubt that reformers have come up with effective solutions and cite a variety of reasons for this point of view

Brooke Foucault Welles, an associate professor of communication studies at Northeastern University whose research has focused on ways in which online communication networks enable and constrain behavior, argued that change via government action is unlikely. She wrote, “I think it is possible for online spaces to change in ways that significantly improve the public good. However, current trends conspire to make that unlikely to happen, including:

  • An emphasis in law and policymaking that focuses on individual autonomy and privacy, rather than systemic issues: Many policymakers have good intentions when they propose individual-level protections and responses to particular issues. However, as a network scientist I know these protections may stem the harm for individuals, but they will never root out the problems. For example, privacy concerns are (as a matter of policy or practice) often dealt with by allowing individuals to opt out of tracking or sharing identifying information. However, data brokers do not need to know the details of a large number of individuals – only a few are needed to accurately infer information about everyone in a network. So, it is my sense that these policies may make people feel as if they are protected when they are likely to not be protected well at all. There should be a shift toward laws and policies that de-incentivize harms to individual autonomy and privacy. For example, laws that prevent micro-targeting, instead only allowing targeted advertising to segments no smaller than some anonymity-preserving size (maybe 10,000 people).
  • Persistent inequalities in the training, recruitment and retention of diverse developers and tech leaders: This has been a problem for at least 30 years, with virtually no improvement. While there has been some public rumbling of late, I see few trends that indicate that tech companies or universities are seriously committed to change. It does not help that many tech companies are, as a matter of policy, not contributing to a tax base that might be used to improve public education, community outreach, and/or research investments that might move the needle on this issue.
  • The increasing privatization of research funding and public-interest data: That makes it virtually impossible to monitor and/or intervene in platform-based issues of public harm or public good. We frankly have no idea how to avoid algorithmic bias, introduce community-building features, handle the deleterious effects of disinformation, etc., because there is no viable way for objective parties to study and test interventions.”
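
Welles’s segment-size floor is among the most directly implementable proposals in this canvassing, since it reduces to a single check before an ad is placed. The sketch below adopts her illustrative figure of 10,000 people; the function names and the rejection behavior are assumptions of the sketch, not part of her proposal.

```python
# Minimal sketch of the minimum-segment-size rule Welles proposes: an ad
# platform refuses any audience smaller than an anonymity-preserving floor.
# The 10,000 floor comes from her example; everything else is illustrative.

MIN_SEGMENT_SIZE = 10_000


def can_target(audience_ids: set[str]) -> bool:
    """Allow targeting only when the segment is large enough that no
    individual can be singled out by the ad criteria."""
    return len(audience_ids) >= MIN_SEGMENT_SIZE


def place_ad(ad: str, audience_ids: set[str]) -> str:
    if not can_target(audience_ids):
        # Too small to target without enabling micro-targeting; the platform
        # could broaden the criteria or reject the campaign outright.
        raise ValueError(
            f"segment of {len(audience_ids)} is below the "
            f"{MIN_SEGMENT_SIZE}-person anonymity floor"
        )
    return f"ad {ad!r} queued for {len(audience_ids)} recipients"
```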

Gary Marchionini, dean and professor at the School of Information and Library Science at the University of North Carolina-Chapel Hill, wrote, “I expect that there will be a variety of national and local regulations aimed at limiting some of the more serious abuses of digital spaces by machines, corporations, interest groups, government agencies and individuals. These mitigation efforts will be insufficient for several reasons: The incentives for abuse will continue to be strong. The effects of abuse will continue to be strong. And each of these sets of actors will be able to masquerade and modify their identity (although corporations and perhaps government agencies will be more limited than machines, individuals and especially interest groups). On the positive side, individuals will become more adept at managing their online social behaviors and cyberidentities.”

Eugene H. Spafford, leading computer security expert and professor of computer science at Purdue University, predicted, “Balkanization due to politics and ideology will still create islands of belief and information in 2035. Some will embrace knowledge and sharing, but too many will represent slanted and restricted views that add to polarization. Material in many of these spaces will be viewed (correctly or not) as false by people whose beliefs are not aligned with it. Governments will be further challenged by these polarized communities in regulating issues of health, finance and crime. The digital divides will likely grow between the haves and have-nots in regard to access to information and resources. Trans-border propaganda and crime will be a major problem.”

Charles Ess, emeritus professor in the department of media and communication at the University of Oslo, said, “I have seen calls and suggestions for what amounts to an internet/social media technology environment that is developed as yet one more form of public good/service by national governments. Treating internet-facilitated communication, including social media, as public goods in these ways might further include both education and legal arrangements that would teach and enforce the distinctions between protected speech that contributes to informed and reasonable civil debate clearly contributing to democratic deliberation, norms, processes, etc. – and nonprotected expression that fosters, e.g., hatred, racism and the stifling of open democratic deliberation. Such a system and infrastructure would thereby avoid at least some of the commercial/competitive drivers that shape so much of current internet and social media use. Ideally, it would develop genuine and far more positive environments as alternatives to the commercially driven version we are currently stuck with. But all of this will depend on foundational assumptions of selfhood, identity and meaning, along with the proper governmental roles vis-à-vis public goods vis-à-vis capitalism, etc., that are largely alien to the U.S. context. It is hard to be optimistic that these underlying conceptions will manage to diffuse and make themselves felt in the U.S. context anytime soon.”

A researcher at the Center for Strategic and International Studies wrote, “Absent external threats or strong regulatory action at the global or European level, the prospects of substantial positive improvement within the U.S. seem dim. There are a number of forces at work that will frustrate efforts to improve digital spaces globally. These include geopolitics, partisan politics, varied definitions and defenses of free speech, business models and human nature. In the West, U.S. technology companies largely dominate the digital world. Their business models are fueled by extracting personal data and targeting advertising and other direct or indirect revenue-generating data streams at users. Because human nature instinctively reacts to negative stimulus more strongly than positive stimulus, feeding consumers/users with data that keeps them on-screen means that they will be fed stimulating, often-divisive data streams. Efforts to change this will be met with resistance by the tech companies (whose business models will be threatened) and by advocates of free speech who will perceive such efforts as limiting freedoms or as censorship. This contest will be fuel for increasingly partisan politics, further frustrating change. These conditions will invite foreign interests to ‘stir the pot’ to keep the U.S. in particular, but Western democracies overall, at war internally and thus less effective globally. The rise of a Chinese-dominated internet environment outside of the West, however, could provide an impetus for more-productive dialogue in the West and more beneficial changes to digital spaces.”

An eminent expert in technology and global political policy observed, “There is insufficient attention paid to risk when assessing digital futures. To date this has enabled substantially positive impacts to take place, but with an underlying undercurrent of constraints on rights, inattention to impacts on (in)equality, environment, the relationships between states/businesses/citizens and many complex areas of public policy. Rapid technological changes, facilitated by market consolidation and a libertarian attitude to innovation (‘permissionless’), can have irreversible impacts before accountability mechanisms can be brought to bear. The pace and potency of these changes are increasing, and there is insufficient will in governments or authority in international governance to address them. There will be substantial gains in some areas of life, though these will be unequally distributed with substantial loss in others. The trajectory of interaction between technology and governance/geopolitics will be crucial in determining that balance, and that future does not currently look good.”

A leading internet infrastructure architect at major technology companies for more than 20 years responded, “Government regulation isn’t going to solve this problem. Governments will step in to ‘solve’ the problem, but their solutions will always move toward increasing government power and toward using these systems for government ends. I don’t see a simple solution to this problem.”

A network consultant active in the Internet Engineering Task Force (IETF) commented, “A glimmer of hope may be found in distributed peer-to-peer applications that are not dependent on central servers. But governments, network service providers and existing social media services can all be expected to be hostile to these. That’s not to say that there will be no change – the internet is constantly changing – but what I don’t currently see is any factor that would encourage people to see their fellow humans in greater depth and to look past superficial attributes. Advertising-supported digital services have an inherent need to encourage engagement, and the easiest way to do that is to promote or favor content that is divisive, promotes prejudice or otherwise stirs up enmity. These are exactly the opposite of what is needed to make the world better. In addition, the internet – which was originally based on open standards not only for its lower-layer protocols but for applications also – is increasingly becoming siloed at the application layer, which results in further division and unhealthy competition. Right now, I don’t know what incentives would encourage a change away from these trends. I have little faith in laws or regulations to have a positive effect, beyond protecting freedom of speech, and there are increasing, naive public demands for both government and tech industries to engage in censorship.”

Natalie Pang, a senior lecturer in new media and digital civics at the National University of Singapore, said, “Although there is now greater awareness of the pitfalls of digital technologies – e.g., disinformation campaigns, amplification of hate speech, polarisation and identity politics – such awareness is not enough to reverse the market dynamics and surveillance capitalism that have become quite entrenched in the design of algorithms as well as the governance of the internet. Broader governance, transparency and accountability – especially in the governance of the internet – are instrumental in changing things for the better.”

A Pacific Islands-based activist wrote, “While the problem of centralisation of the internet to the major platforms is clear to most, solutions are not. Antitrust/monopoly legislation has been discussed for decades but has not been applied. In fact:

  • Corporate concentration has been encouraged by nation-states in order ‘to produce local enterprises that can compete on the world market.’
  • In addition, nation-states have profited from the concentration of communication in platforms in order to have a minimal number of ‘points of control’ and to gain access to the data that they can provide.
  • In addition, some of the proposals aimed at controlling the behaviour of anti-competitive companies seem worse than the problems they are meant to solve, for instance, requiring such companies to censor or not censor, on pain of immense fines – in essence privatising government powers and leaving little to no ability to appeal decisions. This is already in place for copyright in many countries where the tendency is to expand the system to whatever legislators wish for. Governments can then proclaim that it is the companies that are doing the censorship, and companies can state they have no choice because the government required it, leaving citizens who are unfairly censored with little recourse.

“Another related area is the increasing push to limit encryption that is under the control of individual citizens. If states, or companies to which they have delegated powers, cannot read what is being written, filmed, etc., and then communicated, then the restrictions on content proposed will have limited impact. But taking away encryption capabilities from individual citizens leaves them at the mercy of criminals, snoopers, governments, corporations, etc. The initial promise of the internet – to enable ordinary citizens to communicate with each other as freely as the wealthy and/or powerful have been able to in the past – seemed in large part to have been realised. But this seems to have shaken the latter group enough to reverse this progress and again limit citizens’ communication. Time will tell.”

Jessica Fjeld, assistant director of the Cyberlaw Clinic at Harvard’s Berkman Klein Center for Internet & Society, commented, “I have hope for the future of digital spaces because we are rightly beginning to understand the issues as systemic, rather than the result of the choices of individual people and companies. Dealing with threats to democracy, free expression, privacy and personal autonomy becomes possible when we view these issues through a structural lens and governments begin to take ownership of the issue.”

A writer and editor who reports on management issues affecting global business said, “I am not confident the disparate coalition of state, country and international governing bodies needed to correctly influence and monitor commercialized digital public spaces will be able to come to agreement and have enough clout to push back against the very largest and growing larger tech players, who have no loyalty to customer, country or societal norms.”

A professor of political communication based in Hong Kong observed, “Digital technologies will intensify their negative impact on civil society through more-sophisticated micro-targeting, improved deepfake technologies and improved surveillance technologies. Minimizing negative impacts will require government regulation, which is too difficult to accomplish in democracies due to strong lobbying and political polarization. Authoritarian countries, on the other hand, will use these technologies not only to suppress civil society, but also to gain a technological advantage over democracies.”

A veteran investigative reporter for a global news organization said, “The transformation of digital spaces into more-communitarian, responsible fora will happen mostly at the local and regional level in the United States and may not achieve national or global dominance. This presupposes a dim view of the immediate future of the United States, which is in grave danger of breaking up. I believe the same antidemocratic forces that threaten the integrity of the United States as a country also threaten the integrity of digital spaces, the reliability of the information they carry and their political use. I see a global balkanization of the internet in the near term with the potential for eventual international conventions and accords that could partially break down those barriers. But the path may be rocky and even war-studded.”

The general public’s digital literacy will improve and people’s growing familiarity with technology’s dark sides will force change

A portion of respondents to this canvassing said internet users themselves are a big part of the problem. People’s political, social and economic behaviors in digital spaces are threatening others’ identities, agency and rights, according to these experts. Some argue it is the public’s responsibility to learn about the opportunities and threats in digital spaces and apply that knowledge to reduce the dystopic influences of tech applications. These experts push for increased “digital literacy” to help drive a shift in norms so that people are continuously attuned to and ready to adapt to technological change.

Alan S. Inouye, director of the Office for Information Technology Policy at the American Library Association, responded, “The haves-and-have-nots dichotomy will not be about access to technology or information, but rather about the cognitive ability to understand, manage and take advantage of the ever-growing abstractions of digital space. The configuration of digital spaces is greatly influenced by the fundamental forces that shape society. The greater bifurcation of society that developed in the last few decades will continue to 2035. Knowledge workers, often college graduates, will do relatively well; they have the education and ever-improving professional skills that will enable them to navigate the voluminous and complex digital spaces to serve their purposes. Other workers will not do so well, with no replacement for the blue-collar, unionized, factory jobs (and other similar employment) that placed them in the middle class in the 20th century. As the possibilities of digital spaces become increasingly numerous and complex with nuanced interconnections, these workers will have more difficulty in navigating them and shaping them to accommodate their needs. Indeed, there will be increasing opportunities to manipulate these workers through ever more sophisticated technology.”

Amy Zalman, futures strategist and founder of Prescient Foresight, wrote, “I would like to see schools, governments, civil society and businesses participate in better education in general so future generations can apply critical thinking skills to how they live their lives in digital spaces. People should understand how to better evaluate what they see and hear. We need to shape a positive culture on and in digital spaces, starting with simply recognizing they are an extension of our daily lives. There are also many unspoken rules of behavior that help us generally get along with those around us.”

Jesse Drew, a professor of media at the University of California-Davis, urged, “The public must take a lead. I see people shedding their naivete about technology and realizing that they must take a more involved role in deciding how tech will be used in our society. This assumes democracy is able to survive both the perils of right-wing totalitarianism as well as neoliberal surrender to corporations.”

Barry Chudakov, founder and principal at Sertain Research, said, “Today we are struggling to grapple with managing the size and scope of certain tech enterprises. That is presently what proposed reforms or initiatives look like. But going forward we are going to have to dig deeper. We are going to have to think more broadly, more comprehensively. Our educational systems are based on memorization and matriculation norms that are outmoded in the age of Google and a robotic and remote workforce. Churches are built around myths and stories that contain injunctions and perspectives that do not address key forces and realities in emerging digital spaces. Governments are based on laws which are written matrices. While these matrices will not disappear, they represent an older order. Digital spaces, by comparison, are anarchic. They do not represent a new destination; they are a new disorder, a new way of seeing and being in the world. So, to have the biggest impact, reforms and initiatives must start from a new basis. This is as big a change as moving from base 10 arithmetic to base two. We cannot reform our way into new realities. We have to acknowledge and understand them.

“Like pandemics that morph from one variation to another, digital spaces and our behavior in them change over time, often dramatically and quickly. Proof on a smaller scale: In one generation, virtually every teenager in the Western world and many the world over considers a cellphone a bodily appendage as important as their left arm and as vital to existence as the air going through their lungs. In a decade, that phone will get smaller, will no longer be a phone but instead will be a voice prompt in a headset, a streaming video in eyeglasses, a gateway in an embedded chip under the skin. Our understanding of digital spaces will have to evolve as designers use algorithms and bots to create ever more sticky and seamless digital spaces. Nothing here is fixed or will remain fixed. We are in flux, and we must get used to the dynamics of flux.

“The No. 1 initiative or reform regarding digital spaces would be to institute a grammar, dynamics and logic training for digital spaces, effectively a new digital spaces education, starting in kindergarten going through graduate school. This education/retraining – fed and adjusted by ongoing digital spaces research – is needed now. It is as fundamental to society and the public square as literacy or STEM. Spearheading this initiative should be the insistence among technologists and leaders of all stripes that profit and growth are among a series of goods – not the only goods – to consider when evaluating and parachuting a new technology into digital spaces.

“New digital spaces will be like vast cities with bright entertainments and dark areas; we will say we are ourselves in them but we will also be digital avatars. Cellphones caused us to become more alone together (see the work of Sherry Turkle). Emerging digital spaces, which will be much more lifelike and powerful than today’s screens, may challenge identity, may become forces of disinformation, may polarize and galvanize the public around false narratives – to cite just a few of the reasons why a new digital spaces curriculum is essential.

“The nature of identity in digital spaces is intimately involved with privacy issues; with dating and relationship issues; with truth and the fight against disinformation. We think of reforms and initiatives in terms of a slight alteration of what we’re already doing – better oversight of online privacy practices, for example. But to create the biggest impact in digital spaces, we need to understand and deeply consider how they operate, who we are once we engage with digital spaces and how we change as we engage. Example: Porn is one digital space phenomenon that has fundamentally changed how humans on the planet think about and engage in sex and romance. We hardly know all the ramifications. While it appears the negative effects of porn have been exaggerated, the body dysmorphia issues associated with ubiquitous body images in digital spaces have created new problems and issues. These cannot be resolved by passing laws that abolish them. Can we fix hacking or fraud in digital spaces by abolishing them? While that would be a noble intent, consider that it took centuries for the effects of slavery, for example – once abolished – to be recognized, addressed and reconciled (still in process). Impersonation and altering identity are fundamental dynamics of digital spaces. These features of digital spaces enable hacking. We are disembodied in digital spaces, which is a leading cause of fraud. This is not an idle example.”

Evan Selinger, a professor of philosophy at Rochester Institute of Technology, wrote, “Increased platform literacy might be the primary driver for improving digital spaces. Simply put, the idea that widely used platforms aren’t neutral spaces for information to flow freely but are intermediaries that exercise massive amounts of power when deciding how to design user interfaces and govern online behavior has gone from being a vanguard topic for academic researchers and tech reporters to a mainstream sensibility. Indeed, while there are diverse and often conflicting ideas about how to reform corporate-controlled digital spaces to promote public-interest outcomes better, there is widespread agreement that the future of democracy depends on critically addressing, right here and now, central civic issues such as privacy and free speech.”

Alf Rehn, a professor of innovation, design and management at the University of Southern Denmark, said, “The real progress will stem from improvements in media literacy – the capacity of individuals to critically assess claims made in digital spaces – and in social behavior in digital spaces. We are already seeing some positive moves in this direction, particularly among younger groups who are more aware of how digital spaces can be co-opted and perverted, and less gullible when it comes to ‘digital-first falsehoods.’”

An anonymous respondent responded, “The past seven years and recent events have shown us the limits of the early days of a technology and how naive the ‘build it and they will come’ approach to the digital sector was. Hindsight shows that the human species still has a lot to learn about how to use the power of digitally enhanced networking and communications. The unconsidered and unaddressed issues baked into the current form of our digital spaces have been exposed to us more clearly now, especially by the activities of Russia’s Internet Research Agency, which many see as a key causal factor in the political outcomes of Brexit, Trump 2016 and Brazil’s populist swing. These are examples of geopolitical abuse of digital spaces fostering perception manipulation tantamount to mind control. Inequalities in education and access to development pathways for critical thinking skills have set the stage for these kinds of influence campaigns to succeed.”

An expert in marketing and commercialization of machine learning tools commented, “I believe regulators, academics, tech leaders and journalists will develop systems and processes that society will need to partake in and work with to learn how to better communicate and collaborate in digital spaces. At first this will be painful, but it will become normalized and more efficient over time, using greater levels of digital signatures and processes. Means will evolve for communicating the rising complexity associated with digital identity and digital traces, and how information might be used in malicious and inappropriate ways. It is incredibly challenging to simplify and communicate these issues, and to get a vast audience to cognitively process its role in keeping information secure and maintaining a level of accuracy while sharing information.”

A share of these experts say the public’s role should go beyond simply understanding how tech-designed digital spaces come together for good and bad; they say the public has to be digitally savvy so it can more actively lobby for its rights. They also argued that tech companies and governments should invite the public to be more directly involved in shaping and creating better public spaces, advising and motivating government and tech leaders to develop, adopt and continuously evolve the types of digital political, social and economic levers that might help promote a more-positive future for the digital public sphere.

An internet architecture expert based in Europe said, “Some problems may be diminished if citizens are full participants in the governance of digital spaces; if not, the problems can worsen. Citizens must reconquer digital spaces, but this is a long path, like the one toward democracy and freedom. Digital life will improve if the whole population has access to these spaces and digital literacies are learned. It might be useful to create especially targeted digital spaces, governed by appropriate algorithms, for all of the people who want to express and vent their rage.”

Francine Berman, distinguished professor of computer science at Rensselaer Polytechnic Institute, wrote, “Today it is largely impossible to thrive in a digital world without knowledge of and experience with technology and its impacts on society. This knowledge has become a general education requirement for effective citizenship and leadership in the 21st century. And it should be a general education requirement in educational institutions that serve as a last stop before many professional careers, especially in higher education.

“Currently, forward-looking universities are creating courses, concentrations, minors and majors in public-interest technology – an emerging area focused on the social impacts of technology. Education in public-interest technology is more than just extra computer science courses. It involves interdisciplinary courses that focus on the broader impacts of technology – on personal freedom, on communities, on economics, etc. – with the purpose of developing the critical thinking needed to make informed choices about technology. And students are hungry for these courses and the skills they offer.

“Students who have taken courses and clinics in public-interest technology are better positioned to be knowledgeable next-generation policymakers, public servants and business professionals who may design and determine how tech services are developed and products are used. With an understanding of how technology works and how it impacts the common good, they can better promote a culture of tech in the public interest, rather than tech opportunism.”

A professor of sociology and anthropology commented, “Ultimately citizens will demand government regulation that limits the worst downsides of digital spaces. These changes will be supported by increased public awareness and knowledge of digital spaces brought about by both demographic change and better education about such spaces. The key problem is an advertising model which – coupled with socio-psychometric profiling algorithms – incentivizes destructive digital spaces.”

The CEO of a technology futures consultancy said, “As we advance into the Fourth Industrial Revolution – the digital age – there is a heightened focus on digital privacy, digital inclusion, digital cooperation and digital justice across governments, society and academia. This is causing tech companies to face the consequences, hearing and responding to those who loudly advocate for digital safety and having to comply with regulation and guidance and join in sustainable collaborative efforts to ensure tech is trustworthy. The average user in 2035 will not have experienced the world before tech and will have grown up as a tech consumer and data producer.

“I foresee users developing social contracts with tech companies and governments in exchange for their data. This could look like public oversight, with public engagement in efforts and initiatives that require or request public data. I foresee more tech-savvy and data-privacy-oriented elected officials who have a strong background in data advocacy. I believe society will continue to demand trust in the use, collection, harvesting and aggregation of their data. This will diminish misuse. However, law enforcement’s use of data-driven tools to augment their work will continue to present a challenge for everyday citizens.”

Aaron Chia Yuan Hung, associate professor of education technology at Adelphi University, responded, “As much power as technology companies have, they do tend to bend toward the demands of their users. In that sense, I have more hope in the public than in companies. Of course, the public is not a monolithic group and some will want to push digital life in a negative direction (e.g., entities that conduct troll farming, manufactured news, mis/disinformation, etc.). I believe most people don’t want that and will push back, through education, through public campaigning, through political pressure. 2035 will bring about its own problems, of course, and every era can seem dire. It’s hard to imagine what those new concerns would be, just as it was hard to imagine what our current concerns were back in 2005.”

Pia Andrews, an open- and data-driven government leader for Employment and Social Development Canada (ESDC), observed, “What I am seeing is a trend toward the internet bringing out both the best and worst of people. With new technologies creating greater challenges for trust and authenticity, people are starting to get activated and proactive in saying they want to create the sorts of spaces that improve quality of life, rather than naturally allowing spaces to devolve without purpose. This engagement by normal people in wanting to shape their lives rather than waiting to have their lives shaped for them points to a trend of more civic engagement, civil disobedience and activism, and a greater likelihood that digital and other spaces will be designed by humans for good human outcomes, rather than being shaped by purely economic forces that value the dollar over people.”

A director of a research project focused on digital civil society wrote, “Civil society has been and will be playing a key role in raising public awareness, and we are likely to see groups from a wide spectrum of civil society (not just those promulgating digital rights) coming together to confront issues. I imagine there will be growing awareness among the public of the dangers and harms of digital spaces; the main business model of our current digital spaces is advertisement and data extraction. Unless something is done, that – coupled with the rise of political authoritarianism – will continue to shape digital spaces in ways that are harmful and effectively erode trust in democracy and public institutions.”

People will evolve and improve their use of digital spaces and make them better

History shows that people do not stand still when problems in information spaces arise. They learn and they act to change those spaces. A share of these experts predict the same will be true of the digital era. They argue that users will become more adept at using digital spaces, learn how to work around problem areas and move toward collective action when problems become unbearable. Schools will play a role, too, in teaching digital literacy, according to these experts.

Robert Bell, co-founder of Intelligent Community Forum, said, “As long as providers can make big profits from the ‘dumpster fire,’ I don’t expect them to change. But people will evolve, and that takes much more time than just a few years. We will eventually adapt to use digital spaces in more-positive ways. I don’t expect the solution to be technological but in human behavior, as more people have negative experiences with false information, misleading advice and the general-panic level of concern that digital spaces seek to generate.”

Jeremy West, senior digital policy analyst at the OECD, wrote, “I am optimistic that improvements will be made. The fixes won’t all be technical, though. Some of the most effective solutions will be found in education, transparency and awareness. Take awareness, for example – experience with social media grows all the time, and I think we are already seeing embryonic inklings in the general public that perhaps their social media spheres aren’t actually representative of viewpoints in the wider population (or of reality, for that matter). Those inklings may grow, and/or be followed by awareness that sometimes the distortions are intentionally aimed at them. This should, in principle, lead to greater resilience against mis/disinformation.”

A computer science professor said business, governmental and social norms will develop as society’s capacity to understand new digital spaces expands. They predicted, “Digital space will evolve in ways that improve society simply because the 2035 space does not exist now and will develop. Just as with email, I believe a new and better equilibrium can eventually be reached. At present, the governance of digital spaces is limited by our capacity to understand how to deploy these tools and create or manage these spaces. By 2035, that capacity problem will be mitigated at least to some degree. In terms of the management of existing spaces, I anticipate investment will stabilize many of the problems that currently cause worry. Consider email and, to a lesser extent, websites used for things like fraud and malware distribution. Early on, many of the same concerns were prevalent around these spaces, yet today we have new social norms, new governance structures and investment in tools and teams to police these spaces in effective ways. A worrying development is the trans-jurisdictional nature of digital spaces, which might require new agreements to manage enforcement that requires cooperation among many parties. These will emerge, driven by need, as has happened in the management of malware, fraud and spam. In some cases, this will create barriers to accountability or governance. … One worry I have related to the development of online spaces in the next 10 years is the emerging misinformation-as-a-service business model and other new methods of monetizing activity considered malign.”

The founder and chief scientist of a network consultancy commented, “Generational change will make a difference. The vast majority will have had the experience of ‘digitalhood’ by that time; importantly, their parents will have had that experience as well. Issues of veracity will remain, but it is to be hoped that people’s consumption will be better tempered. The real remaining issue will be one that has existed in the physical world for centuries: closed (and self-isolating) communities. The notion of ‘purity of interaction’ will still exist, as it has in various religious-/cultural-based groups. The ‘Plymouth Brethren’ of the internet has arrived, and managing that tribalism and its antagonistic actions will remain a challenge. It is clear that it will not be a smooth ride and that both society and individuals will suffer in mental and physical ways. However, it is my hope that people will adapt and learn to filter and engage constructively. That said, I have seen low-level mental illness in very intelligent individuals explode into full-fledged ‘QAnon-ness,’ so I can only say that this is a hope, not something I can evidence.”

Zak Rogoff, a research analyst at the Ranking Digital Rights project, wrote, “In 2035 … most people will have more control and understanding of algorithmic decision-making that affects them in what we currently think of as online spaces. I also feel that physical space will be more negatively impacted, in ways that online space is today, for example through the reduction of privacy due to ubiquitous AI-powered sensor equipment.”

John L. King, a professor at the University of Michigan School of Information, said, “It’s a matter of learning. As people gain experience with these technologies, they learn what’s helpful and what’s not. Most people are not inclined toward malicious mischief – otherwise there would be a lot more of it. A few are inclined toward it, and of course, they cause a lot of trouble. But social regulation will evolve to take care of that.”

A Southeast Asia-based expert on the opportunities and challenges of digital life responded, “Technologies do not determine culture. Instead, they allow people to more easily see divides that already exist. The new generation of digital media users came of age at a time when the internet promised to them an alternative to ‘mainstream’ culture – new digital economies, certainly, and special prices and products only available online – and the application of this sales pitch to information has been initially unhealthy. … In coming years, the disruptive effects of these new conversations will be minimized. Users will accustom themselves to having conversations with others, and content providers will be better able to navigate the needs of their audiences.”

An associate professor whose research focuses on information policy wrote, “I believe in the good in human nature. I also believe that humans, in general, are problem solvers. The use of digital spaces currently is a problem, particularly for civil communication and, hence, democracy, but it is a problem we can address. Raising younger generations to think critically and write kindly would be a good start to changing norms in digital spaces.”

Charles Anaman, founder of waaliwireless.co, based in Ghana, said, “While the media tends to rally to the negatives (because the public tends to react to that kind of information), the reality is that better conversations are now taking place in real-life interactions in digital spaces. When better conversation can be had – discussing ideas without shaming the ‘ignorant’ – society will benefit greatly in the long term, rebuilding trust. It will be a slow process.

“It is taking us a while to realise that we have been manipulated by wealthy entities playing off all sides to achieve their own goals. Transparency has been a farce for some time. Reality is fueling a new wave of breaking down digital silos to develop better social awareness and a review of facts to understand the context and biases of the sources being used. Cybersecurity, as it is being taught now, is going to have to be applied with the understanding that all attack tools can be misused (NSO tools/Stuxnet/et al.) to cause real-world damage in unexpected ways. Open-source solutions for proactive security, based on trustless authentication, can and should be applied to all online resources to develop better collaboration tools.”

Counterpoint: Some of these experts do not think that the general public will become more savvy or that ‘literacy’ will be enough

A share of these experts believe people’s critical-thinking skills are in decline in the digital age; some said they doubt that effective digital literacy education about the ins and outs of the light and dark areas of rapidly changing digital spaces will improve digital discourse.

Kent Landfield, a chief standards and technology policy strategist with 30 years of experience, noted, “Critical thinking is what made Western societies able to innovate and adapt. The iPhone phenomenon has transformed our society into one of lookup instead of learning. With that fundamental way of looking at the world no longer being mastered, the generations that follow may become driven by simple herd mentality. The impact of social media on our society is dangerous, as it propels large groups of our populations to think in ways that do not require original thinking. Social media platforms are ‘like or dislike’ spaces that foster conflict, causing these populations to be more susceptible to disinformation, either societal or nation-state. ‘Us versus them’ is not beneficial to society at all. The days of compromise, constructive criticism and critical thinking are passing us by. Younger generations’ minds are being corrupted by half-truths and promises of that which can never be achieved.”

An angel and venture investor who previously led innovation and investment for a major U.S. government organization commented, “The educational system is not creating people with critical-thinking skills. These skills are essential for separating what is real from what is fake in any space. Further, the word fake has become, itself, fake. So, we’re creating a next generation of digital consumers/participants who are not prepared to separate reality from fantasy. Lastly, state actors and nonstate actors are rewarded by and wish to continue to take advantage of this disconnect. The disconnect will continue to affect politics, social norms, education, health care and many other facets of society.”

A professor of political science and expert in e-government and technology policy noted, “Because these digital spaces are forms of mass communication in which extremists share space with groups promoting the public interest, the views of extremists are easily spread and digested by the public and often appear to be quite legitimate. I see these digital spaces as becoming even more commonplace for political extremists, especially white power and antidemocratic groups. Government is always behind the curve in dealing with these types of groups, and internet governance tends to take a hands-off or ad hoc approach. I don’t think things will change for the better. I can’t say I have the answers on how to counter this.”

A researcher, educator and international statesman in the field of medicine responded, “Our current uses of technology have not contributed to a better society. We are ‘always on,’ ‘present but absent,’ ‘alone in the company of others’ and inattentive. Many of the problems in the digital sphere are simply due to the ways humans’ weaknesses are magnified by technology. People have always faced challenges developing meaningful relationships, and conspiracy theories are not new. Digital technology is a catalyst. There has been a change in our communication parameters and there are cyber effects. The biggest burden is on educators to help each generation continue to develop psychologically and socially.

“When trying to use this technology to communicate, too many fail to consider others and appreciate differences. Many messages are performances and not part of building anything together. Too many people are compulsive users of this technology. Many have moved from overuse to compulsive use and from compulsive use to addiction. We have invented terms to describe our attempts to control our behavior – technology deprivation, technology detox or internet vacations are expressions suggesting people are becoming more mindful of their use.

“Many have not used the technology to be responsive to others. To ask meaningful questions, to offer nonverbal communication that encourages others to continue talking, or even to use a paraphrase to signal or check understanding and to confirm others has always been difficult because it requires focusing outside oneself and on others. Now, too many post a comment and leave the field, and too many cannot seem to provide that third text (A’s message, B’s response, A’s response) in the stream that indicates closure on even the most simple task coordination. Many create dramatic messages that are variations of ‘pay attention to me’ while failing to pay attention to others! …

“I am afraid we are losing our sense of appropriateness, disclosure and intimacy in an era of disposable relationships. We are using our limited time and mental capacity to ‘keep in touch’ or ‘lurk.’ There are more than 22,000 YouTube channels with over a million subscribers each. There are a lot of people online to be entertained and to relieve ‘boredom’ instead of developing a network of meaningful relationships. …

“Civic engagement has had a resurgence, and people have used technology to develop activist networks. However, these will be temporary manifestations unless people form sustainable groups aimed at accomplishing renewable goals. Otherwise, these efforts will fade. Instead, people seem to have found like-minded people to confirm their biases, creating consequent social identities that dominate individuals’ personal identities.

“Most online conflict about public issues becomes ego-defensive or dramatic declarations instead of simple conflict recognizing differences and solving problems. All of this has brought many people to confuse their sense of reality. We live in a hybrid world in which our technologies have become indispensable. We seem to have lost our ability to discriminate events, news, editorials or entertainment. Indeed, some have lost their ability to discriminate simulated and virtual experiences from the rest of their lives. Advances in artificial intelligence encourage this trend. …

“There is very little that business leaders or politicians can do beyond modeling behaviors and limiting abuses associated with general use. ‘Alternate facts’ and repeated efforts to explain away what the rest of us can see and hear do not help. Using the internet to attack scientists, educators, journalists and government researchers creates the impression that all reports and sources of reports are equally true or false. Some people’s facts are more validated and reliable than others. Confirmation bias and motivated reasoning are the problems here. When the population begins to reject the garbage, there will be less of it, but this will take a while since so many have staked their sense of themselves on different positions.”

New internet governance structures will appear that draw on collaborations among citizens, businesses and governments

The most promising initiatives will be those in which the business, governmental, academic and civil society sectors work together with the public to solve problems, according to a number of these expert respondents. Some suggest that this work could be enabled by funding from a coalition of industry, government and philanthropies. Some are hopeful this can happen but say it will require change in the ethics and ethos of tech, in the venture capital funding model underlying tech and in the hierarchical structure of governance, which typically tips toward serving the needs of the power elite.

Paul Jones, emeritus professor of information science at University of North Carolina-Chapel Hill, urged, “Technologists have to learn to think politically and socially. Politicians have to learn to think about technology in a broader way. Both will have grown up with these problems by 2035 and will have seen and participated in the construction of the social, legal and technical environments. From that vantage point, the likelihood of being able to strike a balance between control and social and individual freedoms is increased. Not perfected but increased. The hard work of regulation and of societal norms is to allow for benefits from new technologies to grow and spread while restricting the detriments and potential harms.”

William Lehr, an associate research scholar at MIT’s Computer Science & Artificial Intelligence Laboratory with more than 25 years of internet and telecommunications experience, wrote, “We need to adapt both our society and our technology because digital spaces are changing the nature of public life and of being human. The rise of fake news is one obvious bad outcome, and if post-truth discourse continues, things will get worse before they can get better. The fixes will require joint effort across the spectrum from technologists to policymakers. There is the potential for digital spaces to produce public goods, but also potential for the opposite. Neither outcome is a foregone conclusion. Digital spaces will be a critical part of our future in any case, and that future will be either mostly good or mostly bad, but a future without digital spaces is unrealistic.”

Lucy Bernholz, director of Stanford University’s Digital Civil Society Lab, said, “The current situation in which a handful of commercial enterprises dominate what is thought of as ‘digital spaces’ will crash and burn, not of its own accord but because the combined weight of climate catastrophe and democratic demise will force other changes that ultimately lead to a re-creation of the digital sphere. The path to this will be painful, but humans don’t make big changes until the cost of doing so becomes less than the cost of staying the same. The collapse of both planetary health and democratic governance is going to require collective action on a scale never before seen. Along the way, the current centralized and centralizing power of ‘tech companies’ will expand, along with autocracy. Both will fail to address the needs of billions of people and, in time, be undone. Whether this will all happen by 2035, who knows. Just as climate change is compressing geologic time, digital consolidation is compressing political time. It’s possible we’ll push through both the very worst of our current direction and break through to a more pluralistic, less centralized, participatory set of governing systems – including digital ones – in 14 years. If not, and we only go further down the current path, then the answer to this question becomes a NO.”

Ginger Paque, an expert in and teacher of internet governance with the Diplo Foundation, observed, “Today’s largest problems are not all about digital issues. They are all human issues, and we need to – and we will – start tackling important human issues along with their corresponding online facets. Addressing health (COVID-19 for the moment), climate change, human rights and other critical human issues is vital. The internet must become a tool for solving species-threatening challenges. 2035 will be a time of doing or dying. To continue a negative trend is unthinkable, and how we imagine and use the internet is what we will make our future into. The internet is no longer a separate portion of our lives. Online and offline have truly merged, as shown by the G7 proposal for a minimum corporate tax of 15% for the world’s 100 largest and most profitable companies with minimum profit margins of 10%; it involves tech giants like Google, Amazon and Facebook, and this was undertaken in consideration of digital issues.”

Wendell Wallach, senior fellow with the Carnegie Council for Ethics in International Affairs, commented, “The outstanding question is whether we will actually take significant actions to nudge digital life toward a more-beneficial trajectory. Reforms that would help:

  1. Holding social media companies liable for harms caused by activities they refuse, or are unable, to regulate effectively.
  2. Shifting governance away from a ‘cult of innovation’ where digital corporations and those who get rich investing in them have little or no responsibility for societal costs and undesirable impacts of their activities. The proposed minimum 15% tax endorsed by the G7/G20 is a step in the right direction, but only if some of that revenue is directed explicitly toward governing the internet, and ameliorating harms caused by digital life, including the exacerbation of inequality fostered by the structure of the digital economy.
  3. Development of a multistakeholder network to oversee governance of the internet. This would need to be international and include bottom-up representation from various stakeholder groups including consumers and those with disabilities. This body, for example, might make decisions as to the utilization of a portion of the taxes the G7/G20 said should be collected from the digital oligopoly.”

A program officer for an international organization focused on supporting democracy said, “We should not underestimate the ability of the public and civil society to innovate positive changes that will incentivize constructive behavior and continue to provide crucial space for free expression. The COVID-19 pandemic has demonstrated that digital connectivity is more important to societies around the world than ever. Western tech platforms, for all their faults, are making an effort to be more receptive and responsive to civil society voices in more-diverse settings. In particular, there is growing recognition that voices from the global south need to be heard and involved in discussions about how platforms can better respond to disinformation and address privacy concerns.

“Civil society and democratic governments need to be more involved in global internet governance conversations and in the standards-setting bodies that are making decisions about emerging technologies such as artificial intelligence and facial recognition. If civil society sectors unite around core issues related to protecting human rights and free expression in the digital sphere, I am cautiously optimistic that they can effect a certain degree of positive change. One major area of concern relates to the role of authoritarian powers such as China, Russia and others that are redesigning technology and the norms surrounding it in ways that enable greater government control over digital technologies and spaces. We should be concerned about how these forces will affect and shape global discussions that affect platforms, technologies and citizen behavior everywhere.”

A senior economic analyst who works for the U.S. government wrote, “Over time, society in its broadest sense will develop government policies, rules and laws to better govern digital space and digital life.”

A professor whose research is focused on civil society and elites responded, “It may not be too late to take corrective steps, but it will require a highly coordinated set of actions by stakeholders (e.g., government, intelligence agencies, digital intermediaries and platforms, mainstream media, the influence industry – PR, advertising, etc. – educators and citizens). We will likely need supra-national regulation to steer things in the right direction and fight the current default settings and business models of dominant social media platforms. Throughout, we need to be alert and guard against the negatives that can arise from each type of stakeholder intervention (especially damage to human rights). There are numerous social and democratic harms arising from what we could term the ‘disinformation media ecology’ and its targeted, affective deception. It impacts negatively on citizenship and citizens in fundamental ways. These include attacks on:

  • Our shared knowledge base – Can we agree on even the most basic facts anymore?
  • Our rationality – Faulty argumentation is common online, as evidenced by conspiracy theorists.
  • Our togetherness – Social media encourage tribalism, hate speech and echo chambers.
  • Our trust in government and democratic institutions and processes – Disinformation erodes this trust.
  • Our vulnerabilities – We are targeted and manipulated with honed messages.
  • And our agency – We are being nudged, e.g., by ‘dark design’ and influenced unduly.”

A director with an African nation’s regulatory authority for communications said, “It is very important that all members of society play an equal role in devising and operating the evolving framework for the governance of digital spaces. Most services – both economic and social – will be delivered through digital platforms in 2035. … The current environment, in which digital social media platforms are unregulated, will be strongly challenged. The dominance of developed countries in the digital space will also face a strong challenge from developing countries.”

Terri Horton, work futurist at FuturePath, observed, “The challenges lie in bridging the global digital divide, reducing equity gaps, governing privacy, evolving ethical use and security protocols and rapidly increasing global digital and AI literacy. Mitigating these challenges will require substantial collaborative interventions that merge private and public industries, governments and global technology organizations. The desire to create a future that is equitable, inclusive, sustainable and serves the public good is human. I believe that desire will persist in 2035. The growth and expansion of novel digital spaces and platforms will enable people across the globe to use them in positive ways that drive the energy and combustion for improving the lives of many and creating a future that serves society. In the future, people will have more choices and opportunities to leverage AI, ML, VR and other technologies in digital spaces to improve how they work, live and play; amplify passions and interests; and drive positive societal change for people and the planet.”

A computer science professor based in Japan said, “Although the internet as a technology is already about 50 years old, its use in society at large is much more recent, and in terms of society adapting to these new uses, including the establishment of laws and general expectations, this is a very short time span. Tech leaders will have to invest in better technology to detect, dampen and cull aggressive/negative tendencies on their platforms. Such understanding may only be possible with the ‘help’ of some laws and public pressures that penalize the tolerance of overly negative/aggressive tendencies. Figuring out how to apply such pressure without leading to overly strict limitations will require extreme care and inventiveness. Education will also have to play quite a role in making sure that people value true communications more than negative clickbait.”

An executive with an African nation’s directorate in finance for development wrote, “It would be utopian of us to underestimate the impact of the lack of ethics in the assembly of certain technologies. They can cause disasters of all kinds, including exacerbated cyberterrorism. People must collaborate to put in place laws and policies that have a positive impact on the evolution of the digital ecosystem. Existing technologies will be regularly adapted and reworked to offer what is possible. Teleworking and medical assistance at home will become generalized. By 2035, the digital transformation of space will be obvious in all countries of the world, including poor countries. The mixing of scientific knowledge and the opening up of open-access data in the world will be an opportunity for progress for all peoples. The transparency imposed by the intangible tools of artificial intelligence can make public service more available than it has ever been in the past.”

Sam Lehman-Wilzig, professor and former chair of communications at Bar-Ilan University, Israel, commented, “As with most new technologies that have significant social impact, the beginning is full of promise, then the reality sets in as it is misused by malevolent forces (or simply for self-aggrandizement), and ultimately there is societal pushback or technological fixes. Regarding social media, we seem to be now in the latter stage as policymakers are considering how to ‘reform’ its ecology and as public pressure grows for additional self-supervision by the social media companies themselves. I also expect the educational establishment will enter the fray with ‘media literacy’ education at the grade school and high school level. As a result of all these, I envision some sort of ‘balance’ being reached in the near future between free speech and social responsibility.”

Peter Padbury, a Canadian futurist who has led hundreds of foresight projects for federal government departments, NGOs and other organizations, wrote:

  1. “Artificial intelligence will play a large role in identifying and challenging mis- and disinformation.
  2. There could be a code of conduct that platforms use and enforce in the public interest.
  3. There could be a national or, ideally, international accreditation body that monitors compliance with the code.
  4. Reputable service providers could then block the non-code-compliant platforms.
  5. The education system has an important role to play in creating informed citizens capable of critical thinking, empathy and a deep understanding of our long-term, global, collective interest.
  6. Politicians have a very important role to play in informing, acting and supporting the long-term, global, public interest.”

Alejandro Pisanty, professor of internet and information society at the National Autonomous University of Mexico (UNAM), said, “By 2035 it is likely that there will be ‘positive’ digital spaces. In them, ideally, there will be enough trust in general to allow significant political discussion and the diffusion of trustworthy news and vital information such as health-related content. These are spaces in which digital citizenship will be exerted in order to enrich society. This is so necessary that societies will build it, whatever the cost.

“However, this does not mean that all digital spaces will be healthy, nor that the healthy ones will be the ones we have today. The healthy spaces will probably have a cost and be separated from the others. There will continue to be veritable cesspools of lies, disinformation, discrimination and outright crime. Human drivers for cheating, harassment, disconnection from the truth, ignorance, bad faith and crime won’t be gone in 15 years. The hope we can have is that enough people and organizations (including for-profit) will push the common good so that the positive spaces can still be useful. These spaces may become gated, to everyone’s loss. Education and political pressure on platforms will be key to motivating the possible improvements.”

An internet pioneer working at the intersection of technology, business/economics and policy predicted, “Digital spaces will be even more ubiquitous in 2035 than today, so I hope we won’t even have to think about ‘am I online or not?’ by then. That’s only not creepy if it’s a positive experience. I don’t think we’re going to get there through policing or enforcement by technology, technology companies or governments. I do think we need support from all of those as well as public support for improved discourse, but there is no magic bullet, and there is nothing to enforce. What will help is having some level of accountability and a visible history of all interactions in digital spaces for identifiable individuals and for organizations.”

Better civic life online will arise as communities find ways to underwrite accurate, trustworthy public information – including journalism

Some respondents argued that all sectors of society must work quickly now to create an effective strategy to defeat the digital “infodemic” and rein in the spread of mis- and disinformation. They said support for accurate journalism and global access to fact-based public information sources is essential to help citizens responsibly participate in democratic self-governance.

Alexander B. Howard, director of the Digital Democracy Project, wrote, “Just as poor diets and sedentary lifestyles affect our physical health, today’s infodemic has been fueled by bad information diets. We face intertwined public health, environmental and civic crises. Thousands of local newspapers have folded in the last two decades, driving a massive decline in newsroom employment. There is still no national strategy to preserve and sustain the accountability journalism that self-governance in a union of, by and for the People requires – despite the clear and present danger data voids, civic illiteracy and disinformation merchants pose to democracy everywhere.

“Research shows that the loss of local newspapers in the U.S. is driving political polarization. As outlets close, government borrowing costs increase. The collapse of local news and nationalization of politics is costing us money, trust in governance and societal cohesion. Information deprivation should not be any more acceptable in the politics of the world’s remaining hyperpower than poisoning children with lead through a city water supply. A lack of shared public facts has undermined collective action in response to threats, from medical misinformation to disinformation about voter fraud or vaccination to the growing impact of climate change. 

  1. Investors, philanthropists, foundations and billionaires who care about the future of democracy should invest in experiments that rebuild trust in journalism. They will need to develop, seed and scale more-sustainable business models that produce investigative journalism that doesn’t just depend upon grants from foundations and public broadcasting corporations – though those funds will continue to be part of the revenue mix.
  2. Legislatures and foundations should invest much more in digital public infrastructure now, from civic media to public media to university newspapers. News outlets and social media platforms should isolate viral disinformation in ‘epistemic quarantines’ and inject trustworthy information into diseased media ecosystems, online and off. Community leaders should inspire active citizenship at the state and local level with civics education and community organizing. Congress should fund a year of national service for every high school graduate, tied to college scholarships.
  3. Congress should create a ‘PBS for the Internet’ that takes the existing Corporation for Public Broadcasting model and reinvents it for the 21st century. Publishers should build on existing public media and nonprofit models, investing in service journalism connected to civic information needs. Journalists should ask the ‘people formerly known as the audience’ to help them investigate. State governments should subsidize more public access to publications and the internet through libraries, schools and wireless networks, aiming to deploy gigabit speeds to every home through whatever combination of technologies gets the job done. Public libraries should be renovated and expanded to provide digital and media literacy programs and nonpartisan information feeds to fill data voids left by the collapse of local news outlets.
  4. The U.S. government, states and cities should invest in restorative information justice. How can a national government that spends hundreds of billions on weapon systems somehow have failed to provide a laptop for each child and broadband internet access to every home? It is unconscionable that our governments have allowed existing social inequities to widen in 2020. Children were left behind by remote learning, excluded from the access to information, telehealth, unemployment benefits and family leave that will help them and their guardians make it through this pandemic.

“By 2035, we should expect digital life to be both better and worse, depending on where humans live. There will be faster, near-universal connectivity – for those who can afford it. People who can pay to subscribe will be able to browse faster, without ads or location and activity tracking. The poor will trade data for access, data that will be used by corporations and insurance companies unless nations overcome massive lobbying operations to enact data protection laws and enforce regulations. Smartphones will evolve into personalized virtual assistants we access through augmented reality glasses, health bands, gestural or spoken interfaces and information kiosks. Information pollution, authoritarianism and ethnonationalism supercharged by massive surveillance states will pose immense risks to human rights. Climate change will drive extreme weather events and migration of refugees both within countries and across borders. Unless there are significant reforms, societal inequality will destabilize governments and drive civil wars, revolutions and collapsed states. Toxic populism, tribalism and nativism antagonistic to democracy, science and good governance will persist and grow in these darkened spaces.”

Courtney C. Radsch, author and free-expression advocate, said, “The decline in the concept of truth and a shared reality is only going to be worsened by the increasing prevalence of so-called deepfake videos, audio, images and text. The lack of a shared definition of reality is going to make democratic politics, public health, journalism and myriad aspects of life more challenging.”

Stowe Boyd, founder of Work Futures, predicted, “Decreasing the amplification of disinformation is the most critical aspect of what needs to be done. Until that is accomplished, we are at risk of growing discord and division. Policymakers – elected officials, legislatures, government agencies and the courts – must take action to counter the entrenched power of today’s social platforms. The coming antitrust war with major platform companies – Facebook and its competitors – will lead to more and smaller social media companies with more-focused communities and potentially lessened commercial goals. That will diminish the amplification potential of social media and will likely lead to better ways to root out disinformation.”

Scott Santens, senior advisor at Humanity Forward, commented, “We really have no choice but to improve digital spaces, so ‘no’ isn’t an option. We are coming to realize that the internet isn’t going to fix itself and that certain decisions we made along the way need to be rectified. One of those decisions was to lean on an ad-driven model to make online spaces free. This was one of the biggest mistakes. In order to function better, we need to shift toward a subscription model and a data ownership model, and in order for that to happen, we’re going to need to make sure that digital space users are able to afford many different subscriptions and are paid for their data. That means potentially providing digital subscription vouchers to people in a public-funded way, and it also means recognizing and formalizing people’s legal rights to the data they are generating.

“Additionally, I believe universal basic income will have been adopted by 2035 anyway, which itself will help pay for subscriptions, help free people to do the unpaid work of improving digital spaces, and perhaps most importantly of all, reduce the stress in people’s lives, which will do a lot to reduce the toxicity of social media behavior. The problem of disinformation and misinformation will also require investments in the evolution of education, to better prepare people with the tools necessary to navigate digital spaces so as to better determine what is false or should not be shared for other reasons, versus what is true or should be shared for other reasons. We can’t keep teaching kids as we were once taught. A digital world is a different place and requires an education focused on critical thinking and information processing versus memorization and information filing.”

Melissa Sassi, the Global Head of IBM Hyper Protect Accelerator, said, “Media misinformation and disinformation are two of the largest challenges of our time. Current trends in social media raise significant concern about the role that access to user-shared information on a platform plays in causing strife around the world, strife that can drive genocide, authoritarianism, bullying and crimes against humanity. It is equally concerning when governments shut down internet connectivity or access to specific sites to curtail dissent or adjust the narrative to benefit their own political party and/or agenda.”

Craig Newmark, the founder of Craigslist, now leading Craig Newmark Philanthropies, observed, “Social media becomes a force mainly for good actors when the platforms (and mass media) no longer amplify disinformation. I hope for this by 2035.”

Brooke Foucault Welles, an associate professor of communication studies at Northeastern University whose research has focused on ways in which online communication networks enable and constrain behavior, commented, “The current consolidation of media industries – including new media industries – leaves little room for alternatives. This is an unstable media ecosystem and unlikely to allow for, much less incentivize, major shifts toward the public good. There is, by fiduciary duty, little room for massive, consolidated media companies to serve the public good over the interests of their investors.”

Andy Opel, professor of communications at Florida State University, wrote, “As with all systems of social control and surveillance, capillary, bottom-up resistance builds and eventually challenges the consolidation of power. We are seeing that resistance from both ends of the political spectrum, with the right calling for regulation of social media to prevent the silencing of individual politicians while the left attempts to respond to the viral spread of misinformation. Both groups recognize the dangers posed by the current media-ownership landscape and, while their solutions differ, the social and political attention on the need for media reform suggests that a digital bill of rights is likely to become a major issue in near-term election cycles.”

Daniel S. Schiff, a Ph.D. student at Georgia Tech’s School of Public Policy, responded, “There is reason for moderate hopefulness about the fight against misinformation. While I don’t expect public news media literacy or incentives to change dramatically, social media platforms may have enough technical and platform-control tools to mitigate certain issues like bot accounts and viral spreading of untrustworthy sources. Significant research and pressure, along with compelling examples of actions that can be taken, suggest improvements are available. However, this positive transformation for some is complicated by the willingness of unscrupulous actors, authoritarian governments and criminal groups to promote misinformation, particularly for the many countries and languages that are less well monitored and protected. Further, it is not clear whether a loss of participants from mainstream social media platforms to more fringe/radical platforms would increase or decrease the spread of misinformation and polarization overall. Deepfakes and plain old fake news are likely to (continue to) have significant purchase with large portions of the global population, but it is possible that platforms will be able to minimize the most harmful misinformation (such as misinformation promoting violence or genocide) especially around key periods of interest (such as elections). For a portion of the world, then, I would expect the misinformation problem to improve, though only in the more well-regulated and high-income corners. However, deepfakes could throw a wrench in this. It is unclear whether perpetrators or regulators will stay ahead in the informational battle.”
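
The “technical and platform-control tools” Schiff refers to can be made concrete. One commonly discussed example is a virality circuit breaker that pauses algorithmic amplification of a fast-spreading post until it can be reviewed. The sketch below is a minimal, hypothetical Python illustration; the class name, window size and threshold are invented for this example and do not describe any real platform’s system.

```python
# Hypothetical sketch of one "platform control tool": a virality circuit
# breaker. Names, thresholds and logic are illustrative only, not drawn
# from any real platform.
import time
from collections import deque

RESHARE_WINDOW_SECONDS = 600   # count reshares over a 10-minute window
RESHARE_VELOCITY_LIMIT = 500   # pause amplification beyond this rate

class ViralityCircuitBreaker:
    def __init__(self) -> None:
        self._reshares: dict[str, deque] = {}   # post_id -> reshare timestamps
        self._held_for_review: set[str] = set()

    def record_reshare(self, post_id: str, now: float | None = None) -> None:
        now = time.time() if now is None else now
        times = self._reshares.setdefault(post_id, deque())
        times.append(now)
        # Discard reshares that have fallen out of the sliding window.
        while times and now - times[0] > RESHARE_WINDOW_SECONDS:
            times.popleft()
        if len(times) > RESHARE_VELOCITY_LIMIT:
            # The post is spreading unusually fast: stop recommending it
            # until a human or a classifier has looked at it.
            self._held_for_review.add(post_id)

    def may_amplify(self, post_id: str) -> bool:
        return post_id not in self._held_for_review
```

The point of such a breaker is not to judge truth directly but to slow the viral spread of untrustworthy sources long enough for review, especially around the key periods, such as elections, that Schiff highlights.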

Aaron Falk, senior technical product manager at Akamai Technologies, was among those who suggested that any improvement in the tone of the digital public sphere is likely to require that those who share information have an accountable identity – no more anonymity. He commented, “Pervasive anonymity is leading to the degradation of online communications because it limits the accountability of the speaker. By 2035, I expect online fora will require an accountable identity, ideally one that still permits users to have multiple personas.”
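
Falk’s idea of an accountable identity that still permits multiple personas resembles pseudonymous-credential designs. The toy Python sketch below, with invented names, shows the basic shape under simple assumptions: the platform verifies a person once, escrows a per-account secret and derives public persona handles from it, so personas look unrelated to other users yet remain traceable to one accountable identity. A production system would use anonymous-credential cryptography rather than a platform-escrowed keyed hash.

```python
# Toy illustration of "accountable identity with multiple personas."
# All names are hypothetical; a real scheme would use anonymous
# credentials, not a platform-escrowed HMAC key.
import hashlib
import hmac
import secrets

class VerifiedAccount:
    def __init__(self, legal_identity: str) -> None:
        self.legal_identity = legal_identity    # verified once at signup
        self._key = secrets.token_bytes(32)     # escrowed by the platform
        self.personas: dict[str, str] = {}      # public handle -> persona name

    def create_persona(self, persona_name: str) -> str:
        digest = hmac.new(self._key, persona_name.encode(), hashlib.sha256)
        handle = digest.hexdigest()[:16]        # public, unlinkable handle
        self.personas[handle] = persona_name
        return handle

account = VerifiedAccount("Jane Q. Citizen")
work_handle = account.create_persona("professional")
hobby_handle = account.create_persona("gaming")
# Outsiders cannot link the two handles, but the platform can hold the
# one verified identity accountable for speech under either persona.
assert work_handle != hobby_handle
```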

Counterpoint: Many doubt that leaders and the public can come together on these issues

There is pushback against this enthusiasm. Many respondents said that while they hope for, wish for and even expect to see people come together to work on all of these important issues, they do not anticipate significant positive changes in the digital public sphere by 2035. A small selection of their responses is shared here; many more are included in the next section of this report, which illuminates four more themes.

Joseph Turow, professor of media systems and industries at the University of Pennsylvania, said, “Correcting this profound problem will require a reorientation of 21st-century corporate, national and interpersonal relationships that is akin to what is needed to meet the challenge of reducing global warming. There are many wonderful features of the internet when it comes to search, worldwide communication, document sharing, community-oriented interactions and human-technology interconnections for security, safety and health. Many of these will continue apace. The problem is that corporate, ideological, criminal and government malefactors – sometimes working together – have been corrupting major domains of these wonderful features in ways that are eroding democracy, knowledge, worldwide communication, community, health and safety in the name of saving them. This too will continue apace – unfortunately often faster and with more creativity than the socially helpful parts of our internet world.”

Kate Carruthers, chief data and insights officer at the University of New South Wales-Sydney, observed, “Digital spaces will not magically become wholesome places without significant thought and action on the part of leaders, and U.S. leadership is either not capable or not willing to make the necessary decisions. Given the political situation in the U.S., any kind of positive change is extremely unlikely. All social media platforms should be regulated as public utilities and then we might stand a chance for the growth of civil society in digital spaces. Internet governance is becoming fragmented, and countries like China and Russia are driving this.”

Ivan R. Mendez, a writer and editor based in Venezuela, responded, “The largest danger is no longer the digital divide (which still exists and is wider in 2021, after the pandemic); the largest danger is the further conversion of the public into large, easily marketable digital herds. The evolution of digital spaces into commercialized platforms poses new challenges. The arrival of agile big tech players with proposals that connect quickly with the masses (who are then converted into customers) gives them a large amount of influence in governments’ internet governance discussions. … Other important internet stakeholders – entities entrusted with representing the internet ecosystem and working for the betterment of networks through organized cross-sector discussions, such as the Internet Governance Forum (IGF) – have not gained enough authority in governments’ governance discussions; they are not given any input and have not been allowed to participate in or influence global or nation-state digital diplomacy.”

Richard Barke, an associate professor in the School of Public Policy at Georgia Tech, wrote, “Communications media – book publishers and authors, newspaper editors, broadcast stations – have always been shaped by financial forces. But for most of our history there have been delays between the gathering of news or the production of opinions and the dissemination of that information. Those delays have allowed (at least sometimes) for careful reflection: Is this true? Is this helpful? Can I defend it? Digital life provides almost no delay. There is little time for reflection or self-criticism, and great amounts of money can be made by promulgating ideas that are untrue, cruel or harmful to people and societies. I see little prospect that businesses, individuals, or governments have the will and the capacity to change this. … The meme about crying fire in a crowded theatre might become a historical relic; there is a market for selling untruths and panics, even if they cross or skirt the line between protected speech and provocation. Laws and regulation can’t keep up, and many possible legal remedies are likely to confront conflicting interpretations of constitutional rights.”

An expert in urban studies based in Venezuela observed, “The future looks negative because it is not sufficiently recognized that the current business model of the digital world – the convergence of nanotechnology, biotechnology, information technology and cognitive science (NBIC) plus AI – creates and promotes inequalities that are an impediment to social development. Ethical values that should safeguard the rights of citizens and the various social groups require further review and support based on broad consultations with the multiple stakeholders involved.”

A North America-based entrepreneur said, “It seems clear that digital spaces will continue to trend toward isolationist views and practices that continue to alienate groups from one another. I foresee a further splintering and divide among class, race, age, politics and most any other measures of subdivision. Self-centered views and extreme beliefs will continue to divide society and erode trust in government, and educational and traditional news sources will continue to diminish. We will continue to see an erosion of communication between disparate groups.”
