The following respondents wrote contributions that consider a wide range of issues tied to humans’ future in the digital age.
Andy Opel, professor of communications at Florida State University, wrote, “The fall of 2022 introduced profound changes to the world with the release of OpenAI’s ChatGPT. Five days later, over a million users had registered for access, marking the fastest diffusion of a new technology ever recorded. This tool, combined with a myriad of text-to-image, text-to-sound and voice-transcription generators, is creating a dynamic environment that is going to present new opportunities across a wide range of industries and professions.
“These emerging digital systems will become integrated into daily routines, assisting in everything from the most complicated astrophysics to the banality of daily meal preparation. As the proliferation of access to collected human knowledge spreads, individuals will be empowered to make more informed decisions, navigate legal and bureaucratic institutions, and resolve technical problems with unprecedented speed and accuracy.
“AI tools will reshape our digital and material landscapes, disrupting the divisive algorithms that have elevated cultural and political differences while masking the enormity of our shared values – clean air, water, food, safe neighborhoods, good schools, access to medical care and baseline economic security. As our shared values and ecological interdependence become more visible, a new politics will emerge that will overcome the stagnation and oligarchic trends that have dominated the neoliberal era.
“Out of this new digital landscape is likely to grow a realization of the need to reconfigure our economy to support what the pandemic revealed as ‘essential workers,’ the core elements of every community worldwide: farmers, grocery clerks, teachers, police and fire, service industry workers, etc. Society cannot function when its foundational professions cannot afford homes in the communities they serve.
“This economic realignment will be possible because of the digital revolution taking place at this very moment. AI will both eliminate thousands of jobs and generate enough wealth to provide a basic income that will free up human time, energy and ingenuity. Through shorter work weeks and a move away from the two-parent income requirement to sustain a family, local, sustainable communities will reconnect and rebuild the civic infrastructure and social relations that have been the base of human history across the millennia.
“Richard Nixon proposed a universal basic income in 1969 but the initiative never made it out of the Senate. Over half a century later, we are on the precipice of a new economic order made possible by the power, transparency and ubiquity of AI. Whether we are able to harness the new power of emerging digital tools in service to humanity is an open question. I expect AI will play a central role in assisting the transition to a more equitable and sustainable economy and a more accessible and transparent political process. …
“AI and emerging digital technologies have a wide range of possible negative impacts, but I want to focus on two: the environmental impact of AI and the erosion of human skills.
“The creation of the current AI tools from GPT-3 to Stable Diffusion and other text-to-image generators required significant amounts of electricity to provide the computing power to train the models. According to MIT Technology Review, over 600 metric tons of CO2 were produced to train GPT-3. This is the equivalent of over 1,000 flights between London and New York, and this is just to train the AI tool, not to run the daily queries that are now expected among millions of users worldwide.
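The flight comparison above is easy to sanity-check. A minimal sketch of the arithmetic, assuming roughly 0.6 metric tons of CO2 per passenger for a one-way London-New York flight (a commonly cited rough figure; this per-passenger value is an assumption, not from the source):

```python
# Back-of-envelope check of the training-emissions comparison in the text.
# Assumed value (not from the source): ~0.6 metric tons CO2 per passenger,
# one-way London-New York.
TRAINING_EMISSIONS_T = 600   # metric tons CO2 to train the model, per the text
PER_FLIGHT_T = 0.6           # assumed tons CO2 per passenger flight

equivalent_flights = TRAINING_EMISSIONS_T / PER_FLIGHT_T
print(f"~{equivalent_flights:.0f} passenger flights")
```

Under that assumption the figure works out to roughly 1,000 passenger flights, consistent with the claim; a different per-passenger estimate would scale the equivalence accordingly.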
“In addition, GPT-3 is just one of many AI tools that have been trained and are in use, and that number is expanding at an accelerating rate. Until renewable energy is used to run the server farms that are the backbone of every AI tool, these digital assets will have a growing impact on the climate crisis. This impact remains largely invisible to citizens who store media in ‘the cloud,’ too often forgetting the real cloud of CO2 that is produced with every click on the screen.
“The second major impact of emerging digital media tools is the ephemeral nature of the information and the vulnerability of this information. … As our reliance on digital tools grows – from the simplicity of spell checking to the complexity of astrophysics – our collective knowledge is increasingly stored in a digital format that is vulnerable to disruption. At the same time, the ubiquity of these tools is seductive, allowing the unskilled to produce amazing visual art or music or simulate the appearance of expertise in a wide range of subject areas.
“The growing dependence on this simulation masks the physical skills that are being stripped out, replaced by expertise in search terms and prompt writing skills. This is accelerating a trend that has been in place for many years as people moved from the physical to the digital. Without the mechanical skills of hammers and wrenches, planting and compost, wiring and circuits, entire populations become dependent on a shrinking pool of people who actually do things. When the power goes out, the best AI in the world will not help.”
Tom Valovic, journalist and author, wrote, “AI and ChatGPT are major initiatives of a technocratic approach to culture and governance which will have profound negative consequences over the next 10 years. If there’s one dominant theme that’s emerged in my many years of research, it’s parsing the ingrained tension between the waning humanities and the rising technology regimes that absorb us.
“It’s impossible to look at these trends and their effects on our social and political life without also including Silicon Valley’s push toward transhumanism. We see this in the forward march of AI in combination with powerful momentum toward the metaverse. That’s another contextual element that needs to be brought in. I see the limitations of human bandwidth and processing power to be problematic. I worry about the implications of an organic, evolving, complex, adaptive, networked system that may route around slow human processors and take on an existence of its own. This is an important framework to consider when imagining the future.
“When we awake from this transhumanist fever dream of human perfection that bears little resemblance to the actual world we’ve managed to create, I think steady efforts at preserving the core values of the humanities will have proved prescient. This massive and imposed technological infusion will be seen as a chimera. Perhaps we’ll even learn how to use some of it wisely.
“I do think that AI is going to force some sort of omega point, some moment of truth past this dark age where the necessary balance between technology, culture and the natural world is restored. Sadly, it’s a question of how much ‘creative destruction’ is needed to arrive at this point. With luck (and effort) I believe there will be a developing understanding that while hyper-technology appears to be taking us to new places, in the long run it’s actually resurrecting older, less desirable paradigms, a kind of cultural sleight of hand (or enantiodromia?).
“I found this observation from Kate Crawford, cofounder of the AI Now Institute at NYU, to be useful along these lines: ‘Artificial intelligence is not an objective, universal or neutral computational technique that makes determinations without human direction. Its systems are embedded in social, political, cultural and economic worlds shaped by humans, institutions and imperatives that determine what they do and how they do it. They are designed to discriminate, to amplify hierarchies and to encode narrow classifications.’
“If ChatGPT thinks and communicates, it’s because programmers and programs taught it how to think and communicate. Programmers have conscious and unconscious biases and possibly, like any of us, faulty cognitive assumptions that necessarily get imported into platform development. As sophisticated as that process or program is or will become, it is still capable of the unintended consequences of human error – error that presents itself to the end user as machine-based as sequences propagate. These errors can be hidden and perpetuated in code. If at some point the system learns on its own (and I’m just not familiar enough with its genesis to know if that’s already the case), then it will be fully capable of making and communicating its own errors. (That’s the fascinating part.)
“In the current odd cultural climate, we’re all hungry to go back to a world where the ‘truth’ was not so maddeningly malleable. The idea of truth as some sort of objective reality based on purely scientific principles is, in my opinion, a chimera and an artifact of our Western scientific materialism. And yet we still keep chasing it. As Thomas Kuhn pointed out in his books on the epistemology of science, scientific knowledge is to a large extent a social construct, and that’s a fascinating rabbit hole to go down.
“As we evolve, our science evolves. In that sense, no machine, however sophisticated, will ever be able to serve as some kind of ultimate arbiter of what we regard as truth. But we might want to rely on these systems for their opinions and ability to make interesting connections (which is, of course, the basis for creative thinking) or not leave important elements of research out (which happens all the time in academic and scientific research, of course). But the caution is not to be seduced by the illusion of these systems serving up true objectivity. The ‘truth’ will always be a multifaceted, complex, socially constructed artifact of our very own human awareness and consciousness.
“The use of sophisticated computer technology to replace white-collar and blue-collar workers has been taking place for quite a while now. It will become exponentially greater in scope and momentum going forward. The original promise of futurists back in the day (the 1960s and ’70s) was that automation would bring about the four-day work week and eventually a quasi-utopian ‘work less/play more’ social and cultural environment. What they didn’t factor in was the prospect of powerful corporations latching onto these new efficiencies to feather their nest to the exclusion of all else and the lack of appropriate government oversight as a result of the merging of corporate and government power.”
Jonathan Taplin, author of “Move Fast and Break Things: How Google, Facebook and Amazon Cornered Culture and Undermined Democracy,” commented, “Wendell Berry once wrote, ‘It is easy for me to imagine that the next great division of the world will be between people who wish to live as creatures and people who wish to live as machines.’ This is my greatest fear. From the point at which the technological Singularity was first proposed, the marriage of man and machine has proceeded at a pace that even worries the boosters of artificial general intelligence (AGI).
“I understand that Peter Thiel would like to live to 200, but that possibility fills me with dread. And the notion that AI (DALL-E, GPT-3) will create great ORIGINAL art is nonsense. These programs assume that all the possible ideas are already contained in the data sets, and that thinking merely consists of recombining them. Our culture is already crammed with sequels and knockoffs. AI would just exacerbate the problem.
“We are mired in a culture of escapism – crypto fortunes, a fantasy life lived seven hours a day in the metaverse, colonies on Mars. The dreams of Elon Musk, Marc Andreessen, Peter Thiel and Mark Zuckerberg are ridiculous and dangerous. They are ‘bread and circuses’ put forth by hype artists at a time when we should be financing the transition to a renewable energy economy, instead of spending $10 trillion on a pointless Martian Space Colony.”
Allison Wylde, senior lecturer at Glasgow Caledonian University and team leader on the UN Global Digital Compact Team with the Internet Safety, Security and Standards Coalition, said, “To help us try to look forward and understand possible futures, two prominent approaches are suggested: examining possible future scenarios and learning from published works of non-fiction and fiction. I’d like to merge these approaches here.
“Royal Dutch Shell has arguably led on the scenario approach since the 1960s. For scenario development, as a starting point, an important consideration concerns the framing of any question. Next, the question opens out by asking ‘what if?’ to help consider possible futures that may be marginal possibilities.
“From published literature, fiction and non-fiction, a recent research project examining robots in the workplace concluded that society may experience gains and/or losses. From classical literature, as William Shakespeare suggested (or perhaps cautioned), consequences are rooted in past actions; ‘what’s past is prologue.’ What can we take from this?
“If we look back to the time of the invention of the World Wide Web by Tim Berners-Lee, we see the internet started out as a space of openness and freedom. During the Arab Spring, citizens created live-streamed material that acted both as a real-time warning of threats from military forces, and as a record of events. Citizens from other countries assisted. Outside help is also being offered via online assistance today in the conflict between Russia and Ukraine. Is this one possible future: open and sharing?
“Alternative futures, for instance those predicted by H.G. Wells at the end of the 19th century, suggest that we are being watched by intelligences greater than ours, ‘with intellects vast and cool and unsympathetic.’ While we humans are ‘so vain and blinded by vanity that we couldn’t comprehend that intelligent life could have developed; so far … or indeed at all.’ Right now, we can see around us the open-source community developing AI-enhanced tools designed to help us; DALL-E, ChatGPT and Hugging Face are examples of such work. At the same time, malicious actors are turning these tools against us.
“Currently AI is viewed as a binary: good or bad. So, are we facing a binary problem, with two possible avenues? Or are our futures with AI, and indeed as with the rest of our lives, more complex, with multiple and interlinked possibilities? In addition, from literature (in particular, fiction), is the future constantly shifting – appearing and disappearing?
“At this point in time, the United Nations is shaping the language for a Global Digital Compact (GDC) that calls for a trusted, free, open and secure internet, with trust and trust-building as a central and underpinning foundation. Although the UN calls for trust and trust-building, it is silent on the mechanisms for achieving them. Futures discussed here are but possibilities. The preliminary insights of those working toward a widely accepted GDC share common threads: the importance of looking beyond good and bad, recognising the past and the present, and being alert – and thus well-prepared and well-resourced – to participate in and anticipate multiple possible futures.
“Arguably, just what and who will be in our futures may be more complex than we can imagine. Kazuo Ishiguro in the novel ‘Klara and the Sun’ paints yet another picture: a humanoid robot, pining for the attention of a human and seeking comfort in the ‘hum of a fridge.’ This image may chime with the views of a Google staffer fired in 2022 for suggesting that AI chatbots may already be sentient. And while such systems may be like children ‘who want to help the world,’ their creators need to take responsibility, as illustrated by the drive toward the use of explainable AI (XAI). (As a final note, Mary Shelley was not invoked.)”
Karl M. van Meter, author of “Computational Social Science in the Era of Big Data,” commented, “At this period in the development of digital technology I am both excited and concerned. That attitude will probably evolve with time and future developments. My major concerns are with governance and the environment. Given hominine ingenuity, proven over millions of years, and the current economic pressure for new developments – including in technology – the fundamental question is ‘how will our societies and their economies manage future technological developments?’ Will the economic and profit pressure to obtain more and more personal data with new technology continue to generate major abuses and override individuals’ wishes for privacy? This is a question of governance and not of technology and new technological developments. It is up to humanity.
“In my own scientific research, the vast availability of information and contacts with others has been a major advantage and has resulted in great progress, but the same technologies have given voice and assistance to those creating serious obstacles to such progress, increasingly bringing ideological extremism into daily [life] in both developed and less-developed countries.”
Warren Yoder, longtime director at Public Policy Center of Mississippi, now an executive coach, said, “As the 21st century picks up speed, we are moving beyond a focus on the protocol-mediated computation of the Internet. The new focus is on computation that acts upon itself, not yet with autonomous agency, but certainly moving in that direction. Three beneficial changes stand out for the medium-term promise they offer: machine learning, synthetic biology and the built world.
“ChatGPT and other large language models command most of the attention at the moment because they speak our languages. Text, images and music are how we communicate with each other and now, with computation. But machine learning offers much more. It promises to revolutionize math and science, disrupt the economy and change the way we produce and engage information. Educators are rethinking how they teach. Many of the rest of us will realize soon that we must do the same.
“COVID-19 vaccines arrived in the nick of time, a popular introduction to the potential of synthetic biology. Drug discovery, mRNA treatments for old diseases, modifying the immune system to treat autoimmune disorders and many other advances in synthetic biology promise dramatically improved treatments.
“Adding computation to the built environment is generally called the Internet of Things. But that formulation does not at all prepare the imagination for the computational changes we are now experiencing in our physical world. Transportation, manufacturing, even the normal tasks of everyday life will see profound gains in efficiency.
“Haunting each of these beneficial changes are the specters of gross misuse, both for the entrepreneur class’s vanity and for big-business profit. We could lose not only our privacy, but also our freedom of voice and of exit. Our general culture is already adapting. Artists quickly protested the appropriation of their freely shared work to create the machine learning tools that could replace them. We do not generally acknowledge the speed of culture change, which happens even faster than technology change. Culture slurps tech with its morning coffee.
“Governance, on the other hand, is a messy business. The West delegates initial governance to the businesses that own the tech. Only later do governments try to regulate the harmful effects of tech. The process works poorly, but authoritarian regimes are even worse. In the medium-term, how well we avoid the most harmful effects of machine learning, synthetic biology and the built world depends on how well we cobble together a governance regime. The pieces are there to do an adequate job in the United States and the European Union. Success is anyone’s guess.”
Valerie Bock, principal at VCB Consulting, wrote, “We are going to go through a period of making serious mistakes as we integrate artificial intelligence into human life, but we can emerge with a more-sophisticated understanding regarding where human judgment is necessary to modify any suggestions made by our artificially intelligent assistants. Just as access to search engines and live mapping has made life better informed and more efficient for those of us privileged enough to have access to them, AI, too, will help people make better decisions in their daily lives.
“It is my hope that we will also become more sophisticated in our use of social networks. People will become aware of how they can be gamed, and they will benefit from stronger regulations around what untruths can be shared. We will also learn to make better use of our access to the strongest thinkers in our personal social circles and in the wider arenas in our societies.
“By 2035, I am hopeful that our social conventions will have adapted to the technological advances which came so quickly. Perhaps we will instruct our personal digital assistants to turn off their microphones when we are dining with one another or entertaining. We will embrace the basket into which our smartphones go when we are having face-to-face interactions at work and at home. There will be a whole canon of sound advice regarding when and under what circumstances to introduce our children to the tech with which they are surrounded. I’m hopeful that that will mean practicing respectful interaction, even with the robots, while understanding all the reasons why time with real people is important and precious.
“I was once an avid fan of the notion that markets will, with appropriate feedback from consumers, adjust to serve human welfare. I no longer believe that to be true. Decades of weakening governmental oversight have not served us. Technology alone cannot serve humanity. We need people to look out for one another, and government is a more likely source of large-scale care than private enterprise will ever be.
“I fear that the tech industry ethos that allows new technologies to be released to the public without serious consideration of potential downsides is likely to continue. Humans are terrible at imagining how our brilliant inventions can go wrong. We must commit to regulation and adequately fund regulators in a way that allows them the capacity to keep abreast of developments and encourage industry to better pre-identify the unexpected harms that might emerge when they are introduced to society. If not, we could see a nightmarish landscape of even worse profiteering in the face of real human suffering.”
Ray Schroeder, senior fellow at the University Professional and Continuing Education Association, said, “The dozen years ahead will bring the maturing of the relationship between human and artificial intelligence. In many ways, this will foster equity through enhanced education, access to skill development and broader knowledge for all – no matter the gender, race, where people live or their economic status.
“Education will be delivered through AI-guided online adaptive learning for the most part in the first few years, and more radical ‘virtual knowledge’ will evolve after 2030. This will allow global reach and dissemination without limits of language or disability. The ubiquity of access will not limit the diversity of topics that are addressed.
“In many ways, the use of AI will allow truths to be verified and shared. A new information age will emerge that spans the globe.
“Perhaps the most impressive advances will come with Neuralink-type connections between human brains and the next evolution of the internet. Those without sight will be able to see. Those without hearing will be able to hear. And all will be able to connect to knowledge by just tapping the connected internet through their virtual memory synapses. Virtual learning will be instant. One will be able to virtually recall knowledge into the brain that was never learned in the ways to which we are accustomed. Simply think about a bit of information you need, and it will pop into your memory through the connected synapses. The potential for positive human impact for brain-implanted connectivity is enormous, but so too is the potential for evil and harm.
“The ethical control of knowledge and information will be of the utmost importance as we move further into uses of these digital tools and systems. Truth is at the core of ethics. Across the world today, there seems to be a lower regard for truth. We must change this trend before the power of instant and ubiquitous access to knowledge and information is released. My greatest concern is that politics will govern the information systems. This may lead to untruths, partial truths and propaganda being disseminated in the powerful new brain-connected networks. We must find ways to enable AI to make judgments of truth in content, or at least allow for access to the full context of information that is disseminated. This will involve international cooperation and collaboration for the well-being of all people.”
John Hartley, a research professor in media and communications at the University of Sydney in Australia, predicted, “The most beneficial changes will come from processes of intersectional and international group-formation, whereby digital life is not propounded as a species of possessive individualism and antagonistic identity, but as a humanity-system, where individuality is a product of codes, meanings and relations that are generated and determined by anonymous collective systems (e.g., language, culture, ethnicity, gender, class).
“Just as we, the species, have begun to understand that we live in a planetary biosphere and geosphere, so we are beginning to feel the force of a sense-making semiosphere (Yuri Lotman’s term), within which what we know of ourselves, our groups and the world is both coded and expressed in an open, adaptive, complex system, of which the digital is itself a technological expression. At present, the American version of digital life is the libertarian internet as a soft-power instrument of U.S. global cultural hegemony. The direction-of-travel of that system is toward the reduction of humanity to consuming individuals; digital affordances to an internet of shopping; and human relations to corporate decisions.
“Within that setup, users have, however, discovered their own interlinked identities and interests and have begun to proliferate across platforms designed for consumerism, not as market influencers but as intersectional activists. A paradigm example is Greta Thunberg. A lone teenager, she showed the world that innovation can come from anywhere in a digital system, and that collective action is possible to imagine at planetary scale to address a human-made planetary crisis.
“Looking forward, ordinary users are becoming conscious of their own creative agency and are looking for groups in which worldbuilding can be shared as a group-forming change agency. Thus, intersectionality, collective action and planetary or species-level coding of the category of ‘we’ are what will be of great benefit in digital life, to address the objective challenges of the Anthropocene, not as a false and singular unity of identity, but as a systemic population of difference, each active in their own sphere to link with common cause at group-level.
“At the same time, they are becoming more conscious of their individual ignorance in the context of cultural, political and economic multiplicity. Digital literacy includes recognition of what you don’t know. Knowledge is already driven by power and antagonism, but the developing understanding of how the system works both negatively and positively is another emergent benefit of digital literacy at humanity scale. However, incumbent powers, both political and commercial, are propagating stories in favour of conflict. These are now weaponized strategic forces; audiences, viewers, players and consumers are encouraged to forget they are citizens, the public of humanity-in-common, and to cast themselves as partisans and enemies whose self-realization requires the destruction of others. The integration of digital life into knowledge, power and warfare systems is already far advanced. By 2035 will it be too late to self-correct?”
Seth Finkelstein, principal at Finkelstein Consulting and Electronic Frontier Foundation Pioneer Award winner, wrote, “AI has arrived. I’ve seen many cycles of AI hype. I’m going out on a limb and saying, this time, it’s real. It now passes a key indicator which signals an actual technological advance: The Porn Test – do people use this to create pornography, and are the results appealing? The outcome here isn’t ringing a bell, it’s blaring a siren, the technology has reached a point where consumer applications are being built. Further, there’s another reliable key indicator which is evident: The Lawyer Test – are expensive corporate lawyers suing over this? When professional hired guns start shooting at each other, that usually indicates they’re fighting over something significant.
“Now, this has nothing to do with the scary AI bogey beloved of writers who dress up a primal monster in a science-fiction skin. Rather, there have been major breakthroughs in the technology which have advanced the field, which will ultimately be truly world changing. And I have to reaffirm my basic realism that we won’t be getting utopia (the Internet sure didn’t give us that). But we will be getting many benefits which will advance our standard of living.
“To just give a few examples: Even as I type this, at the very start of the development, I’m seeing practical tools which significantly improve the productivity of programmers. I don’t believe it will replace programmers as a profession. But there’s going to be a shift where some of the bottom level coding will be as obsolete as the old job of manually doing calculations.
“Entertainment is going to undergo another major improvement in production quality. I’m not going to make the silly pundit prediction of ‘democratization,’ because that never works, for economic reasons. But I will point out the way CGI (Computer Generated Imagery) changed movies and animation, and AI will take that to another level.
“We’re already seeing experiments with new ways of searching. Search has always been an arms-race between high-quality information versus spammers and clickbait farms. That’ll never stop because it’s human nature. But the battlefield has just changed, and I think it’ll take a while for the attackers to figure out how to engage with heavy artillery being rolled out.
“There’s a whole set of scientific applications which will benefit. Medical diagnostics, drug discovery, anything to do with data analysis. Essentially, we can currently generate and store a huge amount of data. And now we have a new tool to help make sense of it all. While pundits who predict the equivalent of flying cars are not justified, that shouldn’t cause us to ignore that both flying (commercial air transportation) and cars (mass produced automobiles) had profound effects over the last century.
“Nowadays, I’m deeply troubled by how much just trying to keep my digital footprint low is starting to make me feel like I’m an eccentric character in a dystopian SF novel (quirkily using ‘by Stallman’s beard!’ as an exclamation – a reference to Richard M. Stallman, who has been relentlessly arguing about freedom and technology for decades now). Every item I buy, every message I send, every physical place I go, every e-book I read, every website I browse, every video I watch … there’s a whole system set up to record it.
“When we think of the world of the book ‘1984,’ I believe one aspect which has been lost over the years is how the idea of the telescreen was, for the time, extremely high tech. Television wasn’t even widespread when it was written. Who would have thought that when such technology arrived, people would be eager to have telescreens installed in their homes for the consumer benefits? We consider the phrase ‘Big Brother’ to be chilling. But in that fictional world, maybe to an apolitical person it has a meaning more like ‘Alexa’ or ‘Siri.’
“There was a fascinating moment this year, when just after the U.S. Supreme Court overturned nearly 50 years of federal protection of abortion rights, the chattering class had a brief realization that all this surveillance could be extremely useful for the enforcement of anti-abortion laws. There’s a principle that activists should try to relate global concerns to people’s local issues. But it was very strange to me seeing how this huge monitoring system could only be considered in terms of a ‘hot take’ in politics (‘Here’s this One Weird Trick’ which could be used against pregnant women seeking an abortion). And then the glimmer of insight just seemed to disappear.
“I wish I knew more about how this was playing out in China or Singapore or other places which fully embrace such governmental population controls. The little bit I’ve read about the Chinese ‘social credit’ system seems to outline a practical collaboration of government and corporate power that is very disturbing.
“By Stallman’s beard, I worry!”
William L. Schrader, advisor to CEOs, previously co-founder of PSINet, wrote, “I am disappointed with humankind and where it has taken the internet. I hope the dreams we old Internet folks had – the dreams that kept us sleeping soundly after working 18 hours a day, seven days a week, to build the greatest communications system ever – do come true. So far there have been good and bad outcomes.
“1) Health and scientific advances moving twice or three times faster
“This is not limited to big pharmaceuticals; it is focused on many massive improvements. One is fully remote surgeries in small towns without doctors, with only lightly trained medical assistants or one registered nurse on site. This would include all routine surgical procedures. For more complex surgeries, the patient would still need to be flown or driven hundreds of miles and could possibly die in the process. This would be global so that we all had access, not just the rich. THAT is what we imagined in 1985 and before. It only takes really outstanding robotic 3D motion equipment installed in a surgical suite that is maintained by the local team; high bandwidth supporting the video for the expert surgeon in a big medical center and the robotic controls from the expert’s location to the surgical site; and a team on both sides that is willing to give it a try and not get hung up on insurance risk. This must involve participants from multiple locations. This is not simply a business opportunity for a startup to assemble (the equipment is almost there, with the software and the video). This is a life saver.
“2) Truth beating fascism is now required
“We built this commercial Internet to stop the government from limiting the information each of us could access. We imagined that only a non-government-controlled Internet would enable that outcome. Freedom for all, we thought. FALSE. Over the past decade or so political operatives in various parts of the world have proven that social media and other online tools are excellent at promulgating fear and accelerating polarization. Online manipulations of public sentiment rife with false details that spread fear and create divisiveness have become a danger to democracy. I would like the Internet, the commercial Internet, to fight back with vigor. What Internet methods, what technologies, what timing – all of that remains to be seen. But people (myself included) understand it is time to build strong countermeasures. We want all sides to be able to talk openly.
“3) Climate change and inflation receive a lot of attention in the press for both Main Street and Wall Street
“Looking at inflation, I have trust in our financial balancing system: the Federal Reserve Board, the thousands of brilliant analysts worldwide who watch its movements using the latest online tools and, of course, other nations’ central banks, which are just as in tune as ours even if, like ours, they are a bit focused on their own country. Inflation will resolve itself. Climate change, however, will not be solved. Not by politicians of any persuasion, not by the largest power companies, not by the latest gadgets in electric vehicles (EVs), not by carbon-capture technology and possibly not by anything. That could result in the end of the planet supporting Homo sapiens. Alternatively, the commercial Internet could encourage the 2 to 4 to 6 billion people who use it to not drive for one hour and to turn off all electricity for the same hour – essentially a unified strike to tell the elected, appointed, monarchs or autocrats in charge of, or part of, the government of every country that the time has come to do something so our grandchildren can survive. Only the Internet can do this. Please, someone start and support these movements.
“4) Science tells us that we MUST expect more pandemics
“Bill Gates has stated it clearly and funded activities that promise to help. We must stop listening to ‘it’s over’ or ‘it’s not any worse than a cold’ when our beloved grandparents have died or expect to if they mingle with their children’s children. In total, over 6.7 million people have died. In the last year, 85% of the dead were elderly (older than 65) in all countries (rich and poor). If only the commercial Internet could band together to convince those people who don’t believe in pandemics or don’t care about their grandparents to stop voting or to die from COVID-19, or the next one that comes along. Yes, this is a positive statement. There is a way for the Internet to persuade naysayers to stay away from the elderly or shop when they do not.
“5) War in Ukraine and Russia will expand beyond Ukraine whether it ‘loses’ or ‘wins’
“The Internet can continue to support the tens of thousands of Ukrainian voices – videos showing hundreds of indictable war crimes by the head of Russia who started the war a year ago. The Internet can communicate from any one person to any other one person or to millions. The truth matters. Lives are being lost hourly on all sides, all because we fail to say something or do something.”
Deirdre Williams, an independent internet governance consultant, responded, “There will be a great saving of time as digital systems replace cumbersome paper-based systems. There will be better planning facilitated by better records. Data collection will improve. Weather forecasting will become more precise and accurate. What we have here is an opportunity to advance global equity and justice but, judging by what has happened in humanity’s past, it is unlikely that full advantage of the opportunity will be taken. In regard to human rights, digital technology will abet good outcomes for citizens. The question is, which citizens? The citizens of where?
“Humanity is becoming more selfish and individualistic. Or rather a portion of humanity is, and sadly, while it may be a minority, it has a loud and wide-ranging voice and a great deal of influence. More and more, people seem to live on ‘hype’ – an excitement which depends on neither fact nor truth, but only on the extremity of the sensation. This is shared and amplified by the technology. It isn’t just a space that allows people individual freedom of expression; it is also a space in which some people encourage or seek homogenisation. The movement toward ‘binary thinking’ rules out the middle way, although there are and should be many middle ways, many ‘maybes.’ Computers deal with 1 and 0, yes and no, but people are not computers. Binary human thinking is doing its best to turn people into computers.
“Subtleties are being eroded, so that precise communication becomes less and less possible. Reviewing history, it is apparent that humanity is on a pendulum swinging between extremes of individualism and community. Sometimes it seems that the period of the swing is shortening; it certainly seems that we are getting closer to the point of return now, but it is difficult to stand far enough back so as to be able to get a proper view of the time scale.
“When the swing reverses, I expect we’ll all be more optimistic because, as someone said during the Caribbean Telecommunications Union’s workshop on legislative policy for the digital economy last week, the PEOPLE are the heart, soul and everything in the digital world. Without the people, the technology has no meaning.”
Russell Neuman, professor of media technology at New York University, wrote, “We can expect to see artificial intelligence as complementing human intelligence rather than competing with it. We tend to see AI as an independent agent, a robot, a willful and self-serving machine that represents a threat because it will soon be able to outsmart us. Why do we think that? Because we see things anthropomorphically. We are projecting ourselves onto these evolving machines.
“But these machines can be programmed to complement and augment human intelligence rather than compete with it. I call this phenomenon evolutionary intelligence, a revolution in how humans will think. It is the next stage as our human capacities co-evolve with the technologies we create. The invention of the wheel made us more mobile. Machine power made us stronger. Telecommunication gave us the capacity to communicate over great distances. Evolutionary Intelligence will make us smarter.
“We tend to think of technology as ‘out there’ – in the computer, in the smart phone, in the autonomous car. But computational intelligence is moving from our laptop and dashboard to our technologically enhanced eyes and ears. For the last century, glasses helped us to see better, and hearing aids improved our hearing. Smart glasses and smart ear buds will help us think better. Imagine an invisible Siri-like character sitting on our shoulder, witnessing what we witness and from time to time advising us, drawing on her networked collective experience. She doesn’t direct, she advises. She provides optimized options based on our explicit preferences. And, given human nature, we may frequently choose to ignore her good advice no matter how graciously suggested.
“Think of it as compensatory intelligence. Given our history of war, criminality, inhumanity, ideological polarization and simple foolishness, one might be skeptical that Siri’s next generations would be able to make a difference in our collective survival. Much of what has plagued our existence as humans has been our distorted capacity to match means with ends.
“Unfortunately, among other things, we’ve gotten good at fooling ourselves. It turns out that the psychology of human cognitive distortions is actually quite well understood. As humans, we systematically misrepresent different types of risk, reward and probability. We can computationally correct for these biases. Will we be able to design enhanced decision processes so that demonstrably helpful and well-informed advice is not simply ignored? Our survival may depend on it.”
Charles Ess, emeritus professor of ethics at the University of Oslo, said, “In the best-case scenario, more ethically informed approaches within engineering, computer science and so on promise to be part of the package of developments that might save us from the worst possibilities of these emerging technologies. My brief paraphrase from the executive summary of the first edition of the IEEE paper: These communities should now recognize that the first priorities in their work are to design and implement these technologies for the sake of human flourishing and planetary well-being, protecting basic human rights and human autonomy – over the current focus on profit and GNP.
“There would be real grounds for optimism if this thinking should catch further hold in other disciplines and approaches. If such ethical shaping and informed policy development and regulation succeed in good measure, then the manifest benefits of AI/ML will be genuinely significant and transformative. All of this depends, however, on our taking to heart and implementing in praxis the clear lessons of the past 50 years or so. AIs designed to date are said to have about a 70% failure rate; thus human judgment must remain central in the implementation of any such system that impinges on human health, well-being and flourishing, rather than acquiescing to the pressures of profit and efficiency by seeking to offload such judgment to AI/ML systems.
“Even more problematic is how offloading human judgment and responsibility in these ways may de-skill us, i.e., worst-case, we simply forget how to make such judgments on our own. Stated more generally: The more we engage with them, the more we become like them. We have been sold – literally – on ‘the digital’ as the universal panacea for all of humankind’s ills, all too often at the cost of the analogue, the qualitative, the foundational experience of what it is and might mean to be a human being. This does not bode well for human/e futures for free moral agents capable of pursuing lives of flourishing in liberal-democratic societies, nor for the planet.
“Turning so much over to these digital technologies has robbed us of critical abilities to concentrate or exercise critical reflection of a sustained and systematic sort. We also appear to be reducing our central capacities or abilities of empathy, perseverance, patience, care and so on. Twenty years ago, the early warnings along these lines were dismissed as moral panics. Pun intended; we should have paid better attention.
“A very worst-case scenario is that ‘We are the Borg’: We ourselves have become the makers and consumers of technologies that risk eliminating that which is most central to living out free human lives of meaning and flourishing.
“Resistance may not be entirely futile but getting along without these technologies is simply not a likely or possible choice for most people. Could we somehow reshape and redesign our uses and implementations of these technologies? Perhaps there is some hope. It remains to be seen whether enough professional and business organizations undertake the sorts of changes needed; whether or not our legal and political systems will nudge/force them to do so; and most of all, whether or not enough of us, the consumers and users of these technologies, will successfully resist current patterns and forces and insist on much more human/e directions of development and implementation. Failure to do so will mean that whatever human skills and abilities affiliated with freedom, empathy, judgment, care and all else required for lives of meaning and flourishing may be increasingly offloaded – it is always easier to let the machines do the dirty work. And, very worst-case, fewer and fewer of us would notice or care, as all of that will be forgotten, lost (de-skilled) or simply never introduced and cultivated in the first place.
“Manifestly, I very much hope that the worst cases are never realized, and there may be some good grounds for hoping that they will not be. But I fear that slowing down and redirecting the primary current patterns of technology development and diffusion will be very difficult indeed.”
Bob Frankston, internet pioneer and technology innovator, wrote, “The idea that meaning is not intrinsic is a difficult one to grasp. Yet this idea has defined our world for the last half-century. Electronic spreadsheets knew nothing about finance yet allowed financiers and others to leverage their knowledge. Unlike the traditional telecommunications infrastructure, the Internet does not transport meaning – only meaningless packets of bits. Each of us can apply our own meaning if we accept intrinsic ambiguity. It poses a challenge to those who want to build human-centered infrastructure. The idea that putting such intent into the ‘plumbing’ actually limits our ability to find our own meaning is counterintuitive. Getting past that and learning how to manage the chaos is key. Part of this is having an educational system that teaches critical thinking and how to learn.
“We need to accept a degree of chaos and uncertainty and learn to survive it, if we have the time. I might be expecting too much, but I can hope that some of those growing up with the new technologies will see the powerful ideas that made them possible and eschew the hubris of thinking they can define the one true future. I worry about the hubris of those who think they can define the one true future and impose it on us. I see the danger in an appeal to authority or those who do not understand how AI works and thus trust it far too much. Just as we used to use steam engine analogies to understand cognition, we now use problematic computer analogies. …
“We’ve spent thousands of years developing a society implicitly defined by physical boundaries. Today we must learn how to live safely in a world without such boundaries. How do we manage the conflicts between rights in a connected world? How will we negotiate a world that we understand is interconnected physically (with climate as an example) and more abstractly as with the Internet?”
David Barnhizer, author of the forthcoming book “Mass Formation Psychosis in America’s Universities” and emeritus professor at Cleveland State University, noted, “The disintegration of community we are experiencing is being driven to significant degrees by the combination of the Internet, Artificial Intelligence and social media systems. AI-facilitated social media has intensified and accelerated the disintegration of our social forms and norms. Governmental and private sector surveillance and privacy breaches made possible through AI and the Internet have created a culture of intrusion, manipulation, misrepresentation, conflict and lying.
“The almost unbelievable fact is that the vast majority of America’s communications systems are dominated by only six corporations. These include Google, Meta, Amazon, Twitter, Apple and apparently PayPal, given that the platform has repeatedly censored or denied access to groups with which it does not agree. Collaboration between these systems poses a significant threat to America’s democratic republic and to the free speech we have long considered a dynamic and vital part of our social system.
“This has led to the development of enormous companies – Big Tech – that can know every aspect of our lives and are able to shape the way we think and act through overt and covert manipulation and ‘messages.’ The Big Tech companies have become so powerful that they represent a form of ‘quasi-government’ with which we have not yet learned how to cope. Along with these is a significant blurring of the boundaries between government and these ‘quasi-governmental’ entities. This includes the fact that governmental agencies such as the FBI, Homeland Security, the Department of Justice, and even the State Department have developed strategies and relationships by which they have been using the resources and informational and monitoring systems of Big Tech in ways that would be blocked by the U.S. Constitution if government tried to do the surveillance, monitoring, censoring, ‘messaging’ and minimizing of disfavored posts and reports themselves.
“As shown by the internal communications revealed in the recent release of the Twitter files, secretive relationships and collaborations between government and Big Tech have evolved to the point that a large-scale censorship and surreptitious monitoring system exists. As reflected in the significant financial exchanges involved – the FBI paid a single monitoring group within Twitter $3.5 million for its services – questionable activities have taken place on a regular basis. In such Big Tech-federal agency relationships, an agency such as the FBI or even (apparently) a little-known element of the State Department provided Twitter with lists and recommendations concerning people and disfavored posts that governmental actors felt needed to be diminished or censored. This represents a dangerous and significant increase and diversification of the power our federal government wields over us.
“This has clearly created new kinds of political and social silencing of disfavored voices under open-ended ‘eye of the beholder’ rubrics such as ‘disinformation,’ ‘misinformation,’ and ‘domestic terrorism’ allegations dangerously applied to parents protesting school board policies. The fear of being monitored, censored and condemned has also led to the atrophy of formal education (especially at the university level but also in K-12 systems), and created a culture of argumentation that is all about power rather than inquiry, discourse and the discovery of truth.
“As to where all this leads over the next 12 to 15 years, I am not hopeful that the consequences will be positive. Regardless of laws restricting monitoring, illegal surveillance and serious ongoing privacy abuses, our experience shows quite clearly that corporations, government agencies, information brokers, Big Tech, activist groups of all persuasions, hosts of scam artists and propagandists and individuals with agendas cannot resist poking, probing, spying, propagandizing, intimidating and terrorizing others through the AI-facilitated mechanism of social media and information acquisition. This doesn’t even begin to touch on the predatory and often criminal dynamics of the Dark Web. The truth is that the rapidly evolving AI, Internet and social media systems, as an integrated mechanism, will become more and more intrusive, manipulative, and morally and spiritually destructive. The implications for Western democratic republics are severe.
“The extreme social fragmentation the U.S. and Europe are experiencing is not reversible. Until now, people who harbored the worst, sickest or most contemptible thoughts, or who drew conclusions based on biases and ignorance, operated locally and spoke only to their most trusted associates. Until empowered and legitimated by the Internet, they were uncertain and apprehensive about revealing their true selves and understood they could not safely communicate their views in ‘polite society’ because they could not be certain the people they were speaking to face-to-face shared their prejudices. Looking ahead, all those inhibitions have vanished and show no signs of returning – thanks in large measure to corporate technical systems that enable those destructive human tendencies and profit from them.”
Matt Moore, a knowledge-management entrepreneur with Innotecture, which is based in Australia, observed, “Human beings will remain wonderful and terrible and banal. That won’t change. We’ll see greater use and abuse of artificial intelligence. ChatGPT will seem just like the iPhone seems to us today – so 2007. Many mundane tasks will be undertaken by machines – unless we choose to do them for our own pleasure (artisanal drudgery). We will be more productive as societies. There will be more content, more connection, more everything. We will have ecological and climate-related technologies in abundance. We will have digital-twin ecosystems that allow us to model and manage our complex world better than ever. We’ll probably have more bionic implants and digital medicine. A subset of society will reject all that (the Neo-Amish) in different ways, as it can be overwhelming. We will use these technologies to hurt, exploit and persecute each other. We will surveil, wage war and seek to maximise profit. Parts of our ecosystem will collapse, and our technologies will both accelerate and mitigate that. Fertility will probably drop as people don’t just opt out themselves but also opt out their potential children.”
Pamela Rutledge, director of the Media Psychology Research Center, wrote, “All change, good and bad, relies on human choices. Technology is a tool; it has no independent agenda. There are tremendous opportunities in digital technologies for humans to enhance their experiences and well-being. Digital technologies can increase access to health care and fight climate change. They can change education by automating repetitive tasks and running adaptive-learning experiences, allowing teachers to focus on teaching soft skills like creative thinking and problem-solving. In art, literature and music, generative AI and imagery tools like DALL-E can enable cost-effective exploration and prototyping, facilitating innovation.
“The ubiquity of technology highlights the need for better media literacy training. Media literacy must be integrated into the educational curriculum so that we teach each generation to ask critical questions and develop the skills necessary to understand the design of digital tools and the motivations behind them, including the agendas of content producers. Young people need to learn smart practices in regard to privacy and data management, how to manage their time online and how to take action in the face of bullies or inappropriate content. These are skills transferable on- and offline, digital and in person. A better-educated public will be better prepared to demand that Big Tech pull back the curtain on the structural issues of technology, including issues tied to black-box algorithms and artificial intelligence.
“Used well, these technologies offer tremendous opportunities to innovate, educate and connect in ways that make a significant positive difference in people’s lives. Digital technologies are not going away. A positive outcome depends on us leaning into the places where technology enhances the human experience and supports positive growth. As in strengths-based learning, we can apply the strengths of digital technologies to identify needs and solutions.
“There are challenges, however. The inherent tendency of humanity is to resist change as innovation cycles become more rapid, particularly when innovation is economically disruptive. The world will have to grapple with dealing with all of this in an atmosphere in which trust in institutions has been undermined and people have become hyper-sensitized to threat, making them more reactive to fear, heightening the tendency to homophily and othering.
“The devaluation of information puts us at social and political risk. Bad actors and lack of transparency can continue to increase distrust and drive wedges in society. Technology is persuasive. Structural decisions influence how people interact, what they access and how they feel about themselves and the world.”
Oksana Prykhodko, director of INGO European Media Platform, an international nongovernmental organization based in Ukraine, wrote of her experiences while her country is at war. “I live in Ukraine,” she wrote, “under full-scale, unprovoked aggression from Russia, and even now, after nearly 12 months of cyberattacks and the bombing of our citizens, ISPs, energy infrastructure and so on, I have an Internet connection.
“Before the war we had more than 6,500 different ISPs. Now nearly every large household, every office, every point of invincibility has its own Starlink satellite connection and a generator and shares its Wi-Fi with its neighbours. I am sure that the Ukrainian experience of ‘keeping Ukraine connected’ (with the help of many stakeholders from around the world) can help to ensure human-centered, government-decentralised Internet connection. I am hoping that by 2035 we will have several competitive decentralised private satellite providers and platforms for connectivity and to improve our social and political interactions in the future with all democratic countries.
“I am not optimistic about the future of human rights, but perhaps there will be better awareness-raising in support of them in the next decade and the establishment of litigation processes in support of rights that result in clear and practical outcomes. The Russians are doing their best to commit the genocide of the Ukrainian people. We in Ukraine are extremely worried about our personal data protection and cybersecurity, the forced deportation of children to the country-aggressor, fake referendums with fake lists of ‘voters’ and acts of torture committed on people found on e-registries. These crimes will demand future investigation and the trial of those who must take responsibility.
“We in Ukraine fully support the multistakeholder model of Internet governance. Because we have free speech, fierce discussions often break out among our stakeholders as we excitedly discuss the big issues tied to the future of the Internet. Russians have no such rights, no multistakeholder participation in decision-making, only the governing class. The fact that there is no multistakeholder participation in non-democratic countries undermines the full realization of the global multistakeholder model.
“During this war, Ukrainian schoolteachers have had to become e-teachers (very often against their own wishes and beyond their technical capabilities) because it became unsafe to stay in Ukrainian schools in areas targeted for Russian bombings. This is the worst way to further the development of e-learning.”