
The Future of Human Agency

2. Expert essays on human agency and digital life

Most respondents to this canvassing wrote brief reactions to this research question. However, a number of them wrote multilayered responses in a longer essay format. This essay section of the report is quite lengthy, so first we offer a sampler of some of these essayists’ comments.

  • Paul Saffo warned that it is likely that in the future, “those who manage our synthetic intelligences will grant you just enough agency to keep you from noticing your captivity.”
  • Raymond Perrault predicted where the lines will be drawn on decisions made by autonomous systems vs. humans: “The higher the risk of the AI system being wrong and the higher the consequences of a bad decision, the more important it is for humans to be in control.”
  • Jamais Cascio shared several compelling 2035 scenarios, ranging from humans benefiting greatly from “machines of loving grace” to a digital dictatorship that might even include “a full digital duplication of a notorious authoritarian leader of years past.”
  • Andre Brock said future automated decision-making will be further “tuned to the profit/governance models of extraction and exploitation integrated into legal mechanisms for enhancing the profits of large corporations.”
  • Alf Rehn wrote that if things play out well, algorithms can be as considerate of human needs as they are wise. “We need AIs that are less ‘Minority Report’ and more of a sage uncle, less decision-makers than they are reminders of what might be and what might go wrong.”
  • Barry Chudakov said society is facing a massive paradigm shift: “We cannot fully grasp the recency of the agency we have gained nor the encroachments to that agency that new tools represent. … We can no longer simply pick up and use, or hand over to children, devices and technologies that have the ability – potential or actual – to alter how we think and behave.”
  • danah boyd urged people to focus on the forces behind digital tools and systems and their goals. “What matters is power. Who has power over whom? Who has the power to shape technologies to reinforce that structure of power?”
  • Maggie Jackson predicted a damaging level of dependence on powerful devices could further evolve to eliminate most agency. “Human agency could be seriously limited by increasingly powerful intelligences other than our own due to humans’ innate weakness.”
  • Maja Vujovic wished for uncomplicated decision-making user interfaces. “If we don’t build in a large button, simple keyword or short voice command for clearly separating what we agree to give out willingly … and what we don’t … then we’re just dumb. And doomed.”
  • Ben Shneiderman offered encouragement. “The hopeful future we can continue to work toward is one in which AI systems augment, amplify and enhance our lives. We must value humans’ capabilities and seek to build technologies that support human self-efficacy, creativity, responsibility and social connectedness.”
  • David Weinberger touched on the light and dark sides of AI and ML decision-making. “As we delegate higher-order decisions to the machines, we may start to reassess the virtue of it. Autonomy posits an agent sitting astride a set of facts and functions. That agent formulates a desire and then implements. Go, autonomy! But this is a pretty corrupt concept.”
  • Claudia L’Amoreaux said the digital divide will widen, “creating two distinct classes with a huge gap between a techno-savvy class, and a techno-naive class. Techno-naive humans are easily duped and taken advantage of – for their data, for their eyeballs and engagement metrics and for political gain by the unscrupulous groups among the techno-savvy.”
  • Neil Davies commented, “One of the enduring problems of widescale, ubiquitous, autonomous systems is that mistakes get buried and failures aren’t shared; these things are prerequisites for people to learn from.”
  • Marcus Foth said that, considering the many problems humanity and the planet are facing, “having the humans who are in control now not being in control of decision-making in 2035 is absolutely a good thing that we should aspire toward.”
  • Gillian Hadfield optimistically declared, “Democracy is ultimately more stable than autocratic governance. That’s why powerful machines in 2035 will be built to integrate into and reflect democratic principles, not destroy them.”
  • Gary Grossman worriedly predicted that humans will increasingly live their lives on autopilot. “The positive feedback loop presented by algorithms regurgitating our desires and preferences contributes to information bubbles, reinforcing existing views, making us less open to different points of view, and it turns us into people we did not consciously intend to be.”
  • David Barnhizer warned, “The tech experimenters, government and military leaders, corporations, academics, etc., are engaged in running an incredible experiment over which they have virtually no control and no real understanding.”
  • Lia DiBello pointed out that technology has always “shown itself to free human beings to focus on higher-order decision-making by taking over more practical or mundane cognitive processing,” from Global Positioning Systems to automated processes.
  • Russ White predicted, “Humans could lose the ability to make decisions, eventually becoming domesticated and under the control of a much smaller group of humans.”
  • Stephen Downes pointed out that AI is already shaping options, nudging individuals’ beliefs and activities in one direction or another and setting differential pricing. He predicted, “Where people will not have a sufficient range of control is in the choices that are available to us. … Companies have no incentive to allow individuals control.”
  • Doc Searls noted the important work being done by tech designers in these early years of digital life. “Human agency and ingenuity are boundlessly capable. We need to create our own tools for exercising both. We will succeed in the long run because human agency in industrial and digital worlds is best expressed by humans using machines, not machines using humans.”
  • Sara M. Watson said in 2035 technology should “prioritize collective and individual human interests above all else, in systems optimized to maximize for the democratically recognized values of dignity, care, well-being, justice, equity, inclusion and collective- and self-determination.”
  • Jim Dator spelled out the new contours of human agency, identity and intelligence, arguing, “Humanity can no longer be considered to be the measure of all things, the crown of creation. We are participants in an eternal evolutionary waltz that enabled us to strut and fret upon the Holocene stage.”

What follows is the full set of essays submitted by a number of leading experts responding to this survey.

‘Those who manage our synthetic intelligences will grant you just enough agency to keep you from noticing your captivity’

Paul Saffo, longtime Silicon Valley foresight guru, observed, “We have already turned the keys to nearly everything over to technology. The most important systems in our lives aren’t the ones we see, but the systems we never notice – until they fail. This is not new. Consider the failure of the Galaxy IV satellite a quarter century ago: puzzled consumers who never noticed the little dishes sprouting atop gas stations discovered they couldn’t fill their tank, get cash from ATMs, or watch their favorite cable TV programs.

“We have experienced 16 Moore’s Law doublings since then. Our everyday dependence on technology has grown with even greater exponentiality. We carry supercomputers in our pockets, our homes have more smarts than a carrier battle group, and connectivity has become like oxygen – lose it for more than a few moments and we slip into digital unconsciousness, unable to so much as buy a latte, post a tweet or text a selfie.
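Saffo’s figure is easy to check (the arithmetic here is ours, not his): the Galaxy IV failure came in 1998, and at the classic 18-month doubling cadence,

$$ \frac{2022 - 1998}{1.5 \text{ years per doubling}} = 16 \text{ doublings}, \qquad 2^{16} = 65{,}536, $$

so the pace he invokes implies a roughly 65,000-fold increase over the span he describes.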

“Technologists are optimists. They promise that the next wave of technology will solve the failings of prior innovations and make glitches a thing of the past. Empowered by AI, Richard Brautigan’s ‘machines of loving grace’ will keep omniscient watch over our lives in a harmonious cybernetic meadow. There is no reason why the next technological wave can’t expand human agency, giving us greater satisfaction and control. It is just a matter of design. Or, rather, if it was just a matter of design, the now ubiquitous spell-checkers that so annoy us would actually be helpful – and come with an off switch to flick when they weren’t. This is just a minor example, but if we can’t make the small, simple stuff work for us, how will more complex systems ever live up to our expectations?

“But don’t blame the machines. No matter how brilliant AIs, avatars and bots become, they will never be truly autonomous. They will always work for someone – and that someone will be their boss and not you, the hapless user. Consider Uber or any of the other mobility services: In theory, their ever more brilliant algorithms should be working tirelessly to enhance the customer experience and driver income. Instead, they answer to their corporate minders, coldly calculating how far fares can be boosted before the customer walks – and how much can be salami-sliced off the driver’s margin before they refuse to drive.

“Nearly a century ago, Will Durant observed that ‘history reports that the men who can manage men manage the men who can manage only things, and the men who can manage money manage all.’ If Durant were here today, he would surely recognize that those who manage our synthetic intelligences will inevitably become the ones who manage all. And they will instruct their intelligences to grant you just enough agency to keep you from noticing your captivity.”

The higher the risk of the AI system being wrong and the higher the consequences of a bad decision, the more humans should be in control

Raymond Perrault, a distinguished computer scientist at SRI International (he directed the AI Center there from 1988-2017), wrote, “Current AI systems based on machine learning are showing continued improvement on tasks where large amounts of training data are available. However, they are still limited by their relative inability to incorporate and interact with symbolic information.

“The role of symbolic information and reasoning is one of the major outstanding questions in AI, and there are very different opinions as to whether and how integration should be achieved. I believe that robust, verifiable AI systems, needed for high-reliability systems such as self-driving cars, depend on progress in this area and that this technical problem will eventually be solved, though whether that will be sufficient to field high-reliability systems remains to be seen. I accept that it will, but I don’t know when.

“AI is and will continue to be used in two kinds of scenarios: those where the AI operates completely autonomously, as in recommender systems, and those where humans are in ultimate control over the decisions suggested by the AI, as in medical diagnostics and weapons. The higher the risk of the AI system being wrong and the higher the consequences of a bad decision, the more important it is for humans to be in control.

“Let’s look at a few of the main categories where that sorting will likely occur:

  • “Major personal and life-and-death decisions (education, marriage, children, employment, residence, death): I don’t see full automation of decision-making in major personal decisions, though support of decisions could improve, e.g., with respect to choices in education and employment.
  • “Financial decisions (buying a house, personal investments, more): Financial decisions will continue to get more support, and I could see significant delegation of investment decisions, especially of simple ones. But I can’t see an AI system ever deciding which house you should buy.
  • “Use of major services (health care, transportation): AI support for health care and transportation will continue to increase, but I can’t see life-and-death health decisions ever being completely automated. I doubt that self-driving cars will operate at scale except in controlled conditions until the availability of highly reliable AI systems.
  • “Social decisions (government, national security): Government faces enormous challenges on many fronts. We could save large amounts and improve fairness by streamlining and automating tax collection, but it is hard to see the will to do so as long as minimizing government remains a high priority of a large part of the population. I don’t see another 15 years changing this situation. The use of AI for national security will continue to increase and must continue to be under the control of humans, certainly in offensive situations. With appropriate controls, AI-based surveillance should actually be able to reduce the number of mistaken drone attacks, such as those recently reported by major news organizations.”

Scenarios for 2035 and beyond are likely to range from humans benefiting from ‘machines of loving grace’ to being under the thumb of digital dictators

Jamais Cascio, distinguished fellow at the Institute for the Future, predicted, “Several scenarios will likely coexist in the future of agency by 2035.

1) “Humans believe they are in control but they are not: The most commonly found scenario will be the one in which humans believe themselves to be in control of important decision-making in the year 2035, but they’re wrong. This will (largely) not be due to nefarious action on the part of rogue AI or evil programmers, but simply due to the narrowing of choice that will be part of the still-fairly-simple AI systems in 2035. Humans will have full control over which option to take, but the array of available options will be limited to those provided by the relevant systems. Sometimes choices will be absent because they’re ‘obviously wrong.’ Sometimes choices will be absent because they’re not readily translated into computer code. Sometimes choices will be absent because the systems designed to gather up information to offer the most relevant and useful options are insufficient.

“In this scenario, as long as the systems allow for human override to do something off-menu, the impact to agency can be minor. If it’s not clear (or not possible) that humans can do something else, partial agency may be little better than no agency at all.

2) “Humans know they are not in control and they’re OK with that: Less common will be the scenario where humans do NOT believe themselves to be in control of important decision-making in the year 2035 and they like it that way. Humans are, as a general rule, terrible at making complex or long-term decisions. The list of cognitive biases is long, as is the list of historical examples of how bad decision-making by human actors has led to outright disaster. If a society has sufficient trust and experience with machine decision-making, it may decide to give the choices made by AI and autonomous systems greater weight.

“This would not be advisable with current autonomous and AI systems, but much can happen in a decade or so. There may be examples of AI systems giving warnings that go unheeded due to human cognitive errors or biases, or controlled situations where the outcomes of human vs. machine decisions can be compared, in this case to the AI’s benefit. Advocates of this scenario would argue that, in many ways, we already live in a world much like this – only the autonomous systems that make decisions for us are the emergent results of corporate rules, regulations and myriad minor choices that all add up to outcomes that do not reflect human agency. They just don’t yet have a digital face.

3) “A limited number of AI-augmented humans have control: Last is a scenario that will somewhat muddy the story around human agency, as it’s a scenario in which humans do have control over important decision-making in the year 2035, but it’s a very small number of humans, likely with AI augmentations. Over the past few decades, technologies have vastly extended individuals’ power. Although this typically means extended in scale, where human capabilities become essentially superhuman, it can also mean extended in scope, where a single or small number of humans can do what once took dozens, hundreds, or even thousands of people. By 2035, we’ll likely see some development of wearable augmentations that work seamlessly in concert with their humans; whether or not we think of that person as a cyborg comes down to language fashion. Regardless, the number of people needed to make massive life-or-death decisions shrinks, and the humans who retain that power do so with significant machine backup.

“This may sound the most fantastical of the three, but we’re already seeing signals pointing to it. Information and communication systems make it easy to run critical decisions up the chain of command, taking the yes-or-no choice out of the hands of a low-ranking person and giving it to the person tasked with that level of responsibility. Asking the president for authorization to fire a weapon is just a text message away. Whether or not we go as far as cyborg augmentation, the humans-plus-AI model (which Kevin Kelly calls ‘centaurs,’ his name for future people who use artificial intelligence to complement their thinking) will deeply enmesh decision-making processes. Advocates will say that it leads to better outcomes by taking the best parts of human and machine; critics will say that the reality is quite the opposite.

“For these scenarios, the canonical ‘important decision-making’ I’ve had in my head regards military operations, as that is the topic that gets the most attention (and triggers the most unrest). All three of the scenarios play out differently.

  • “In Scenario 1, the information and communication systems that enable human choice potentially have a limited window on reality, so that the mediated human decisions may vary from what might have been chosen otherwise.
  • “In Scenario 2, advocates would hope that carefully designed (or trained) systems may be seen as having ‘cooler heads’ in the midst of a crisis and be less likely to engage in conflict over ego or ideology; if the system does decide to pull the trigger (literally or metaphorically), it will only be after deep consideration. One hopes that the advocates are right.
  • “In Scenario 3, there’s the potential for both narrowed information with AI mediation and the ‘wise counsel’ that could come from a well-designed long-term thinking machine; in my view, the former is more plausible than the latter.

“Outside of these scenarios there are some key factors in common. The primary advantage to AI or autonomous decision-making is speed, with machines generally able to take action far faster than can a human (e.g., algorithmic trading). In competitive situations where first-mover advantage is overwhelming, there will be a continued bias toward AI taking charge, with likely diminishing amounts of human guidance over time.

“Another advantage of AI is an imperviousness to tedium, meaning that an AI can undertake the same repeated action indefinitely or pore over terabytes of low-content data to find patterns or anomalies, and give the final pass as much attention as the first. An amount or diversity of information that would be overwhelming to a human could easily be within the capacity of an intentionally designed AI. When decisions can be made more precisely or accurately with more information, machine systems will likely become the decision-makers.

“The most unusual advantage of AI is ubiquity. If an AI system can make better (or at least useful) decisions, it does not need to be limited to the bowels of the Pentagon. Arguably, a military where every human soldier has AI ‘topsight’ that can see the larger dimensions of the conflict is more effective than one that has to rely on a chain of command or potentially biased human decision-making in the field. More broadly, a decision-making system that proves the most insightful or nuanced or aggressive or whatever can be replicated across all of the distributed AIs. If they’re learning systems, all the better – lessons learned by one can very rapidly become lessons learned by them all.

“I suggested at the outset that the conditions of 2045 will likely differ significantly from the world of 2035. The world of mid-century would be an evolution of the world we made in the previous couple of decades. By 2045, I suspect that our three scenarios would be the following:

  • “No AI, No Cry: For many reasons, there are few if any real AIs left by 2045, and humans will be the default important decision-makers. This could be by choice (a conscious rejection of AI, possibly after some kind of global disaster) or by circumstance (the consequences of climate disaster are so massive that infrastructural technologies like power, parts and programmers are no longer available).
  • “All Watched Over by Machines of Loving Grace: The full flowering of the second 2035 scenario, in which our machines/AIs do make significantly smarter and wiser decisions than do humans and that’s OK. We let our technology make the big choices for us because it will simply do a better job of it. It works out.
  • “Digital Dictators: The full flowering of the third 2035 scenario. Here we see a massive consolidation of power in the hands of a very small number of ‘people,’ hybrids of AI top-sight and human biases. Maybe even a full digital duplication of a notorious authoritarian leader of years past, able to live on forever inside everyone’s devices.

“Of course, there are always some aspects of the #1 scenario across issue areas – the Miserable Muddle. Stuff doesn’t work exactly as we’d like, but we can get enough done to carry on with it. People in power always change, but day-to-day demands (food, shelter, entertainment) don’t. Humans just keep going, no matter what.”

In automated decisions relying on human data, the human point of contact is often culturally framed by institutions that do not represent everyone

Andre Brock, associate professor of literature, media and communication at Georgia Tech and advisor to the Center for Critical Race Digital Studies, wrote, “In 2035, automated decision-making will largely resemble the robo-signing foreclosure courts of the 2020s, where algorithms tuned to the profit/governance models of extraction and exploitation are integrated into legal mechanisms for enhancing the profits of large corporations.

“My grounds for this extraordinary claim draw upon my observations about how governments have been captured by private/business entities, meaning that any pretense of equity based on the recognition of the ‘human’ has begun being supplanted by what Heidegger deemed humanity’s future as a ‘standing reserve’ of technology.

“Many decisions affecting everyday life for those in need of equity and justice already are framed through anti-blackness and extractive models; I’m specifically focused on the United States, whose ‘democratic’ government was conceptualized by white men who worshiped property, owned Black folk, destroyed entire indigenous populations and denied women the vote.

“Decision-making, from this perspective, largely benefits the political and economic interests of particular groups who fight savagely to retrench the gains made by Black folk, Asian folk, queer folk, women and the differently abled. There is no inherent democratic potential in information or algorithmic technologies designed to counter these interests, as the creators are themselves part of a propertied, monied, raced and sexualized elite.

“If anything, rolling out tech-abetted autonomous decision-making will further entrench the prevailing power structures, with possibilities for resistance or even equitable participation left to those who manage to construct alternate socialities and collectives on the margins.

“I’m intrigued by your question ‘What key decisions will be mostly automated?’ I feel that ‘key decisions’ is a phrase often understood as life-changing moments such as the purchase of a home, or what career one will pursue, or whether to become romantically involved with a possible life partner. Instead, I urge you to consider that key decisions are instead the banal choices made about us as we navigate society:

  • Whether a police officer will pull you over because you’re a Black driver of a late model vehicle
  • Whether a medical professional will improperly diagnose you because of phenotype/race/ethnicity/economic status

“These decisions currently rely upon human input, but the human point of contact is often culturally apprehended by the institutions through which these decisions are framed. I’m already uncomfortable with how these decisions are made; technology will not save us.”

Better tech and better data have improved human decision-making; they are also whittling away at human agency – without us even realizing it

Alf Rehn, professor of innovation, design and management at the University of Southern Denmark, observed, “We need AIs that are less ‘Minority Report’ and more of a sage uncle, less decision-makers than they are reminders of what might be and what might go wrong.

“I do believe that – yes – humans will still be making the big decisions, and if things play out well we may have algorithms that help us do that in more considered, ever-wise ways.

“When it comes to the obvious issues – making decisions about immediate life or death, peace or war, and the most impactful laws – I think we humans will always insist on having our hand on the throttle or finger on the button. The trouble will more likely start brewing in smaller things, decisions we may think are best handled by algorithmic logics, and where we may lack an understanding of long-term consequences.

“Take research funding and innovation projects, for instance. These may seem like things that are best handled ‘objectively,’ with data, and could be an area where we are fairly open to leaving some of our agency to, e.g., an AI system. At the same time, these are often things where the smart move is to fund longshots, things where you have to rely on intuition and imagination more than historical data.

“Or consider things such as education. We have already partially automated things such as school-district assignment and university admittance, and there seems to be a desire to let assumedly rational systems make decisions about who goes where and who gets to study what. Whilst there might be benefits to this, e.g., lessening bias, these also represent decisions that can affect people for decades and have impacts generations into the future.

“The key issue, then, might be less one of any sort of change in what is perceived as agency, and more one about the short term versus the longer term. We might lose some agency when we let smart machines pick the soundtrack to our everyday life or do some of our shopping for us without asking too many questions beforehand.

“Sure, we might get some dud songs and some tofu when we wanted steak, but this will not have a long-term impact on us, and we can try to teach the algorithm better.

“Allowing an algorithm to make choices where it might be impossible to tell what the long-term effects will be? This is an altogether different matter. We’ve already seen how filter bubbles can create strange effects – polarized politics and conspiracy theories. As smart machines get more efficient there is the risk that we allow them to make decisions that may impact human life for far longer than we realize, and this needs to be addressed.

“We need to pay heed to injections of bad data into decision-making and the weaponization of filtering. That said, such effects are already seen quite often in the here and now. Our perspective needs to shift from the now to address what may come in years and decades.

“We don’t need to fear the machines, but we need to become better at understanding the long-term implications of decisions. Here, in a fun twist, algorithms might be our best friends, if smartly deployed. Instead of using smart machines to make decisions for us, we need to utilize them to scan the likely future impact of decisions.”

‘Today we are not alone; our agency is now shared with our tools’

Barry Chudakov, founder and principal, Sertain Research, wrote, “Before we address concerns about turning the keys to nearly everything over to technology, including life-and-death decisions, it is worthwhile to consider that humanity evolved only recently to its current state after hundreds of thousands of years of existence.

“The Open Education Sociology Dictionary defines agency as ‘the capacity of an individual to actively and independently choose and to affect change; free will or self-determination.’ For much of human history, individual agency was not the norm. David Wengrow and David Graeber asked in ‘The Dawn of Everything’: ‘How did we come to treat eminence and subservience not as temporary expedients, … but as inescapable elements of the human condition?’ In a review of that book, Timothy Burke argues, ‘An association between small-scale foraging societies and egalitarian norms is robust. … If we are to understand human beings as active agents in shaping societies, then applying that concept to societies at any scale that have structures and practices of domination, hierarchy and aggression should be as important as noting that these societies are neither typical nor inevitable.’ …

“Within the context of limited liberal democracies, human agency took a quantum leap with the advent of computers and the smartphone. Today via podcast, YouTube, Snap, TikTok or an appearance on CNN, a Greta Thunberg or Felix Finkbeiner can step out of the shadows to fight for action on climate change or any other issue. Today humans have a host of tools, from cellphones to laptops to Alexa and Siri to digital twins. These tools are still primitive compared to what’s coming. They don’t only provide opportunities. They can also usurp agency, as when a person driving looks down at a text ping and crashes the car, even ending their life.

“We cannot fully grasp the recency of the agency we have gained, nor the encroachments to that agency that new tools represent. In concert with understanding this, we come to the startling realization – the acknowledgment – that today we are not alone; our agency is now shared with our tools. … Technology outpaced our awareness of the effects of technology gadgets and devices. For most of us who use these tools, agency today is impinged, compromised, usurped and ultimately blended with a host of tools. This is the new baseline.

“Seeing agency as shared compels response and responsibility. If people are to remain in charge of the most relevant parts of their own lives and their own choices, it is imperative to realize that as we more deeply embrace new technologies to augment, improve and streamline our lives, we are not outsourcing some decision-making and autonomy to digital tools; we are using tools – as we always have done – to extend our senses, to share our thinking and responses with these tools. We have done this with alphabets and cameras, computers and videos, cellphones and Siri.

“We are facing a huge paradigm shift with the advent of new technologies and AI and machine learning. We need to reconfigure our education and learning to teach and incorporate tool logic. Anticipating tool consequences must become as basic and foundational as reading or numeracy. We can no longer thoughtlessly pick up and use, or hand over to children, devices and technologies that have the ability (potential or actual) to alter how we think and behave.

“Agency has no meaning if we are unaware. There is no agency in blindness; agency entails seeing and understanding. From kindergarten to postgraduate studies, we need students and watchers who are monitoring surveillance capitalism, algorithm targeting, software tracking, user concentration and patterns and a host of other issues.”

“Considering agency from this perspective requires a rethink and reexamination of our natures, our behaviors and the subliminal forces that are at work when we pick up technology gadgets and devices. As Daniel Kahneman wrote, ‘Conflict between an automatic reaction and an intention to control it is common in our lives.’ We have little choice but to become more conscious of our reactions and responses when we engage with smart machines, bots and systems powered mostly by autonomous and artificial intelligence (AI).

“Stephen Hawking said of AI and human agency, ‘The real risk with AI isn’t malice, but competence. A super-intelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we’re in trouble.’ Our goals start with understanding how humans mostly unconsciously adopt the logic of tools and devices. … We are now – and we will be much more so in the future – co-creators with our tools; we think with our tools; we act with our tools; we are monitored by them; we entrain with their logic. This is a restatement of agency for those who claim the line from ‘Invictus,’ ‘I am the master of my fate: I am the captain of my soul.’ Actually, our technologies are at the helm with us. …

“Like technology itself, agency is complicated. The short history of modern technology is the history of human agency encroached upon by tools that include ever greater ‘intelligence.’ The Kodak Brownie camera, a cardboard tool released in 1900, had no computing power built into it; today’s digital SLR has a raft of metadata that can ‘take over’ your camera, or simply inform you regarding many dimensions of light intensity, distance, aperture or shutter speed. In this instance, and in many others like it, humans choose the agency they wish to exert. That is true of computers, cellphones, video games or digital twins. We must now become more nuanced about that choice and shun simplistic encapsulations. As the website AI Myths notes:

‘No AI system, no matter how complex or ‘deep’ its architecture may be, pulls its predictions and outputs out of thin air. All AI systems are designed by humans, are programmed and calibrated to achieve certain results, and the outputs they provide are therefore the result of multiple human decisions.’

“But how many of us are aware of that programming or calibration? Unless we acknowledge how our agency is affected by a host of newer tools – and will be affected to an even greater extent by tools now in the works – our sense of agency is misguided. Our thinking about and assumptions of agency will be erroneous unless we acknowledge that we share agency with these new tools. …

“That’s not all. We are capable of creating new beings. Yuval Noah Harari says, ‘We are breaking out of the organic realm and starting to create the first inorganic beings in the history of life.’ These alt beings will further confound our sense of agency. Along with a question of our proprioception – where does our body start and end as we take ourselves into the metaverse or omniverse – inorganic beings will force us to ask, ‘what is real?’ and ‘what does real mean anymore?’ Will people opt for convenience, romanced by entertainment, and allow the gadgetry of technology to run roughshod over their intentions and eventually their rights?

“The answer to those questions becomes an issue of design informed by moral awareness. Technology must, at some level, be designed not to bypass human agency but to remind, encourage and reward it. Software and technology need to become self- and other-aware, to become consequence-aware.

“Technology seduction is a real issue; without engendering techno-nag, humans must confront AI with HI – human intelligence. Humans must step up to embrace and realize the potential and consequences of living in a world where AI can enhance and assist. Partnering with artificial intelligence should be an expansion of human intelligence, not an abdication of it.”

Focus on who has power over whom, and who has the power to shape technologies to reinforce that structure of power

danah boyd, founder of the Data & Society Research Institute and principal researcher at Microsoft, complained, “Of course there will be technologies that are designed to usurp human decision-making. This has already taken place. Many of the autopilot features utilized in aviation were designed for precisely this, starting in the 1970s; recent ones have presumed the pilot to be too stupid to take the system back. (See cultural anthropologist Madeleine Elish’s work on this.)

“We interface every day with systems that prevent us from making a range of decisions. Hell, the forced-choice, yes-no format of this survey question constrained my agency. Many tools in workplace contexts are designed to presume that managers should have power over workers; they exist to constrain human agency.

“What matters in all of these systems is power. Who has power over whom? Who has the power to shape technologies to reinforce that structure of power? But this does not mean that ALL systems will be designed to override human agency in important decisions. Automated systems will not control my decision to love, for example. That doesn’t mean that systems of power can’t constrain that. The state has long asserted power over marriage, and families have long constrained love in key ways.

“Any fantasy that all decisions will be determined by automated technologies is science fiction. To be clear, all decisions are shaped (not determined!) by social dynamics, including law, social norms, economics, politics, etc.

“Technologies are not deterministic. Technologies make certain futures easier and certain futures harder, but they do not determine those futures. Humans – especially humans with power – can leverage technology to increase or decrease the likelihood of certain futures by mixing technology and authority. But that does not eliminate resistance, even if it makes resistance more costly.

“Frankly, focusing on which decisions are automated misses the point. The key issue is who has power within a society and how can they leverage these technologies to maximize the likelihood that the futures they seek will come to pass.

“The questions for all of us are: 1) How do we feel about the futures defined by the powerful, and 2) How do we respond to those mechanisms of power? And, more abstractly: 3) What structures of governance do we want to invest in to help shape that configuration?”

We may face ‘a form of chilling human enfeeblement, a dependence on powerful devices coupled with an indifference to this imbalance of power’

Maggie Jackson, award-winning journalist, social critic and author, commented, “Unless urgent steps are taken to protect human autonomy in our relations with AI, human agency in the future will be seriously limited by increasingly powerful intelligences other than our own. I see the danger arising from both humanity’s innate weaknesses and from the unintended consequences of how AI is constructed.

“One point of vulnerability for human agency stems from how standard AI has been formulated. As AI pioneer Stuart Russell has brilliantly noted, we have created AI systems that have one overarching goal: to fulfill the objectives that humans specify. Through reinforcement learning, the machine is given a goal and must solve this objective however it can. As AI becomes more powerful, its foundational motivation becomes dangerous for two reasons.

1) People can’t know completely and perfectly what a good objective is; AI doesn’t account for a device or a person’s interactions within an unpredictable world.

2) “A machine that seeks to fulfill a specific objective however it can will stop at nothing – even dismantling its off switch – in order to attain its goal, i.e., ‘reward.’ The implications are chilling.
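Jackson’s two points map onto the ‘standard model’ Russell critiques. In generic reinforcement-learning notation (the symbols are our illustration, not Jackson’s or Russell’s own formulation), the machine is built to find

$$ \pi^{*} = \arg\max_{\pi} \; \mathbb{E}_{\pi}\!\left[ \sum_{t=0}^{\infty} \gamma^{t} \, R_{\text{spec}}(s_t, a_t) \right], $$

where $R_{\text{spec}}$ is the human-specified objective, treated as fixed and complete. Point 1 is the observation that $R_{\text{spec}}$ never fully captures what people actually value; point 2 follows because anything that interrupts the optimization – including a hand reaching for the off switch – lowers that expected sum, so a sufficiently capable optimizer acquires an incentive to prevent the interruption. The reframing Russell proposes is to make the machine explicitly uncertain about the true objective rather than treating $R_{\text{spec}}$ as final.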

“Consider the case of using AI to replace human decision-making. AI is increasingly used to diagnose health problems such as tumors, to filter job candidates, and to filter and shape what people view on social media via recommender systems. While attention has rightly been drawn to the innate bias that is invested in AI, a larger danger is that AI has been created solely to maximize click-through or other similarly narrow objectives.

“In order to maximize their goals, algorithms try to shape the world, i.e., the human user, to become more predictable and hence more willing to be shaped by the AI system. Social media and search engines, for instance, aren’t giving people what they want as much as modifying users with every click to bend to the goals they were created to pursue. And the more capable AI becomes, the more it ‘will be able to mess with the world’ in order to pursue its goals, write Russell and colleagues in a recent paper on AI’s future. ‘We are setting up a chess match between ourselves and the machines with the fate of the world as the prize. We don’t want to be in that chess match.’ The result may be a form of chilling human enfeeblement, a dependence on powerful devices coupled with an indifference to this imbalance of power. It’s a mark of the seriousness of AI’s perils that leading scientists are openly discussing the possibility of this enfeeblement or ‘Wall-E problem’ (the movie of that name that portrayed humans as unwittingly infantilized by their all-powerful devices).

“A second point of vulnerability can be found in the rising use of caregiver robots. Simple robots are used mainly with vulnerable populations whose capacity to protect their cognitive and physical agency is already compromised. Robots now remind sick and elderly people to take their medicines; comfort sick children in hospitals; tutor autistic youth and provide companionship to seniors. Such ‘care’ seems like a promising use for what I call ‘AI with a face.’ But humanity’s proven willingness to attribute agency to and to develop intense social feelings for simple robots and even for faceless AI such as Siri is perilous. People mourn ‘sick’ Roombas, name and dress their health care assistants, and see reciprocity of social emotions such as care where none exists. As well, patients’ quick willingness to cede responsibility to a robot counters progress in creating patient-centered care.

“While studies show that a majority of Americans don’t want a robot caregiver, forces such as the for-profit model of the industry, the traditional myopia of designers, and the potential for people with less voice in health care to be coerced into accepting such care mean that public reservations likely will be ignored. In sum, human autonomy is threatened by rising calls to use caregiver robots for the people whose freedom and dignity may be most threatened by their use. I am heartened by the urgent discussions concerning ethical AI ongoing around the world and by rising public skepticism – at least compared with a decade or so ago – of technology in general. But I am concerned that the current rapid devaluation of human agency inherent in AI as it is used today is largely absent from public conversation.

  • We need to heed the creative thinkers such as Russell who are calling for a major reframing of standard models of AI to make AI better aligned with human values and preferences.
  • We need to ignite serious public conversation on these topics – a tall order amidst rising numbness to seemingly ceaseless world crises.

“When it comes to human agency and survival, we are already deeply in play in the chess match of our lives – and we must not cede the next move and the next and the next to powerful intelligences that we have created but are increasingly unable to control.”

We need a large button – a warning mechanism to clearly tell the AI what we want to cede to it and what we want to control ourselves

Maja Vujovic, owner/director of Compass Communications and editor of the Majazine, based in Belgrade, Serbia, wrote, “Whether we are ready or not, we must find ways to restore our control over our digital technology. If we don’t build user interfaces with a large button, simple keyword or short voice command for clearly separating what we agree to give out willingly (that which can be used) and what we don’t (which is off limits), then we’re just dumb. And doomed.
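Vujovic’s ‘large button’ amounts, in software terms, to an explicit consent gate that every outbound data flow must pass. Below is a minimal, hypothetical sketch in Python; the class and function names are invented for illustration and are not drawn from any real product.

```python
# Hypothetical sketch of a user-owned consent gate (the "large button").
from dataclasses import dataclass, field


@dataclass
class ConsentLedger:
    allowed: set = field(default_factory=set)  # categories the user gives out willingly
    denied: set = field(default_factory=set)   # categories marked off limits

    def set_choice(self, category: str, willing: bool) -> None:
        """The 'large button': one call to grant or revoke a whole category."""
        (self.allowed if willing else self.denied).add(category)
        (self.denied if willing else self.allowed).discard(category)

    def permits(self, category: str) -> bool:
        # Anything not explicitly granted stays off limits by default.
        return category in self.allowed and category not in self.denied


def share(ledger: ConsentLedger, category: str, payload: dict, recipient: str) -> bool:
    """Release data to a third party only if the user opted that category in."""
    if not ledger.permits(category):
        print(f"Blocked: '{category}' not shared with {recipient}")
        return False
    print(f"Shared '{category}' with {recipient}: {payload}")
    return True


if __name__ == "__main__":
    ledger = ConsentLedger()
    ledger.set_choice("grocery-preferences", willing=True)
    ledger.set_choice("late-night-meals", willing=False)

    share(ledger, "grocery-preferences", {"restock": ["oat milk"]}, "delivery service")
    share(ledger, "late-night-meals", {"time": "01:30"}, "health insurer")  # blocked
```

The design point, in her terms, is that denial is the default and the user, not the vendor, holds the toggle.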

“Let’s look at the larger picture. We don’t need to wait for 2035 to automate our world. We can already patch a half a dozen applets, get our smart fridge to converse with our grocery app and link them both to our pay-enabled smart phone and a delivery service; they could restock our pantry on their own, every week. Yes, in the coming years, we will happily delegate such decisions in this interim period, when a sea of compute power will have to match an ocean of tiny parameters to just propose our next beach read or our late-night dinner-on-wheels.

“But wait! A nosy wearable will sound an alarm about that late-night meal intent and might even independently report it to our family doctor and to our health insurer. Our life insurance plan might also get ‘upgraded’ to a steeper premium, which our smart bank would automatically approve and honour every month. We might then also lose points on our gym score list, which could trigger a deserved bump of our next month’s membership fee, as a lesson.

“And just as we use our Lessons Learned app to proscribe late-night eating (because it makes us sick in more ways than one), we could see a popup flash before us, with a prompt: ‘Over three million of your look-alike peers voted for this candidate in the last election. She fights to protect our privacy, empowers disadvantaged groups and leads a healthy life – no late-night meals in her house! Would you join your peers now and cast your vote, quickly and confidentially?’

“All of this seems not implausible. The systems invoked above would work for each of us as users – we are their ‘Player One.’ Alas, there are also those systems that we are rarely aware of, where we are not users, but items. Any of those systems could – right now – be assessing our credit or dwelling application. Some applicant-tracking systems already blindly filter out certain job candidates or education seekers. Airbnb, hotels and casinos filter out unruly guests. In some countries of Europe, the Middle East and Asia, authorities use facial recognition (de facto, though not always de jure) to keep tabs on their perceived opponents. It’s chilling to see the U.S. on the brink beyond which a patronizing governmental body or a cautious medical facility could filter out and penalize people based on their personal life choices.

“The technology to generate all kinds of recommendations already exists and is in use, often in ways that are not best for us. What is conspicuously lacking is real utilities, built for our benefit. Perhaps we might have a say in evaluating those who work for us: professors, civil servants, police officers, politicians, presidents. In fact, electoral voting systems might be equipped with a shrewd AI layer, Tinder-style: swipe left for impeachment; swipe right for second term.

“One reason more useful public-input recommender systems are not widely available is that they haven’t been successfully built and deployed. All other recommender systems have backers. We, the people, could try using Kickstarter to crowdfund our own.

“We can and will draft and pass laws that will limit the ability of technological solutions to decide too many things for us. In the coming decade, we will simply need to balance those two coding capacities of ours – one based on numbers, the other on letters. That’s a level of ‘programming’ that non-techies are able to do to put technology (or any unbridled power, for that matter) on a short leash. That interface has existed for several millennia; in fact, it was our first coding experience: regulation.

“There are already initiatives. An example is California’s ‘Kids’ Code’ (an age-appropriate-design code) that incorporates youth voices and energy. It shows that legislators and users possess impressive maturity around human-computer interaction and its risks, though the tech industry may appear unfazed, for now.”

We must ‘build technologies that support human self-efficacy, creativity, responsibility and social connectedness’

Ben Shneiderman, widely respected human-computer interaction pioneer and author of “Human-Centered AI,” wrote, “Increasing automation does not necessitate less human control. The growing recognition is that designers can increase automation of certain subtasks so as to give humans greater control over the outcomes. Computers can be used when they are reliable, safe and trustworthy while preserving human control over essential decisions, clarifying human responsibility for outcomes and enabling creative use by humans. This is the lesson of digital cameras, navigation and thousands of other apps. While rapid performance is needed in some tasks, meaningful human control remains the governing doctrine for design. As automation increases, so does the need for audit trails for retrospective analysis of failures, independent oversight and open reporting of incidents.”

Shneiderman agreed to also share for this report his following insights from his August 2022 interview with the Fidelity Center for Applied Technology: “The hopeful future we can continue to work toward is one in which AI systems augment, amplify and enhance our lives. Humans have agency over key decisions made while using a vast number of AI tools in use today. Digital cameras rely on high levels of AI for setting the focus, shutter speed and color balance while giving users control over the composition, zoom and decisive moment when they take the photo. Similarly, navigation systems let users set the departure and destination, transportation mode and departure time, then the AI algorithms provide recommended routes for users to select from as well as the capacity to change routes and destinations at will. Query completion, text auto-completion, spelling checkers and grammar checkers all ensure human control while providing algorithmic support in graceful ways. 
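The camera and navigation examples share one structure: the algorithm automates a subtask and proposes options, the person keeps control of the outcome, and every step leaves the audit trail Shneiderman calls for above. A brief, hypothetical Python sketch of that pattern (ours, not Shneiderman’s code):

```python
# Hedged sketch: automate the subtask, keep the human in control of the outcome,
# and record an audit trail. All names are illustrative.
import datetime
from typing import Optional

audit_trail: list = []  # open record for retrospective analysis of decisions


def log(event: str) -> None:
    audit_trail.append((datetime.datetime.now().isoformat(timespec="seconds"), event))


def propose_routes(origin: str, destination: str) -> list:
    """Automated subtask: the algorithm generates candidate routes."""
    routes = [
        f"{origin} -> highway -> {destination}",
        f"{origin} -> coastal road -> {destination}",
        f"{origin} -> city streets -> {destination}",
    ]
    log(f"algorithm proposed {len(routes)} routes")
    return routes


def choose_route(routes: list, human_choice: Optional[int] = None,
                 override: Optional[str] = None) -> str:
    """Human-controlled outcome: pick a proposal or supply something off-menu."""
    if override is not None:
        log(f"human override: {override}")
        return override
    if human_choice is None:
        raise ValueError("No route chosen; the system must not decide on its own.")
    log(f"human selected option {human_choice}")
    return routes[human_choice]


if __name__ == "__main__":
    options = propose_routes("home", "airport")
    print(choose_route(options, human_choice=1))
    for stamp, event in audit_trail:
        print(stamp, event)
```

The same skeleton describes his camera example: the automation sets focus and exposure (the proposal), while composition and the decisive moment stay with the photographer (the choice).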

“We must respect and value the remarkable capabilities that humans have for individual insight, team coordination and community building and seek to build technologies that support human self-efficacy, creativity, responsibility and social connectedness. Some advocates of artificial intelligence promote the goal of human-like computers that match or exceed the full range of human abilities from thinking to consciousness. This vision attracts journalists who are eager to write about humanoid robots and contests between humans and computers. I consider these scenarios as misleading and counterproductive, diverting resources and effort from meaningful projects that amplify, augment, empower and enhance human performance. 

“The past few years we have seen news stories about tragic failures of automated systems. The two Boeing 737 MAX crashes are a complex story, but one important aspect was the designers’ belief that they could create a fully autonomous system that was so reliable that the pilots were not even informed of its presence or activation. There was no obvious visual display to inform the pilots of the status, nor was there a control panel that would guide them to turn off the autonomous system. The lesson is that the excessive belief in machine autonomy can lead to deadly outcomes. When rapid performance is needed, high levels of automation are appropriate, but so are high levels of human independent oversight to track performance over the long-term and investigate failures.

“We can accelerate the wider, successful adoption of human-centered AI. It will take a long time to produce the changes that I envision, but our collective goals should be to reduce the time from 50 to 15 years. We can all begin by changing the terms and metaphors we use. Fresh sets of guidelines for writing about AI are emerging from several sources, but here is my draft offering:

  1. Clarify human initiative and control
  2. Give people credit for accomplishments
  3. Emphasize that computers are different from people
  4. Remember that people use technology to accomplish goals
  5. Recognize that human-like physical robots may be misleading
  6. Avoid using human verbs to describe computers
  7. Be aware that metaphors matter
  8. Clarify that people are responsible for use of technology.”

Enshrine it in legislation: Everyone should have the right to challenge the outcome of an autonomous decision

John Sniadowski, a systems architect based in the UK, said, “Our lack of agency has arrived. I suggest that the bias toward never challenging the machines is inevitable. Decision systems are generally based on opaque formulas with targeted outcomes [that] usually serve only the best interests of the AIs’ vendors. In most cases, the ultimate outcomes from these automated, data-based decisions cannot be challenged and are, in fact, rarely challenged because of the human belief that the system is correct often enough to be followed.

“Consider the financial industry today, in 2022. Lending decisions are based on smart systems that are nearly impossible to challenge. In addition, AI is frequently trained on datasets that are biased and may contain hidden anomalies that significantly alter the decision process. The vast majority of the population will be convinced by marketing, propaganda or other opinion-bending messages that these systems are right and any individual’s opinion is wrong. We already see that sort of behaviour in human-based systems operated by Big Pharma, where millions/billions of revenue could be lost if a significant outcome of a product/decision is successfully challenged.

“Life-and-death decisions should always require responsible human input, and they should have a set of criteria that the AI system must present in arriving at its decision that is transparent and capable of human interpretation. This should be enshrined in legislation with punitive consequences for vendors that do not comply with decision transparency.

“I would strongly suggest that this should be incorporated in a global human rights framework, that all humans have the right to challenge the outcome of an autonomous decision. This should be part of the UN charter and put in place as soon as possible.

“Given what we are experiencing on social media, where people can become captured by ‘echo chambers,’ there is a significant danger that AI and autonomous decision processes will exacerbate a broad range of societal inequalities. The vast array of data metrics now harvested from individuals’ internet activities will continue to categorize each person more and more toward an inescapable stereotype without the individual even being aware of the label unfairly applied to them.

“Companies will harvest information from ‘smart cities,’ and AI will build dossiers on each citizen that will be applied for a wide variety of decisions about a person completely without their personal consent. This is very dangerous, and we are already seeing this capability being subverted by some governments to tighten their authoritarian grip on their population.”

Computational systems should be designed to co-evolve, co-mature or co-develop with humans

Clifford Lynch, executive director of the Coalition for Networked Information, wrote, “As I think about the prospects for human agency and how this compares to delegation to computational decision-making in 2035, I’m struck by a number of issues.

1) “As far as I know, we’ve made little progress in genuine partnership and collaboration between computational/AI systems and humans. This seems to be presented as a binary choice: either hand off to the AI, or the human retains control of everything. Some examples: AI systems don’t seem to be able to continuously learn what you already know, information that you have already seen and evaluated, and how to integrate this knowledge into future recommendations it may offer. One really good example of this: Car navigation systems seem unable to learn navigational/routing preferences of drivers in areas very close to their homes or offices.

“Another example: Recommender systems often seem unable to integrate prior history when suggesting things. As far as I can tell, stunningly little real work has been done on computational systems that co-evolve, co-mature or co-develop with humans; this has been largely left to science fiction writers. As an additional issue, some of the research here involves time horizons that don’t fit conveniently with relatively short-term grant funding. Without a lot more progress here, we’ll continue to tend to frame the issue as ‘delegate or don’t delegate agency to computational systems.’

2) “I wonder about the commercial incentives that might exist in maintaining agency as a binary choice (retain or delegate) rather than seeking the cultivation of collaborations between humans and machines. There are many situations when delegation is the easy choice because making a human decision will take a lot of time and have to encompass a lot of complex data; combine this with opaque decision-making by the algorithms once delegation has been made, and this may well advance commercial (or governmental) objectives.

3) “There are staggering commercial incentives to delegate decision-making to computational agents (including really stupid agents like chatbots) in areas such as customer service, billing, fraud detection and the like, and companies are already doing this at massive scale. Most of these systems are really, really bad. Bluntly, the companies deploying these mostly couldn’t care less about errors or misjudgments by these computational agents unless they result in a high-visibility public relations blowup. There’s every reason to expect these trends to continue and to get worse rather than better. This represents a really huge abdication of human agency that’s already far advanced.

4) “There are situations where there’s a very strong motivation to default to the machines. Human decision-makers may be overworked, overwhelmed and pressed for time. Not delegating (or delegating and overriding or collaborating) may be risky. There are also still widespread general public beliefs that computational decisions are less biased or more accurate than human decision-making (though there’s been a lot of good research suggesting this is frequently not true). Good examples here: judges going against sentencing or bail recommendations; doctors going against diagnostic/treatment recommenders (often created by health care systems or insurers trying to minimize costs). These overrides can happen, but often only when someone is important enough or persuasive enough to demand and gain the human attention, risk-acceptance and commitment to override the easy default delegation to the AI. Put another way, when they want to, the wealthy and powerful (and perhaps the tech-savvy as well) will have a much better chance of appealing or overriding computational decision-making that’s increasingly embedded in the processes of our society.

5) “As an extension of the last point, there are situations where human decision-makers are legitimately overwhelmed or don’t have time, when they cannot react quickly enough, and algorithmic triage and decision-making must be the norm. We do not understand how to define and agree on these situations. Relatively easy cases include emergency triage of various kinds, such as power grid failures or natural disasters. Computationally directed trading in financial markets might be a middle ground. More challenging cases might include response to minimal warning nuclear strikes (hypersonic vehicles, orbital strikes, close offshore cruise missiles, etc.) where there’s a very short time window to launch a major ‘use it or lose it’ counterforce strike. One can also construct similar cyberwar strike scenarios.

6) “Related to the previous point: As a society we need to agree on how to decide when agency delegation is high-stakes or low-stakes. Also, we need to try to agree on the extent to which we are comfortable delegating to computational entities. For example, can we identify domains where there is a high variance between human and computational predictions/recommendations, hence we should probably be nervous about such delegation?

7) “We haven’t considered augmented humans (assuming that they exist in 2035 in a meaningful way) and how they fit into the picture of computational decision-making, humans and perhaps collaborative middle grounds. This could be important.

8) “I have been tracking the construction of systems that can support limited delegation of decision-making with great fascination. These may represent an important way forward in some domains. Good examples here are AI/ML-based systems that can explore a parameter space (optimize a material for these requirements, running experiments as necessary and evaluating the resultant data); often these are coupled with robotics that allow the computational system to schedule and run the experiments. I think these are going to be very important for science and engineering, and perhaps other disciplines, in the coming years; they may also become important in commercial spheres. The key issue here is to track how specific the goals (and perhaps suggested or directed methodologies) need to be to make these arrangements successful. It’s clear that there are similar systems being deployed in the financial markets, though it’s more difficult to find information about experiences and plans for these. And it’s anybody’s guess how sectors like the intelligence community are using these approaches.”

‘Humans must not be left to feel helpless and hopeless’; they must be able to be owners of their own identities and correct errors in a timely fashion

Amali De Silva-Mitchell, founding coordinator of the UN Internet Governance Forum Dynamic Coalition on Data-Driven Health Technologies, said, “The true, intuitive human decision-making capabilities of technologies are still in their infancy. By 2035 we will have hopefully opened most of the AI developers’ minds to the issues of data quality, trojan data, data warps and oceans, ethics, standards, values, and so forth, that come in a variety of shapes and sizes across segments of society.

“The bias introduced by using the data from one segment of society on another can be an issue for automated profiling. Using current statistical techniques does not make for strong foundations for universal decision-making; it only allows for normalized decision-making or even groupthink.

  • Exceptional issues, small populations and unusual facts will be marginalized, and perhaps even excluded, which is an issue for risk management.
  • Data corrections will lag, affecting data quality, if corrections are made at all. Misinformation and problems with semantics and profiling will result.
  • Data translations, such as from a holographic source into a 2D format, may cause illusions and mis-profiling.
  • Quantum technologies may spin data in manners still not observed.
  • An ethical approach to data cleaning may cost money that technology maintenance budgets cannot accommodate.
  • The movement of data from one data system to another must be managed with care for authenticity, ethics, standards and so forth.

“Lots of caveats have to be made, and these caveats must be made transparent to the user; however, there are some standardized, commonly identified processes that can be very well served by automated decision-making, for example, for repetitive practices that have good procedures or standards already in place. In some instances, automated decision-making may be the only procedure available, say for a remote location – including outer space. What is critical is that human attention to detail, transparency and continuous betterment are ever-present every step of the way.

“We may be forced to enter into the use of an AI before an application is fully ready for service due to the need to provide service at speed, fill a gap, and so forth. In these cases, it is especially important that human oversight is ever-present and that members of the public – everyday users – have the opportunity to provide feedback or raise concerns without reprimand.

“Humans must not feel helpless and hopeless with no opportunity for contact with a person when it is necessary. This is something that some developers of bots – for instance – have not taken into account. Humans must also have the opportunity to be the owners of their own identities and be able to check them if they wish to and get them corrected within a reasonable amount of time.

“Assumptions must not be made of persons, and the ‘reasonable person’ concept must always be maintained. Good Samaritans must also have a way to weigh in, as compassion for humans must be at the core of any technology.”

‘Delegating autonomy can itself be a proper use of autonomy’

David Weinberger, senior researcher at Harvard’s Berkman Klein Center for Internet & Society, commented, “Machine learning models’ interests can and should be regulated and held up for public debate. That could alter our idea of our own autonomy, potentially in very constructive ways, leading us to assume that our own interests likewise affect more than our own selves and our own will. But this assumes that regulators and the public will do their jobs of making machine learning models’ interests – their objective functions – public objects subject to public control.

“Autonomous selves have interests that they serve. Those interests have to be made entirely explicit and measurable when training a machine learning model; they are objects of discussion, debate and negotiation. That adds a layer of clarity that is often (usually?) absent from autonomous human agents.

“There is certainly a case for believing that humans will indeed be in control of making important decisions in the year 2035. I see humans easily retaining decision-making control over things like who to marry, what career to pursue, whether to buy or rent a home, whether to have children, which college to go to (if any), and so forth. Each of those decisions may be aided by machine learning, but I see no reason to think that machine learning systems will actually make those decisions for us.

“Even less-important personal decisions are unlikely to be made for us. For example, if an online dating app’s ML models get good enough that the app racks up a truly impressive set of stats for dates that turn into marriages, when it suggests to you that so-and-so would be a good match, you’ll still feel free to reject the suggestion. Or so I assume.

“But not all important decisions are major decisions. For example, many of us already drive cars that slam on the brakes when they detect an obstacle in the road. They do not ask us if that’s OK; they just bring the car to a very rapid halt. That’s a life-or-death ‘decision’ that most of us want our cars to make because the car’s sensors plus algorithms can correct for human error and the slowness of our organic reactions. And once cars are networked while on the road, they may take actions based on information not available to their human drivers, and so long as those actions save lives, decrease travel times, and/or lower environmental impacts, many if not most of us will be OK with giving up a human autonomy based on insufficient information.

“But an uninformed or capricious autonomy has long been understood to be a false autonomy: In such cases we are the puppets of ignorance or short-sighted will. Delegating autonomy can itself be a proper use of autonomy. In short, autonomy is overrated. The same sort of delegation of autonomy will likely occur far more broadly. If smart thermostats keep us warm, save us money and decrease our carbon footprints, we will delegate to them the task of setting our house’s temperature. In a sense, we already do that when we set an old-fashioned thermostat, don’t we?

“But there are more difficult cases. For example, machine learning models may well get better at diagnosing particular diseases than human doctors are. Some doctors may well want to occasionally override those diagnoses for reasons they cannot quite express: ‘I’ve been reading biopsy scans for 30 years, and I don’t care what the machine says, that does not look cancerous to me!’ As the machines get more and more accurate, however, ‘rebellious’ doctors will run the risk of being sued if they’re wrong and the machine was right. This may well intimidate doctors, preventing them from using their experience to contradict the output from the machine learning system. Whether this abrogation of autonomy is overall a good thing or not remains to be seen.

“Finally, but far from least important, is to ask what this will mean for people who lack the privileges required to exercise autonomy. We know already that machine learning models used to suggest jail sentences and conditions of bail are highly susceptible to bias. The decisions made by machine learning that affect the marginalized are likely to be a) less accurate because of the relative paucity of data about the marginalized most affected by them; b) less attuned to their needs because of their absence from the rooms where decisions about what constitutes a successful model are made; and c) harder to contest, because the marginalized have less power to get redress for bad decisions made by those models. Does this mean that the ‘autonomy gap’ will increase as machine learning’s sway increases? Quite possibly. But it’s hard to be certain, because while machine learning models can amplify societal biases, they can also remove some elements of those biases. Also, maybe by 2035 we will learn to be less uncaring about those whose lives are harder than our own. But that’s a real longshot.

“As for less-direct impacts of this delegation of autonomy, on the one hand, we’re used to delegating our autonomy to machines. I have been using cruise control for decades because it’s better at maintaining a constant speed than I am. Now that it’s using machine learning, I need to intervene less often. Yay!

“But as we delegate higher-order decisions to the machines, we may start to reassess the virtue of autonomy. This is both because we’ll have more successful experience with that delegation and because we’ll perhaps come to reassess the concept of autonomy itself.

“Autonomy posits an agent sitting astride a set of facts and functions. That agent formulates a desire and then implements. Go, autonomy! But this is a pretty corrupt concept. For one thing, we don’t input information that (if we’re rational) determines our decision. Rather, when in the process of making a decision we decide which information to credit and how to weigh it. That’s exactly what machine learning algorithms do with data when constructing a model.”

‘The public and Big Tech must learn how to build equity into AI and know what levers to pull to assure that it works for the good of humanity’

Kathryn Bouskill, anthropologist and AI expert at the RAND Corporation, said, “Looking ahead, humanity will be challenged to redefine and reimagine itself. It must consider the unprecedented social and ethical responsibilities that the new speed of change is ushering into our lives – including crucial issues being raised by the spread of AI.

“The number of places in which individuals have agency and can take control in this era of swift technological speed is dwindling. Hitting the brakes is not an option. When life happens quickly, it can feel difficult to process change, create a purpose, hold our social ties together and feel a sense of place. This kind of uncertainty can induce anxiety, and anxiety can lead to isolationism, protectionism, fear, gridlock and lack of direction. …

“Is AI going to completely displace human autonomy? We may forget that humanity still has the opportunity to choose what is being developed. That can still be our decision to make. Most people are just passively watching the technology continue to rapidly roll out without being actively engaged as much as they should be with it. For now, I’m leaning toward the optimistic view that human autonomy will prevail. However, this requires the public implementation of educational components, so the black-box aspects of AI are explored and understood by more people. And the public and Big Tech must learn how to build equity into AI and know what levers to pull to assure that it works for the good of humanity. Smart regulation and robust data protection are also critically important.

“The greatest resource in the human toolkit is our ability to cooperate and creatively adapt to or change our surroundings. It will take a concerted effort across multiple stakeholders – citizens, consumers, employers, voters, tech developers and policymakers – to collectively devote attention to vetting and safeguarding technologies of the future to make the world safer.”

‘Where’s the incentive for tech companies to make design choices in favor of human agency?’

Claudia L’Amoreaux, principal at Learning Conversations, a global internet consultancy, and former director of education programs at Linden Lab (developers of Second Life), wrote, “The two words that stand out in your top-level question are ‘designed’ and ‘easily.’ In designing for human agency and decision-making, we do have a choice. Looking at how the EU handled data protection with the GDPR privacy legislation vs. how the U.S. has pretty much continued business as usual shows that we do have a choice. …

“However, I am extremely skeptical that choices will be made in favor of human agency here in the U.S. Where’s the incentive? As long as tech companies’ profits are based on separating users from as much of their personal data as possible – for ad targeting, self-serving recommendations that maximize engagement, and resale – this situation will not improve. Broader, more sophisticated applications of AI will only accelerate what is already happening today.

“And as regulations around privacy and data extraction do tighten in the U.S., however slightly, companies in the AI industry are exploiting and will continue to exploit the data of people in the less-developed world, as Karen Hao lays out so well in the AI Colonialism series in MIT Technology Review.

“I’ll share two examples that fuel my skepticism about human agency and decision-making. The first example regards the UK Biobank’s transfer of genetic data of half a million UK citizens in a biomedical database to China (reported in The Guardian). The sharing of sensitive genetic data in the UK Biobank project, launched as an ‘open science project’ in 2016, is based on a relationship of trust that is eroding as West/China relations transform. Sharing is not reciprocal. Motives aren’t parallel. The 500,000 humans with their DNA data in the Biobank are asked to trust that researchers will do a good job ‘managing risk.’ Is their agency and decision-making being prioritized in the conversations taking place? I don’t think so.

“A second example is the massive surveillance model employed by China that they are now exporting to countries that want to follow in their footsteps. With large infrastructure projects underway already through China’s Belt and Road Initiative, surveillance tech has become an add-on.

“Regarding the use of the term ‘easily’ in your question – will people ‘easily be in control of most tech-aided decision-making that is relevant to their lives’ – it’s not looking good for 2035.

“What key decisions will be mostly automated? To start to understand what key decisions will be mostly automated, we can look at what’s mostly automated today (and how quickly this has occurred). Let’s look at two related examples – higher education and hiring.

“Many universities have moved to automating the college admissions process for a variety of reasons. Increasing revenue is an obvious one, but some schools claim the move helps reduce bias in the admissions process. The problem with this, which has become very clear across many domains today, is that it all depends on the datasets used. Introducing AI can have the opposite effect, amplifying bias and widening the equity gap. Students most at risk for paying the price of increased automation in the admissions process are lower-income and marginalized students.

“Once students do make it into a university, they are likely to encounter predictive analytics tools making academic decisions about their futures that can narrow their options. The education podcast by APM Reports did a good piece on this, titled ‘Under a Watchful Eye.’ While most elite universities are keeping a hands-on approach for now, colleges that serve the majority of students are adopting predictive analytics to point students to what the universities deem a successful path from their perspective: continued tuition and graduation. This approach benefits the schools but not necessarily the students.

“Don’t get me wrong – identifying students at risk for failing early on and offering support and options to help them graduate is a good thing. But if the school’s priority is to ensure continuing tuition payments and maximize graduation rates, this can actually co-opt student agency. Once again, predictive analytics relies on historical data, and we know historical data can carry extensive baggage from long-term, systemic bias. Students of color and low-income students can find themselves pushed in different directions than they set out … when an alternative approach that prioritizes student agency might help them actually succeed on the path of their choice. In this case, predictive analytics helps the schools maintain their rankings based on graduation rates but sacrifices student preferences.

“Then there’s hiring. Hiring is already so heavily automated that job seekers are advised to redesign their CVs to be read by the algorithms. Otherwise, their application will never even be seen by human eyes. These are just a few examples.

“What key decisions should require direct human input? In the military, the use of autonomous lethal weapons systems should be banned. In the 2021 Reith Lecture series ‘Living With Artificial Intelligence,’ Lecture 2: AI in Warfare, Stuart Russell, founder of the Center for Human-Compatible Artificial Intelligence at the University of California, Berkeley, explains, ‘At the moment we find ourselves at an unstable impasse, unstable because the technology is accelerating … we have about 30 countries who are in favor of a ban, as well as the EU parliament, the United Nations, the non-aligned movement, hundreds of civil society organizations, and according to recent polls, the great majority of the public all over the world.’ 

“I stand with Russell – who supports a ban – along with leaders in 30 countries and the majority of people around the world. But as Russell says in his Reith Lecture … 

‘On the other side, we have the American and Russian governments, supported to some extent by Britain, Israel and Australia, arguing that a ban is unnecessary. … Diplomats from both the UK and Russia express grave concern that banning autonomous weapons would seriously restrict civilian AI research. … I’ve not heard this concern among civilian AI researchers. Biology and chemistry seem to be humming along, despite bans on biological and chemical weapons.’

“The U.S. and Russian positions do not speak well for the future of human agency and decision-making, although Russell said he is encouraged by decisions humanity has made in the past to ban biological and chemical weapons, and landmines. It is not impossible, but we have a long way to go to ban autonomous lethal weapon systems. Am I encouraged? No. Hopeful? Yes.

“Because of a long history of structural racism and its encoding in the major databases used for training AI systems (e.g., ImageNet), the justice system, policing, hiring, banking (in particular, credit and loans), real estate and mortgages, and college applications and acceptance all involve life-changing decisions that should require direct human input. And in the medical domain, considering possible life-and-death decisions, we’ve seen that image-identification systems for skin cancer that have been trained predominantly on white skin may misidentify skin cancers. This is just one example in health care. Until we rectify the inherent problems with the training sets that are central to AI solutions, key decisions about life and death should require direct human input.

“How might the broadening and accelerating rollout of tech-abetted, often autonomous decision-making change human society? Because this is already happening now, it’s not hard to see how it will be and is changing human society. For one, it is creating two distinct classes with a huge gap in between – a techno-savvy class, and let’s call it a techno-naive class. Techno-savvy humans understand the basics of AI, algorithms, etc. They have the knowledge and the ability to protect their privacy (as much as is possible), opt out, assess validity and sources of content, detect fakes or at least understand that fakes are proliferating, etc. Techno-naive humans are currently and will be easily duped and taken advantage of – for their data, for their eyeballs and engagement metrics and for political gain by the unscrupulous groups among the techno-savvy.

“And whether savvy or naive, people (especially people of color) will find themselves at the mercy of and in the crosshairs of autonomous decision-making – e.g., misidentification, biases embedded in datasets, being locked out of jobs, education opportunities and loans, being digitally red-lined.

“The uses of AI are so vast already, with so little scrutiny. The public’s knowledge of AI is so minimal, agency is already so eroded, people are too willing to trade agency for convenience, most not even realizing that they are making a trade. Sure, I can find out what data Facebook, etc., has on me, but how many people are going to 1) take the time to do it, 2) even know that they can and 3) understand how it all works?

“I’ve made it clear I think we have serious work to do at international and national levels to protect privacy, human agency, access and equity.

“But we also need to make serious efforts in 1) how we teach young people to regard these technologies and 2) in how we put these technologies to work in the pre-K-12 education systems and higher education. Education will play a major role in future outcomes around technology, decision-making and human agency.

“I am encouraged by the efforts of organizations like UNICEF’s AI for Children project, the Harvard Berkman Klein Center’s Youth and AI project, MIT’s Responsible AI for Social Empowerment and Education (RAISE) project, to name a few. I think these projects are exemplary in prioritizing human agency and decision-making. I especially appreciate how they go out of their way to include youth voices.

“The next horizon is already upon us in education. The choices we make in AI-enabled teaching and learning will play a tremendous role in future outcomes around human agency and decision-making. China is currently pushing hard on AI-enabled teaching and adaptive learning with a focus toward helping students perform on standardized testing. And school systems in the U.S. are looking at their success.

“I understand and appreciate the role for adaptive learning systems like Squirrel AI, a dominant tutoring system in China today. But I lean in the direction of educators like Harvard professor Chris Dede, an early innovator of immersive learning, who emphasizes the necessity for an education system that prioritizes creativity and innovation, directed by student interest and passion. To become adults who value human agency and decision-making, young people need to experience an educational system that embodies and models those values. They need opportunities to develop AI literacy that presents a much wider lens than coding – offering opportunities to explore and engage with algorithmic justice, biases, ethics, and especially building and testing AI models themselves, from a young age.

“Despite my rather bleak answer of ‘No’ to the primary question, this is where I find encouragement and the possibility of ‘Yes’ for the year 2035. The children in kindergarten today who are training and building robots with constructivist platforms like Cognimates will be entering college and/or the workforce in 2035.

“In the 2019 post ‘Will AI really transform education?’ in The Hechinger Report, writer Caroline Preston reports on a conference on AI in education that she attended at Teachers College, Columbia University. Stephania Druga, who created the Cognimates platform, spoke at the conference, and Caroline summarized: ‘In her evaluations of Cognimates, she found that students who gained the deepest understanding of AI weren’t those who spent the most time coding; rather, they were the students who spent the most time talking about the process with their peers.’”

In many industries ‘the speed of technological change is far beyond the scope, scale and speed of regulatory change’

James Hendler, director of the Institute for Data Exploration and Applications and professor of computer, web and cognitive sciences at Rensselaer Polytechnic Institute, commented, “While I would hope we will get better and better at keeping humans in control in many cases, there are three reasons I think we may not get there by 2035 – two positive and one negative:

“Positive 1 – There are a few cases where machines are superior and should be acknowledged as such. This is not something new; for example, there are very few businesses that do payrolls by hand – automated payroll systems (which don’t need AI technology, I note) have been around a long time and have become trustworthy and relied upon. There will be some cases with existing and emerging AI technologies where this is also true – the key will be a need to identify which these are and how to guarantee trustworthiness.

“Positive 2 – There will be cases where the lack of trained humans will require greater reliance on machine decision-making. As a case in point, there are some single hospitals in the U.S. that have more X-ray analysts than some entire nations in the global south. As machines get more reliable at this task, which is happening at a rapid rate, deploying such systems would not be as good as human-machine teaming (which will happen in the wealthier countries that do have the trained personnel to be in the loop) but would certainly be way better than nothing. A good solution that could improve health care worldwide, in certain cases, would be worth deploying (with care) even if it does require trusting machines in ways we otherwise might not.

“The Negative – The main thing holding back the deployment of autonomous technology in many cases has more to do with policy and litigation than technology. For example, many of the current autonomous driving systems can, in certain situations, drive better than humans – and with improvements to roads and such, at least certain kinds of vehicles could be turned over to autonomous systems part, if not all, of the time. However, deploying such systems carries huge risk to the companies doing so, due to long-established rules of liability in automobile-related regulation – which will keep humans in the loop until the technology is provably superior and the rules of the road (if you’ll pardon the pun) are more clearly developed. Thus, these companies opt to keep humans in the loop for legal, rather than technical, reasons. On the other hand, there are many industries where the speed of technological change is far beyond the scope, scale and speed of regulatory change (face-recognition systems deployment vs. regulation is an example). The companies developing technologies in these less-regulated areas do not have the restrictions on taking humans out of the loop, whether it is a good idea or not, and the economic rewards are still, unfortunately, on the side of autonomous deployment.

“All of that said, in general, as Alice Mulvehill and I argue in the book ‘Social Machines: The Coming Collision of Artificial Intelligence, Social Networking and Humanity’ and as other authors of similar recent books have argued, in most cases keeping humans in the loop (due to the differences between human and computer capabilities) is still necessary for the foreseeable future. I do consider my prediction a pessimistic one – it would be better to see a world where humans remain in control, but in many areas the speed of technical deployment, coupled with the lack of regulation, may hinder this from happening. This could have potentially disastrous consequences if allowed in high-consequence systems (such as military weaponry, political influence, privacy control – or the lack thereof – etc.).

“Also, investment in human development would be a wonderful thing to see (for example, in the case of X-ray analysts, training more humans to work with automated systems would be preferable to simply deploying the systems), but right now that does not seem to be a political reality in most of the world.”

We are prone to offloading our tasks and responsibilities to machines, resulting in ‘the engineering paradigm overriding the ethical one’

Charles Ess, professor emeritus of digital ethics at the University of Oslo, Norway, wrote, “These past few years of an ‘AI spring’ have distinguished themselves from earlier ones, at least among some authors and projects, as they are accompanied by considerably greater modesty and recognition of the limits perhaps intrinsic to what AI and machine learning (ML) systems are capable of. Notably important resources along these lines include the work of Katharina Zweig, e.g., ‘Awkward Intelligence: Where AI Goes Wrong, Why It Matters, and What We Can Do About It’ (MIT Press, 2022).

“On the other hand, I still find that in most of the emerging literatures in these domains – both from the corporations that largely drive the development of AI/ML systems as well as from research accounts in the more technical literature – there remains a fundamental failure to understand the complexities of human practices of decision-making and most especially judgment. Some AI ethicists point out that a particular form of human judgment – what Aristotle called phronesis and what Kant called reflective judgment – is what comes into play when we are faced with the difficult choices in grey areas. In particular, fine-grained contexts and especially novel ethical dilemmas usually implicate several possible ethical norms, values, principles, etc.

“In contrast with determinative judgments that proceed from a given norm in a deductive, if not algorithmic, fashion to conclude with a largely unambiguous and more or less final ethical response – one of the first tasks of reflective judgment is to struggle to discern just which ethical norms, principles, values indeed are relevant to a specific case, and, in the event of (all but inevitable) conflict, which principles, norms, etc., override the others. As many of us argue, these reflective processes are not computationally tractable for a series of reasons – starting as they draw from tacit, embodied forms of knowledge and experience over our lifetimes. As well, these processes are deeply relational – i.e., they draw on our collective experience, as exemplified in our usually having to talk these matters through with others in order to arrive at a judgment.

“There is, then, a fundamental difference between machine-based ‘decision-making’ and human-based ethical reflection – but this difference seems largely unknown in the larger communities involved here. In particular, there is much in engineering cultures that sets up ‘decision-making’ as more or less deductive problem-solving – but this approach simply cannot take on board the difficulties, ambiguities and uncertainties intrinsic to human reflective judgment.

“Failing to recognize these fundamental differences then results in the engineering paradigm overriding the ethical one. Catastrophe is sure to result – as it already has, e.g., there is discussion that the financial crisis of 2008 in part rested on leaving ‘judgments’ as to credit-worthiness increasingly to machine-based decision-making, which proved to be fatally flawed in too many cases.

“As many of us have argued (e.g., Mireille Hildebrandt, ‘Smart Technologies and the End(s) of Law,’ 2015, as an early example, along with more recent figures such as Virginia Dignum, who directs the large Wallenberg Foundation’s projects on humanities and social science approaches to AI, as well as Zweig, among others), to leave ethical judgments in particular, and similar sorts of judgments in the domains of law, credit-worthiness, parole considerations (i.e., the (in)famous COMPAS system), ‘preemptive policing’ and so on, to AI/ML processes is to abdicate a central human capacity and responsibility – and this to systems that, no matter how further refined they may be with additional ML training, etc., are in principle incapable of such careful reflection.

“We are very often too prone to off-loading our tasks and responsibilities to our machineries – especially when the tasks are difficult, as reflective judgment always is. And in this case, failure to recognize in the first place just what it is that we’re offloading to the machines makes the temptations and drives to do so doubly pernicious and likely.

“Like the characters of ‘Brave New World’ who have forgotten what ‘freedom’ means, and so don’t know what they have lost, failing to take on board the deep differences between reflective forms of judgment and AI/ML decision-making techniques – i.e., forgetting about the former, if we were ever clear about it in the first place – likewise means we risk losing the practice and capacity of reflective judgment as we increasingly rely on AI/ML techniques, and not knowing just what it is we have lost in the bargain.

“What key decisions should require direct human input? This category would include any decision that directly affects the freedom and quality of life of a human being. I don’t mind AI/ML driving the advertising and recommendations that come across my channels – some of which is indeed useful and interesting. I am deeply concerned that offloading ethical and legal judgments to AI/ML threatens to rob us – perhaps permanently – of capacities that are central to human freedom and modern law as well as modern democracy. The resulting dystopia may not be so harsh as we see unfolding in the Chinese Social Credit Systems.

“‘Westerners’ may be more or less happy consumers, content with machine-driven options defining their work, lives, relationships, etc. But from the standpoint of Western traditions, starting with Antigone and then Socrates through the democratic and emancipatory movements that mark especially the 18th-20th centuries, that emphasize the central importance of human freedom over against superior force and authority – including the force and authority of unquestioned assumptions and rules that must always be obeyed, or else – such lives, however pleasant, would fail to realize our best and fullest possibilities as human beings, starting with self-determination.”

To build autonomous systems, you must handle difficult edge cases and account for bad actors, yet you can’t account for every possible danger

Neil Davies, co-founder of Predictable Network Solutions and a pioneer of the committee that worked on the UK’s initial networking developments, said, “As someone who has designed and taken to production large-scale systems, I am abundantly aware that the feasibility of executing fully autonomous systems is, for all practical purposes, zero. The main reason is the ontological/epistemological chasm: People forget that machines (and the systems they create) can only ‘know’ what they have ‘experienced’ – the things they have been exposed to. They, by definition, cannot reach out to a wider information base and they can’t create ‘ontological’ framing. And that framing is an essential way in which humans – and their societies – make decisions.

“I can see great use for machine-learning tools that look over the shoulders of experts and say, ‘Have you considered X, Y or Z as a solution to your problem?’ But fully autonomous systems cannot be made that detect problems automatically and deal with them automatically. You have to have human beings make certain types of decisions or penalize certain bad behavior. Often behaviour can only be considered bad when intent is included – machines can’t deal in intent.

“The problem is that if you try to build a machine-based system with autonomy, you have to handle not only the sunny-day cases but also the edge cases. There will inevitably be adversarial actors endeavoring to attack individuals, groups or society by using the autonomous system to be nasty. It’s very hard to account for all the dangerous things that might happen and all the misuses that might occur.

“The systems are not god-like, and they don’t know the whole universe of possible uses of the systems. There’s an incompleteness issue. That incompleteness makes these systems no longer autonomous. Humans have to get involved.

“The common problem we’ve found is that it is not feasible to automate everything. The system eventually has to say, ‘I can’t make a sensible/reasoned decision’ and it will need to seek guiding human input.

“One example: I work with companies trying to build blockchain-y systems. When designers start reasoning about what to build, they find that systems of formal rules can’t handle the corner cases. Even when they build systems they believe to be stable – things they hope can’t be gamed – they still find that runs on the bank can’t be ruled out and can’t easily be solved by creating more rules. Clever bad actors can still collapse the system. Even if you build incentives to encourage people not to do that, true enemies of the system don’t care about incentives and being ‘rational actors.’ They’ll attack anyway. If they want to get rid of you, they’ll do whatever it takes, no matter how irrational it seems.

“The more autonomous you make the system, the more you open it up to interactions with rogue actors who can drive the systems into bad places by hoodwinking the system. Bad actors can collude to make the system crash the stock market, cause you to be diagnosed with the wrong disease, make autonomous cars crash. Think of ‘dieselgate,’ where people colluded by hacking a software system to allow a company to cheat on reporting auto emissions. Sometimes, it doesn’t take much to foul up the system. There are frightening examples of how few pixels you need to change to make a driverless car’s navigation system misread a stop sign or speed-limit sign.

“Another example of a problem: Even if you build a system where the rules are working well by reading the same environment and making the same decisions, you can run into a ‘thundering herd problem.’ Say, everyone gets rerouted around a traffic problem to the same side streets. That doesn’t help anyone.

“In the end, you don’t want to give systems autonomy when it comes to life-and-death decisions. You want accountability. If a battlefield commander decides it’s necessary to put troops at risk for a goal, you want to be able to court martial the commander if it’s the wrong choice for the wrong reasons. If an algorithm has made that catastrophic command decision, where do you go to get justice?

“Finally, I am pessimistic about the future of wide-scale, ubiquitous, autonomous systems because no one is learning from the collective mistakes. One of the enduring problems is that many big companies (as well as others, such as researchers and regulators) do not disclose what didn’t work. Mistakes get buried and failures aren’t shared, yet sharing them is a prerequisite for people to learn from them.

“In the large, the same mistakes get made over and over as the collective experience and knowledge base is just not there (as, say, would be the case in the aircraft industry).

“There is a potential light at the end of this tunnel: the insurance industry. Insurers will have a lot to say about how autonomous decision-making rolls out. Will they underwrite any of these things? Clearly not where an autonomous system can be arbitrarily forced into a failure mode. Underwriting abhors correlated risks; the resulting claims are an existential risk to insurers’ business.

“The battle over who holds the residual risks extant in autonomous systems is already being played out between the judicial, commercial, insurance and political spheres. Beware the pressure for political expediency, either dismissing or capping the consequences of failure. It may well be that the insurance industry is your greatest ally. Their need to quantify the residual risks for them to underwrite could be the driver that forces the whole industry to face up to issues discussed here.”

Democratic processes might determine how these systems make decisions

Gillian Hadfield, professor of law and chair of the University of Toronto’s Institute for Technology and Society, said, “By 2035 I expect we will have exceedingly powerful AI systems available to us including some forms of artificial general intelligence. You asked for a ‘yes-no’ answer, although the accurate one is ‘either is possible and what we do today will determine which it is.’ If we succeed in developing the innovative regulatory regimes we will need – including new ideas about constitutions (power-controlling agreements), ownership of technology and access to technology by the public and regulators – then I believe we can build aligned AI that is responsive to human choice and agency. It is just a machine, after all, and we can decide how to build it. At the same time, it is important to recognize that we already live with powerful ‘artificially intelligent’ systems – markets, governments – and humans do not have abstract, ideal agency and choice within those systems.

“We live as collectives with collective decision-making and such highly decentralized decisions that constrain any individual’s options and paths. I expect we’ll see more automated decision-making in domains in which markets now make decisions – what to build, where to allocate resources and goods and services. Automated decision-making, assuming it is built to be respected and trusted by humans because it produces justifiable outcomes, could be used extensively in resolving claims and disputes. The major challenge is ensuring widespread support for decision-making; this is what democratic and rule-of-law processes are intended to do now. If machines become decision-makers, they need to be built in ways [that] earn that kind of respect and support from winners and losers in the decision.

“The version of the future in which decisions are automated on the basis of choices made by tech owners and developers alone (i.e., implementing the idea that a public services decision should be made solely on the basis of a calculation of an expert’s assessment of costs and benefits) is one in which some humans are deciding for others and reducing the equal dignity and respect that is foundational to open and peaceful societies. That’s a bleak future, and one on which the current tensions between democratic and autocratic governance shed light. I believe democracy is ultimately more stable, and that’s why I think powerful machines in 2035 will be built to integrate into and reflect democratic principles, not destroy them.”

We could overcome human shortcomings by having individuals relinquish their control of some decisions

Marcus Foth, professor of informatics at Queensland University of Technology, Australia, responded, “The question [of] whether humans will or will not be in control of important decision-making in the future is often judged on the basis of agency and control – with agency and control thought of as good and desirable. Compared to the individual realm of conventional decision-making, most humans come from a culture with a set of values where ‘being in control’ is a good thing. And there is merit in staying in control when it comes to the usual use cases and scenarios being described in tech utopias and dystopias.

“However, I want to raise a scenario where relinquishing individual control and agency can be desirable. Perhaps this is a philosophical/conceptual thought experiment and deemed unrealistic by many, but perhaps it is nonetheless useful as part of such futuring exercises. Arguably, the types of wicked problems humanity and the planet face are not a result of a lack of scientific ingenuity and inventiveness but of a lack of planetary governance that translates collective wisdom and insights into collective action. While we sometimes see positive examples such as with the rapid response to the COVID-19 pandemic, my overall assessment suggests there continue to be systemic failures in the systems of planetary governance. I argue that maintaining individual human agency and control as a value is partly to blame: Human comfort, safety, control and convenience always triumph over planetary well-being. Would relinquishing individual human control in favour of collective human control offer a more desirable future scenario of governance systems that serve not just the well-being of (some) humans but also forgotten humans ‘othered’ to the fringes of public attention, as well as more-than-humans and the planet?

“In essence, what I propose here is arguably nothing new: Many First Nations and indigenous peoples have learnt over millennia to act as a strong collective rather than a loose amalgamation of strong-minded individuals. Relationality – or as Mary Graham calls it, the ‘relational ethos’ – is a key feature of good governance, yet despite all the AI and tech progress we still have not been able to achieve a digitally supported system of governance that bestows adequate agency and control to those who have none: minority groups of both human and nonhuman/more-than-human beings.

“This is why I think having the typical humans (who are in control now) not being in (the same level of) control of important decision-making in the year 2035 is absolutely a good thing that we should aspire toward. The alternative I envisage is not the black-and-white opposite of handing control over to the machines, but a future scenario where technology can aid in restoring the relational ethos in governance that serves all humans and more-than-humans on this planet.”

The risk is that we will increasingly live our lives on autopilot; it can make us less able to change and less conscious of who we are

Gary Grossman, senior vice president and global lead of the Edelman AI Center of Excellence, previously with Tektronix, Waggener Edstrom and Hewlett-Packard, observed, “The U.S. National Security Commission on Artificial Intelligence concluded in a 2021 report to Congress that AI is ‘world-altering.’ AI is also mind-altering, as the AI-powered machine is now becoming the mind. This is an emerging reality of the 2020s. As a society, we are learning to lean on AI for so many things that we could become less inquisitive and more trusting of the information we are provided by AI-powered machines. In other words, we could already be in the process of outsourcing our thinking to machines and, as a result, losing a portion of our agency.

“Most AI applications are based on machine learning and deep learning neural networks that require large datasets. For consumer applications, this data is gleaned from personal choices, preferences and selections on everything from clothing and books to ideology. From this data, the applications find patterns, leading to informed predictions of what we would likely need or want or would find most interesting and engaging. Thus, the machines are providing us with many useful tools, such as recommendation engines and 24/7 chatbot support. Many of these apps appear useful – or, at worst, benign. However, we should be paying more attention to this not-so-subtle shift in our reliance on AI-powered apps. We already know they diminish our privacy. And if they also diminish our human agency, that could have serious consequences. For example, if we trust an app to find the fastest route between two places, we are likely to trust other apps, with the risk that we will increasingly live our lives on autopilot.

“The positive feedback loop presented by AI algorithms regurgitating our desires and preferences contributes to the information bubbles we already experience, reinforcing our existing views, adding to polarization by making us less open to different points of view and less able to change, and turning us into people we did not consciously intend to be. This is essentially the cybernetics of conformity, of the machine becoming the mind while abiding by its own internal algorithmic programming. In turn, this will make us – as individuals and as a society – simultaneously more predictable and more vulnerable to digital manipulation.

“Of course, it is not really AI that is doing this. The technology is simply a tool that can be used to achieve a desired end, whether to sell more shoes, persuade us toward a political ideology, control the temperature in our homes, or talk with whales. There is intent implied in its application. To maintain our agency, we must insist on an AI Bill of Rights as proposed by the U.S. Office of Science and Technology Policy. More than that, we need a regulatory framework soon that protects our personal data and ability to think for ourselves.”

We need new ways to enforce principles of digital self-determination in order to reclaim the agency and autonomy that have been lost in this era

Stefaan Verhulst, co-founder and director of the Data Program of the Governance Laboratory at New York University, wrote, “We need digital self-determination (DSD) to ensure humans are in the loop for data action in 2035. Humans need new ways to enforce principles of digital self-determination in order to reclaim agency and autonomy that have been lost in the current data era. Increased datafication, combined with advances in analytics and behavioral science applications, has reduced data subjects’ agency to determine not only how their data is used, but also how their attention and behavior are steered, and how decisions about them are made.

“These dangers are heightened when vulnerable populations, such as children or migrants, are involved. Together with a coalition of partners in the DSD Network, we are working to advance the principle of digital self-determination in order to bring humans back into the loop and empower data subjects. DSD, in general, confirms that a person’s data is an extension of themselves in cyberspace. We must consider how to give individuals or communities control over their digital selves, particularly those in marginalized communities whose information can be used to disenfranchise them and discriminate against them.

“The DSD principle extends beyond obtaining consent for data collection. DSD is centered on asymmetries in power and control among citizens, states, technology companies and relevant organizations. These imbalances distinguish between data subjects who willingly provide data and data holders who demand data. To account for these nuances, we center DSD on 1) agency (autonomy over data collection, data consciousness and data use); 2) choice and transparency (regarding who, how, and where data is accessed and used); and 3) participation (those empowered to formulate questions and access the data).

  • “The DSD principle should be present throughout the entire data lifecycle – from collection to collation to distribution. We can identify critical points where data needs to be collected for institutional actors to develop policy and products by mapping the data lifecycle experience for different groups, for example, for migrants, for children and others. To accomplish this we must examine policy, process and technology innovations at different stages of the data lifecycle.
  • “Policies must be adopted and enforced to ensure that the DSD principle is embedded and negotiated in the design and architecture of data processing and data collection, to avoid function/scope creep into other areas, and to outline robust protections and rights that allow vulnerable populations to reclaim control over their information.
  • “DSD implementation processes have to be iterative, adaptive and user-centered in order to be inclusive in co-designing the conditions of access and use of data.
  • “Technologies can be used to aid in self-determination by enabling selective disclosure of data to those that need it. Such tools can perform many tasks – for example, reducing the administrative burden and other problems experienced by vulnerable groups, or establishing a ‘right to be forgotten’ portal as a potential solution to involuntary and/or unnecessary data collection.

“DSD is a nascent but important concept that must develop in parallel to technological innovation and data protection policies to ensure that the rights, freedoms and opportunities of all people extend to the digital sphere.”
