The following respondents wrote contributions that consider a wide range of issues tied to the future of human agency.
Henning Schulzrinne, Internet Hall of Fame member and co-chair of the Internet Technical Committee of the IEEE, said, “Agency and recourse are privileges now and they are likely to become more so. By 2035, automated decision-making will affect all high-volume transactions – from hiring and firing to obtaining credit, renting apartments, gaining access to health care, and interactions with the criminal justice system. Wealth, income and social standing will determine the extent to which individuals will have the ability to contest and work around automated decisions. It doesn’t seem likely that any of this will change.
“This is not a new concept. An example is the scheduling of when and where you work; for many hourly workers and gig workers this is automated, with little ability to influence the hours and locations. Employment and termination are also already largely algorithmic (see Amazon warehouses and many gig platforms). High-income, higher-status individuals will likely still get interviewed, hired and evaluated individually, and have at least some leverage. This is also more trivially true for travel – economy class travelers book or rebook online; business class travelers get concierge service by a human. In 2035, the notion of talking to an airline representative, even after waiting for hours in a call center queue, will become a rarity.
“Areas that are currently ‘inefficient’ and still largely human-managed will become less so, particularly in regard to employment, rental housing and health care. Human input is only modestly useful if that input comes from a call center staffer who mainly follows the guidance of their automated systems. Meaningful human input requires recourse, rights and advocacy, i.e., the ability to address unfair, discriminatory or arbitrary decisions in employment, credit, housing and health care.”
Micah Altman, social and information scientist at MIT’s Center for Research in Equitable and Open Scholarship, wrote, “‘The fault, dear Brutus, is not in our stars but in ourselves, that we are underlings.’ Decisions affecting our lives are increasingly governed by opaque algorithms, from the temperature of our office buildings to what interest rate we’re charged for a loan to whether we are offered bail after an arrest. More specifically, complex, opaque, dynamic and commercially developed algorithms are increasingly replacing complex, obscure, static and bureaucratically authored rules.
“Over the next decade and a half, this trend is likely to accelerate. Most of the important decisions affecting us in the commercial and government sphere will be ‘made’ by automated evaluation processes. For the most high-profile decisions, people may continue to be ‘in the loop,’ or even have final authority. Nevertheless, most of the information that these human decision-makers will have access to will be based on automated analyses and summary scores – leaving little for nominal decision-makers to do but flag the most obvious anomalies or add some additional noise into the system.
“This outcome is not all bad. Despite many automated decisions being outside of both our practical and legal (if nominal) control, there are often advantages from a shift to out-of-control automaticity. Algorithmic decisions often make mistakes, embed questionable policy assumptions, inherit bias, are gameable, and sometimes result in decisions that seem (and for practical purposes, are) capricious. But this is nothing new – other complex human decision systems behave this way as well, and algorithmic decisions often do better, at least in the ways we can most readily measure. Further, automated systems, in theory, can be instrumented, rerun, traced, verified, audited, and even prompted to explain themselves – all at a level of detail, frequency and interactivity that would be practically impossible to conduct on human decision systems: This affordance creates the potential for a substantial degree of meaningful control.
“In current practice, algorithmic auditing and explanation require substantial improvement. Neither the science of machine learning nor the practice of policymaking has kept pace with the growing importance of designing algorithmic systems such that they can provide meaningful auditing and explanation.
- “Meaningful control requires that algorithms provide truthful and meaningful explanations of their decisions, both at the individual decision scale and at the aggregate policy scale. And to be actionable, algorithms must be able to accurately characterize the what-ifs, the counterfactual changes in the human-observable inputs and contexts of decisions that will lead to substantially different outcomes. [A minimal sketch of such a what-if check follows this list.] While there is currently incremental progress in the technical and policy fields in this area, it is unlikely to catch up with the accelerating adoption of automated decision-making over the next 15 years.
- “Moreover, there is a void of tools and organizations acting directly on behalf of the individual. Instead, most of our automated decision-making systems are created, deployed and controlled by commercial interests and bureaucratic organizations.
- “We need better legal and technical mechanisms to enable the creation, control and auditing of AI agents, and we need organizational information fiduciaries to represent our individual (and group) interests in real-time control and understanding of an increasingly automated world.
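The actionable ‘what-ifs’ Altman calls for can be made concrete. Below is a minimal sketch, in Python, of a brute-force counterfactual check; the scoring rule, feature names and threshold are all hypothetical stand-ins, not any real system’s policy. It searches for small, human-observable input changes that would flip an automated denial.

```python
from itertools import product

def decide(applicant: dict) -> bool:
    """Toy stand-in scoring rule; not any real lender's policy."""
    score = (applicant["income"] / 1000 * 0.5
             + applicant["credit_years"] * 2
             - applicant["late_payments"] * 3)
    return score >= 20  # approve only if the score clears the threshold

def counterfactuals(applicant: dict, tweaks: dict) -> list:
    """Enumerate small, human-observable input changes that flip a denial."""
    options = [[(name, value) for value in values] for name, values in tweaks.items()]
    flips = []
    for changes in product(*options):
        candidate = {**applicant, **dict(changes)}
        if decide(candidate):
            flips.append(dict(changes))
    return flips

applicant = {"income": 42000, "credit_years": 3, "late_payments": 4}
print(decide(applicant))  # False: the automated system denies the application
# What would the applicant have to change to be approved?
print(counterfactuals(applicant, {"late_payments": [0, 2], "credit_years": [5, 8]}))
```

Running such probes at scale, and verifying that the explanations a deployed system emits match its actual behavior, is precisely where Altman argues that science and policy have not kept pace.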
“There is little evidence that these will emerge at scale over the next 15 years. The playing field will remain slanted.”
Peter Levine, professor of citizenship and public affairs at Tufts University, commented, “Let’s look at three types of agency. One is the ability to make choices among available options, as in a supermarket. AI is likely to accommodate and even enhance that kind of agency, because it is good for sales. Another kind of agency is the ability to construct a coherent life that reflects one’s own thoughtful principles. Social systems both enable and frustrate that kind of agency to varying degrees for various people. I suspect that a social system in which AI is mostly controlled by corporations and governments will largely frustrate such agency. Fewer people will be able to model their own lives on their own carefully chosen principles. A third kind of agency is collective: the ability of groups to deliberate about what to do and then to implement their decisions. AI could help voluntary groups, but it tends to make decisions opaque, thus threatening deliberative values.
“The survey asks about the relationship between individuals and machines. I would complicate that question by adding various kinds of groups, from networks and voluntary associations to corporations and state agencies. I think that, unless we intervene to control it better, AI is likely to increase the power of highly disciplined organizations and reduce the scope of more-democratic associations.”
John Verdon, a Canada-based consultant on complexity and foresight, observed, “First – as Marshall McLuhan noted – technology is the most human part of us. Language and culture are technologies – and this technology liberated humans from the need to translate learnings into genes (genetic code) and enabled learning to be coded into memes (language and behavior that can be taught and shared). This enabled learning to expand, be combined, archived and more. Most of the process of human agency is unconscious.
“The challenge of a civilization sustaining and expanding its knowledge base – its ‘know-how’ (techne) – is accelerating: every generation has to be taught all that is necessary to be fluent in an ever-wider range of know-how. Society’s ‘know-how’ ecology (the knowledge and know-how domains that enable a complex political economy) is increasing in niche density at an accelerating rate, so, yes, AI will be how humans ‘level up’ to ensure a continuing flourishing of knowledge fluency and ‘know-how’ agency, much as calculators and computers are used for math and physics.
“The key is ‘accountability’ and response-ability – for that we need all software to shift to open-source platforms – with institutional innovations. For example – Auditor Generals of Algorithms – similar to the FDA or Health Canada (Does that algorithm do what it says it does? What are the side effects? What are the approved uses? Who will issue ‘recall warnings’? etc.) Humans became humans because they domesticated themselves via techne (know-how) and enabling ‘built environments.’ Vigilance with responsibility is the key to evolving a flourishing world.”
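Verdon’s ‘Auditor General of Algorithms’ implies concrete tests a regulator could run. The sketch below is one such probe, with a hypothetical vendor model and made-up field names: it feeds a deployed scoring function paired inputs that differ only in a single attribute and reports the disparity, the kind of ‘side effect’ that might trigger a recall warning.

```python
# A hypothetical audit probe: compare the scores an opaque vendor model
# assigns to otherwise-identical applicants who differ in one attribute.
def audit_disparate_treatment(score_fn, base_case: dict, attribute: str, values: list) -> dict:
    return {v: score_fn({**base_case, attribute: v}) for v in values}

def vendor_score(applicant: dict) -> float:
    """Stand-in for a vendor's black box; an auditor would call the real system."""
    score = applicant["income"] / 1000 + 2 * applicant["tenure_years"]
    if applicant["postcode"] == "district_b":  # a proxy variable smuggling in bias
        score -= 15
    return score

base = {"income": 40000, "tenure_years": 5, "postcode": "district_a"}
print(audit_disparate_treatment(vendor_score, base, "postcode", ["district_a", "district_b"]))
# {'district_a': 50.0, 'district_b': 35.0} -- the disparity an auditor would flag
```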
John Laudun, professor of social information systems at the U.S. Army Combined Arms Center, wrote, “As a folklorist, I have spent much of my career studying how people build the worlds in which they live through everyday actions and expressions. Words said here move across a community and then across communities, leaving one mouth and entering another ear, receptivity most often being established by ready transmission. This is how information spreads.
“We construct ourselves using the information available to us. Our social relations are determined by the information we share; good friends are good because we share certain information with them and not with others. Acquaintances are just that because we share a lot less with them. This is a dynamic situation, of course, and friends and acquaintances can swap places, but what does not change is how we construct those relationships: through informational interchange. The self in this view is distributed, not bounded, and thus less in control of itself than it might imagine – or how much it has been told it is in control during its time in formal schooling, which requires the self to be regulated and regimented so it can be assessed, graded and validated.
“My view of this situation is unchanged, having now glimpsed the information operations that lie behind the competition-crisis-conflict continuum engaged in by global actors, both nations and not. What those actors have done is simply harness the algorithmic and gamified nature of media platforms, from social media to social games. Their ability to create an addictive experience highlights rather well, I think, how little we control ourselves, both individually and collectively, at present.
“Despite the howls of concern coming from all corners of modern democracies, I see little hope that either the large corporations profiting, literally, from these infrastructures, or the foreign entities who are building up incredibly rich profiles on us will be willing to turn down their efforts to keep us firmly in their sway. The political will required for us to mute them does not appear to be present, though I hope to be proven wrong in the next few years, not so much for my sake but for the sake of my child and her friends.”
Andy Opel, professor of communications at Florida State University, commented, “The question of the balance between human agency and artificial intelligence is going to be one of the central questions of the coming decade. Currently, corporate-designed and controlled algorithms dominate our social media platforms (as well as credit scores, health care profiles, marketing and political messaging) and are completely opaque, blocking individuals’ ability to determine the contents of their social media feeds. The control currently wielded by these privately held corporations will not be given up easily, so the struggle for transparency, accountability and public access is going to be a challenging one that will play out over the next 10 to 15 years.
“If the past is any predictor of the future, corporate interests will likely overrule public interests, and artificial intelligence, autonomous machines and bots will invisibly shape even more of our information, our politics and our consumer experience. Almost 100 years ago there was a vigorous fight over the public airwaves and the regulation of radio broadcasting. The public lost that fight then and lost even more influence in the 1996 Telecommunications Act, resulting in the consolidated media landscape we currently have, dominated by five major corporations that have produced a system of value extraction, one that culturally strip-mines information and knowledge out of local communities while returning very little cultural or economic value to them.
“With the collapse of journalism and a media landscape dominated by echo chambers, as a society we are experiencing the full effects of corporate domination of our mediascape. The effects of these echo chambers are being felt at intimate levels as families try to discuss culture and politics at the dinner table or at holiday gatherings. As we come to understand how deeply toxic the current mediascape is, there is likely to be a political response that will call for transparency and accountability of these algorithms and autonomous machines. The foundations of this resistance are already in place, but the widespread recognition of the need for media reform is still not fully visible as a political agenda item.
“The promise of artificial intelligence – in part the ability to synthesize complex data and produce empirically sound results – has profound implications. But as we have experienced with climate data over the last 40 years, data often does not persuade. Until artificial intelligence is able to tell stories that touch human emotions, we will be left with empirically sound proposals/decisions/policies that go unfulfilled because the ‘story’ has not persuaded the human heart. What we will be left with is the accelerated exploitation of human attention, with the primary focus on consumption and entertainment. Only when these powerful tools can be wrested away from corporate control and made transparent, accessible and publicly accountable will we see their true benefits.”
Simeon Yates, a professor expert in digital culture and personal interaction at the University of Liverpool, England, and research lead for the UK government’s Digital Culture team, said, “I do not think humans will be in meaningful control of many automated decision-making activities in the future. But we need to put these in two categories. First, those decisions that are better made by well-designed automated systems – for example in safety-critical/time-critical environments where appropriate decisions are well documented and agreed upon and where machines can make decisions more quickly and accurately than people.
“Second, decisions that are based on data analytics and what is often erroneously called AI. Many, many systems described as AI are no more than good statistical models. Others are bad models or simply rampant empiricism linking variables. These are bad enough when applied to areas with little ethical implication, but many are applied to social contexts. See, for example, recent reports on bias in predictive algorithms for law enforcement and on AI supposedly predicting crime. Whatever methodological issues one might raise with the research into AI for law enforcement, its conclusion is a recommendation to use the modelling to highlight bias in the allocation of police resources away from deprived areas. The news coverage, however, focuses on predictive analytics and calls it ‘AI.’ The poor empiricist reading is that the AI can help decide where to allocate policing resources. If implemented that way, then human agency is taken out of a very important set of decisions.
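The ‘rampant empiricism’ Yates describes is easy to demonstrate in miniature. The simulation below uses entirely synthetic data and made-up rates: both districts have identical true crime rates by construction, but crimes are only recorded where patrols already are, so a model fit to the recorded counts steers resources toward the historically over-patrolled district.

```python
import random

random.seed(0)
TRUE_CRIME_RATE = {"district_a": 0.10, "district_b": 0.10}  # identical by construction
PATROL_SHARE = {"district_a": 0.8, "district_b": 0.2}       # historical patrol bias

def recorded_crimes(days: int = 1000) -> dict:
    """Crimes enter the data only when a patrol is present to record them."""
    counts = {d: 0 for d in TRUE_CRIME_RATE}
    for _ in range(days):
        for d in counts:
            crime_occurs = random.random() < TRUE_CRIME_RATE[d]
            patrol_present = random.random() < PATROL_SHARE[d]
            if crime_occurs and patrol_present:
                counts[d] += 1
    return counts

counts = recorded_crimes()
total = sum(counts.values())
# The "AI" is just the empirical rate in biased data: rampant empiricism.
allocation = {d: round(counts[d] / total, 2) for d in counts}
print(counts)      # roughly {'district_a': 80, 'district_b': 20}
print(allocation)  # resources steered toward the already-over-patrolled district
```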
“I predict there will be thousands of such models and approaches sold as AI solutions to cash-strapped municipalities, companies, medical providers and the like. After that, humans will not have a clear role in these decisions. Nor will human agents – and that means citizens who have rights (digital or other) – be able to see or understand the underlying models. Why do I think this will be the case? Because it already is the case, and it is just creeping ever onward.
“There should be much greater debate over things that fall into my first category. Where the removal of human agency is ethically beneficial – the plane does not crash, the reactor is safe, and the medicine dose is checked. As regards the second category (where there is a serious question over the ethics of passing decision-making to algorithms), we need debates and regulations on the transparency of AI/data-driven decision-making and areas where this is socially acceptable or not, and we need much greater data-use transparency.
“We also must educate our computer science colleagues about ethics and responsible innovation in this domain. Bad analyses and bad social science too often seem to come from the unthinking application of data analytics to social questions, especially where underpinned by naive understandings of social and political processes. This goes beyond bias in data.”
A futurist and designer based in Europe commented, “I am not at all certain we will have artificial general intelligence by 2035, so between now and that time all decisions will still be the consequence of the humans behind the design and operation of the technologies and those using them. Deceptive and/or poorly thought-through ways of using technology will persist throughout all of humanity’s time. With this in mind, how we as societies and civilizations allow humans to spread the use of these technologies and the gravity of repercussions when they are being misused will steer us toward the future that is only a consequence of consequences, etc.
“Any number of decisions that can be automated will be. The question I would ask concerns who is in charge of putting these causal structures into automation. If it is governments, we are likely to see greater ideological extremes; if companies, we will experience great as well as terrible things; and if this power is with individuals, we will all need to be persistently educated, unless these systems are so intuitive to use that literally anyone will understand them.
“I believe an advanced AI should be able to assess if a person understands what it is they are handing over to automation, and if this is the case, I see very few boundaries for it beyond the making of life-and-death decisions. This being said, I would be surprised if automated military technology were not in use in some capacity by then.
“The biggest issue arising from the accelerated rollout will be economic distribution within and between societies. It is obvious that if you employ thousands of AI engineers today, you will develop IP that multiplies your revenue streams, without any certainty that this multiplication will generally benefit each person impacted by the same technology. If there were certainty concerning wealth distribution, there would be very little reason to fear the rollout beyond the likelihood of ever-more-complex scams.”
Jaak Tepandi, professor emeritus of knowledge-based systems at Tallinn University of Technology, Estonia, wrote, “In 2035, we will probably still be living in an era of far-reaching benefits from artificial intelligence. Governments are beginning to understand the dangers of unlimited artificial intelligence, and new legislation and standards are being developed. If people can work together, this era can last a long time.
“The longer term is unstable. Conflicts are an integral part of complex development. History shows that humans have difficulty managing their own intelligence. The relationships between humans and machines, robots and systems are like the relationships between humans themselves. Human conflicts are reflected in artificial intelligence and amplified in large robot networks of both virtual and physical reality. People would do well to defend their position in this process.
“Maintaining control over the physical world and critical infrastructure and managing AI decisions in these areas is critical. Smart and hostile AI can come from sophisticated nation-state-level cyber actors or individual attackers operating anywhere in the world. Those who do not control will be controlled, perhaps not in their best interest.”
A public policy professional at a major global AI initiative said, “In 2035 we will be allowed to have a degree of control over our tech-abetted decisions if we want to have it. But the value of that control may no longer seem important to us. As machines become better at predicting what we want and organizing our decisions for us, many of us are likely to find real value in their contributions to the decisions we make and many people will simply defer to them.
“An imperfect analogy can be found in a hospital visit – when you visit the hospital you still remain in control, but with two clear limitations. The first is that the options set for you are generated by the hospital and doctors, based on knowledge that you lack and in such a way that it is hard for you to evaluate whether there are any other options you should be considering. The second is that your choice is limited to what some aspects of society have collectively decided will be made affordable to you – that experimental or expensive treatment may be available, but only if you foot the bill.
“Now, fast-forward to the use of powerful AI-based decision-making aids and you will see that you are in roughly the same situation: The options generated will be hard to second-guess, and there may well be a legal or regulatory assumption that if you deviate from some or all of them, well, then you may not be covered by your insurance. You can choose to drive yourself, but if you do you assume all liability for what happens next. You can decide among extremely complex investment strategies for your retirement – but you may not know how to generate your own options.
“And if we imagine that these tools are really good – which they have the potential to be – we should also note that there is another possibility: that we will want to follow the advice the tools provide us with. We are in control, but we have learned that the recommendations we get are usually very, very good – and so we simply choose to trust and follow them. We can still deviate from them if we want – we have, as Stanislaw Lem quipped, the free will to make slightly worse decisions. We are in control, but the value of that control has decreased radically.
“How may this then change human society? One thing is, of course, that we will need to ensure that we occasionally force people to make their own decisions, or we will have created a dangerous vulnerability by which hijacking these systems could mean hijacking all of society. You could imagine a society with recommendation as a luxury good, where the wealthy use decision aids for 100% of their decisions but the poor can afford them for only 40% or fewer. Completely autonomous control in this society will be a class marker.
“The rich will be able to afford high-resolution decision models of themselves, guaranteeing that they make better decisions than they themselves could make, while the poor will only be able to use very templatized recommendation engines.
“It is worth noting, by the way, that when we say that we are in ‘control’ of a decision today, that is hardly the case. We make decisions embedded in networks of technology, people and ideologies, and these networks determine our decisions today. Technology can be helpful in breaking through and going against some of the patterns and habits created in these networks, and perhaps in optimizing for better decisions for everyone.”
James Hanusa, futurist, consultant and co-founder at Digital Raign, commented, “I want to be an optimist, but based on my experience in the field to date, I’m afraid society will have declining input into decision-making. We are in the first years of a 20-year wave of automation and augmented intelligence that I would say has already started. Most of the past major computing cycles have also run about 20 years: mainframes in the 1960s-’80s, the personal computer from the 1980s to the 2000s, internet/mobile from the 2000s to 2020.
“Looking at the web’s impact on society over 20 years and the direction emerging tech is driving it, I can only imagine that business models, machine-to-machine interoperability and convenience of use will put most people’s lives on ‘autopilot.’ Some current inputs that lead to this conclusion include the Internet of Everything, quantum computing and artificial general intelligence moving toward artificial superintelligence, which leading computer scientists have predicted could occur between 2030 and 2050.
“Another factor that I believe is important here is the combination of Big Tech, the value of data and trust factors in society. The most valuable companies in the world are tech companies, often rivaling countries in their market cap and influence. Their revenue generation is more and more the result of data becoming the most valuable commodity. It will be nearly impossible to change the trajectory of these companies developing autonomous systems at lower costs, especially once AIs start programming AIs. Societal trust in institutions is extremely low, but computers, unlike technology companies, are regarded as de facto highly reliable.
“The final point I would submit is based on this observation from Mark Weiser of Xerox PARC: ‘The most profound technologies are those that disappear. They weave themselves into the fabric of everyday life until they are indistinguishable from it.’
“Think of the complex yet simple developments of our current technology, for example, GPS telling us where to go or recommendation engines showing us what to watch and buy. What sophistication might we ‘outsource’ just based on those examples? Autonomous cars, medical decisions, mate selection, career paths?
“The key decisions that I believe should have direct human input or an override function include life-termination, birth, death and nuclear missile launch.
“A real fear I hold is that in the next 30 years, as the world moves toward a population of 10 billion and integrated exponential technologies simultaneously have a greater impact on society, a large part of humanity will become ‘redundant.’ The advances in technology will be far greater and longer-lasting than those of the industrial revolution, and something that capitalism’s creative destruction cannot overcome.
“Humans have unique capacities for creativity, community and consciousness. Those are the areas I believe our education systems should focus on developing in the populace. Computers will surpass us in intelligence in almost everything by 2035.”
Ray Schroeder, senior fellow at the University Professional and Continuing Education Association, observed, “The progress of technology, and particularly artificial intelligence, inexorably moves forward largely unfettered. Faster machines and a seemingly endless supply of storage mean that progress in many areas will continue. By 2035, access to truly effective quantum computing will further fuel the effectiveness and efficiency of AI.
“Society has a history of accepting and embracing the advance of technology, even when the consequences seem horrific, as in the case of instruments of war. Far from those cases, the advance of AI and associated technologies promises to enhance and advance our lives and environment, making work more efficient and society more effective overall.
“Artificial intelligence has the potential to shine a bright light, through massive predictive analytics, on the projected impacts of current practices. Advanced machine learning can handle increasingly large databases, resulting in deeply informed decision-making. Most impactful may be the advances in health care and education. Yet day-to-day improvements in commerce and the production of custom products to meet the needs and desires of individuals will also be attained.
“The question at hand is whether this deep analysis and these predictive projections will be autonomously enforced or instead used to inform human decisions. Certainly, in some cases, such as autonomous vehicles, the AI decisions will be instant-by-instant, so, while the algorithm may provide for human override of those decisions, practically, little can be done to countermand a decision made in 1/100th of a second to avoid a collision. In other cases – such as human resources employment decisions, selecting among medical treatment alternatives and approval of loans – AI may be tempered by human approvals or subject to human appeals of preliminary machine-made decisions.
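The two regimes Schroeder distinguishes map onto a simple dispatch pattern. The sketch below, with hypothetical names and thresholds, auto-executes decisions whose deadlines fall below any plausible human reaction time and queues slower, high-stakes decisions for human approval or appeal.

```python
from dataclasses import dataclass
from queue import Queue

@dataclass
class Decision:
    action: str
    deadline_ms: float  # time available before the decision must take effect
    high_stakes: bool   # e.g., hiring, medical treatment, loan approval

HUMAN_REACTION_MS = 700.0  # assumed lower bound for meaningful human review
review_queue: Queue = Queue()

def dispatch(decision: Decision) -> str:
    """Route a decision to auto-execution or to human review."""
    if decision.deadline_ms < HUMAN_REACTION_MS:
        # Collision avoidance at 10 ms: an override exists on paper only.
        return f"auto-executed: {decision.action}"
    if decision.high_stakes:
        review_queue.put(decision)  # tempered by human approval, open to appeal
        return f"queued for human review: {decision.action}"
    return f"auto-executed (appealable after the fact): {decision.action}"

print(dispatch(Decision("brake to avoid collision", deadline_ms=10, high_stakes=True)))
print(dispatch(Decision("deny loan application", deadline_ms=86_400_000, high_stakes=True)))
```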
“We are now at an important inflection point that demands that governance of AI be implemented on a wide-scale basis. This will come through legislation, industry rules of practice and societal norms of the sort by which we do not allow children to operate cars.
“That is not to say that no rules asserting the rights of artificial intelligence will be determined in the near term. For example, the current question of whether AI can hold a patent may afford some rights to AI or to the creator of an algorithm.
“I do not expect us to see a truly sentient AI by 2035, though that, too, may be close behind. When that level of cognition is achieved, we will need to reconsider the restrictions that will have been placed in the intervening years.”
Michel Grossetti, director of sociological research at the French National Center for Scientific Research (CNRS), said, “In the far future it is not impossible that automata will reach a level of realism and an autonomy in their behavior that leads many to consider them as an ‘other’ kind of ‘people.’ This could lead the social sciences to have to define ‘artificial persons.’ But automata will always be caught up in the relationships of power and domination between humans.”
Barry Chudakov, founder and principal, Sertain Research, wrote, “We would be wise to prepare for what shared consciousness means. Today that sharing is haphazard: We pick up a tool, and once we begin using it and see how it is programmed, we may realize (and may be shocked by) how much agency the tool usurps. We need to be aware of what technology-human sharing means and how much, if any, agency we are willing to share with a given tool. We will increasingly use AI to help us resolve wicked issues of climate change, pollution, hunger, soil erosion, vanishing shorelines, biodiversity, etc. In this sharing of agency, humans will change. How will we change?
“If we consider mirror worlds, the metaverse or digital twins, a fundamental design feature raises a host of philosophical questions. Namely, how much agency can we design, should we design, into machines, bots and systems powered by autonomous and artificial intelligence? This has the potential to effect a death by a thousand cuts. What constitutes agency? Is a ping, a reminder, an alert – agency? Probably not. But when those (albeit minimal) features are embedded in a gadget and turning them off or on is difficult, does the gadget effectively have some agency? I would argue yes. If a robot is programmed to assist a failing human, say during respiratory arrest or cardiac failure, is this a measure of (or actual) agency? (We’re going down the slope.) What about when an alarm system captures the face of an intruder and is able to instantly match that face with a police database – and then calls 911 or police dispatch? (We may not be here today, but we’re not far away from that possibility.)
“The broadening and accelerating rollout of tech-abetted, often autonomous decision-making has the potential to change human society in untold ways. The most significant of these is human reliance on autonomous decision-making and how, out of passivity or packaged convenience, the scope of that decision-making could creep into, and then overtake, previously human-moderated decisions. Humans are notorious for giving up agency to their tools out of habit and path-of-least-resistance lethargy. This is not an indictment of humans but an acknowledgment of the ease with which we follow the logic of our convenience-marketed products. Heart disease is an example rooted in packaged poison: Many foods sold in plastic and cans are harmful to heart health, but the convenience of buying, say, packaged meats has fostered reliance on growth hormones (which has also fueled inhumane conditions for animals), which drives up meat consumption, which in turn drives up rates of heart disease. The same could be said of diabetes, driven by an overreliance on sugar sweeteners in every product imaginable.
“These examples are harbingers of what could happen to human society if proper oversight is not exercised regarding tech-abetted, often autonomous decision-making.”