The Future of Human Agency

2. Expert essays on human agency and digital life (continued)

We are heading for a shift to significant control by AI systems that subordinate human agency to increasingly aware AI

David Barnhizer, a professor of law emeritus and author of “Human Rights as a Strategic System,” wrote, “Various futurists project that AI systems are already developing, or soon will develop, an internal version of what I think of as ‘alternative intelligence’ as opposed to artificial intelligence, and they expect that there could or will be a shift (possibly by 2035 but most likely 15 or 20 years later) to significant control by interacting AI systems that subordinate human agency to the increasingly sentient and aware AI systems.

“To put it even more bleakly, some say humanity may be facing a ‘Terminator’-type apocalyptic world. I don’t know if that very dark future awaits, but I do know that the human race and its leaders are getting dumber and dumber, greedier and greedier while the tech experimenters, government and military leaders, corporations, academics, etc., are engaged in running an incredible experiment over which they have virtually no control and no real understanding.

“One MIT researcher admitted several years ago, after some AI experiments they were conducting, that it was obvious the AI systems were self-learning outside the programmatic algorithms and the researchers didn’t know exactly how or what was going on. All of that happened within relatively unsophisticated AI systems by today’s research standards. As quantum AI systems are refined, the speed and sophistication of AI systems will be so far beyond our comprehension that to think we are in control of what is going on is pre-Copernican. The sun does not revolve around the Earth, and sophisticated AI systems do not revolve around their human ‘masters.’

“As my son Daniel and I set forth in our 2019 book ‘The Artificial Intelligence Contagion,’ no one really knows what is going on, and no one knows the scale or speed of the consequences or outcomes we are setting into motion. But some things are known, even if ignored. They include:

  • “For humans and human governments, AI is power. By now it is obvious that the power of AI is irresistible for gaining and maintaining power. Big Tech companies, political activists, governmental agencies, political parties, the intelligence-gathering actors, etc., simply can’t help themselves.
  • “Information is power, and data creation, privacy intrusions, data mining and surveillance are rampant and will only get worse. I don’t even want to get into the possibilities of cyborg linkages of AI within human brain systems such as are already in the works, but all of this sends a signal to me of even greater control over humans and the inevitable deepening of the stark global divide between the ‘enhanced haves’ and everyone else (who are potentially under the control of the ‘haves’).

“We need to admit that regardless of our political rhetoric, there is no overarching great ‘brotherhood’ of the members of the human race. The fact is that those who are the most aggressive and power-driven are always hungry for more power, and they aren’t all that concerned with sharing that power or its benefits widely. The AI developments that are occurring demonstrate this phenomenon quite clearly whether we are talking about China, the U.S., Russia, Iran, corporations, agencies, political actors or others.

“The result is that there is a very thin tier of humans who, if they somehow are able to work out a symbiosis with the enhanced AI systems that are developing, will basically lord it over the remainder of humanity – at least for a generation or so. What happens after that is also unknown but unlikely to be pretty. There is no reason to think of these AI systems as homogeneous or identical. They will continue to grow, with greater capabilities and more-evolved insights, emerging from varied cultures. We (or they, actually) could sadly see artificial intelligence systems at war with each other for reasons humans can’t fathom. This probably sounds wacko, but do we really know what might happen?

“As we point out in our book, many people look at the future through the proverbial ‘rose-colored glasses.’ I, obviously, do not. I personally love having the capabilities computer systems have brought me. I am insatiably curious and an ‘info freak.’ I love thinking, freedom of thought and the ability to communicate and create. I have no interest in gaining power. I am in the situation of Tim Berners-Lee, the creator of the fundamental algorithms that brought the Internet within the reach of global humanity. Berners-Lee and many others who worked on the issues intended to create systems that enriched human dialogue, created shared understanding and made us much better in various ways than we were. Instead, he and other early designers realize they opened a Pandora’s box in which, along with their significant and wonderful benefits, the tools they offered the world have been corrupted and abused in destructive ways and brought out the darker side of humanity.”

The biggest issue is whether people trust the organizations that are delivering AI systems

Peter Reiner, professor and co-founder of the National Core for Neuroethics at the University of British Columbia, said, “One way of restating the question is to ask to what degree autonomy is a protected value – one that resists trade-offs. Humans surely value autonomy. Or at least Westerners do, having inherited autonomy as one of the fruits of the Enlightenment. But whether the affordances of AI are sufficiently enticing to give up autonomous decision-making is really more of an empirical question – to be answered in time – than one to be predicted. Nonetheless, several features of the relationship between humans and algorithms can be anticipated to be influential.

“Most important is the matter of trust, both in the companies offering the technology and in the technology itself. At the moment, the reputation of technology companies is mixed. Some companies reel from years of cascading scandals, depleting trust. At the same time, three of the top five most-trusted companies worldwide base their businesses on information technology. Maintaining faith in the reliability of organizations will be required in order to reassure the public that their algorithms can be trusted in carrying out important decisions.

“Then there is the matter of the technology itself. It goes without saying that it must be reliable. But beyond that, in the realm of important decisions, there must be confidence that the technology is making the decision with the best interests of the user in mind. Such loyal AI is a high bar for current technology, yet will be an important factor in convincing people to trust algorithms with important decisions.

“Finally, it is generally observed that people still seem to prefer humans to help with decisions rather than AIs, even when the algorithm outperforms the human. Indeed, people are comfortable having a total stranger – even one as uncredentialed as an Uber driver – whisk them from place to place in an automobile, but they remain exceedingly skeptical of autonomous vehicles, not just of using them but of the entire enterprise. Such preferences, of course, may depend on the type of task.

“To date we only have fragmentary insight about the pushes and pulls that determine whether people are willing to give up autonomy over important decision-making, but the initial data suggest that trade-offs such as this may represent a substantial sticking point. Whether this will change over time – a phenomenon known as techno-moral change – is unknown. My suspicion is that people will make an implicit risk-benefit calculation: the more important the decision, the greater the benefit must be. That is to say that algorithms are likely to be required to vastly outperform humans when it comes to important decision-making in order for them to be trusted.”

The essential question: What degree of manipulation of people is acceptable?

Claude Fortin, clinical investigator at the Centre for Interdisciplinary Research, Montreal, an expert in the untapped potential and anticipated social impacts of digital practices, commented, “The issue of control is twofold: First, technological devices and techniques mediate the relationship between subject and object, whether these be human, animal, process or ‘thing.’ Every device or technique (such as an AI algorithm) adds a layer of mediation between the subject and the object. For instance, a smartphone device adds one layer of mediation between two people SMS texting. If an autocorrect algorithm is modifying their writing, that adds a second layer of mediation between them. If a pop-up ad were to appear on their screen as a reactive event (reactive to the subject they are texting about – for instance, they are texting about running shoes and an ad suddenly pops up on the side of their screens), that adds a third layer of mediation between them.

“Some layers of mediation are stacked one over another, while others might be displayed next to one another. Either way, the more layers of mediation there are between subject and object, the more interference there is in the control that the user has over a subject and/or object. Each layer has the possibility of acting as a filter, as a smokescreen or as a red herring (by providing misleading information or by capturing the user’s attention to direct it elsewhere, such as toward an ad for running shoes). This affects their decision-making. This is true of anything that involves technology, from texting to self-driving cars.
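
To make the layering idea concrete, here is a minimal sketch in Python that models each mediation layer as a transformation sitting between sender and receiver; the layer names and behaviors are invented for illustration and are not drawn from Fortin’s work.

```python
# Illustrative sketch: each mediation layer sits between subject and object
# and may pass, alter or redirect what moves through it. All names and
# behaviors here are hypothetical; this is a toy model of the "layering"
# idea, not a description of any real system.

def device_layer(message):
    # Layer 1: the smartphone simply carries the message.
    return message

def autocorrect_layer(message):
    # Layer 2: an algorithm silently rewrites the text.
    return message.replace("runnig", "running")

def ad_layer(message):
    # Layer 3: a reactive ad captures attention and redirects it elsewhere.
    if "running shoes" in message:
        print("[pop-up ad: running shoes, 20% off]")
    return message

MEDIATION_LAYERS = [device_layer, autocorrect_layer, ad_layer]

def send(message):
    # The more layers stacked between sender and receiver, the more points
    # at which the exchange can be filtered, obscured or steered.
    for layer in MEDIATION_LAYERS:
        message = layer(message)
    return message

print(send("I need new runnig shoes"))
```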

“The second issue of control is specifically cognitive and has to do with the power and influence of data in all its forms – images, sounds, numbers, text, etc. – on the subject-as-user. Humans are always at the source. In the coding of algorithms, it is either a human in a position of power, or else an expert who works for a human in a position of power, who decides what data and data forms can circulate and which ones cannot. Although there is a multiplying effect of data being circulated by powerful technologies and the ‘layering effect’ described above, at its source, the control is in the hands of the humans who are in positions of power over the creation and deployment of the algorithms.

“When the object of study is data and data forms, technological devices and techniques can become political tools that enhance or problematize notions of power and control. The human mind can only generate thoughts from sensory impressions it has gathered in the past. If data and data forms that constitute such input are only ideological (power-driven) in essence, then the subject-as-user is inevitably being manipulated. This is extraordinarily easy to do. Mind control applied by implementing techniques of influence is as old as the world – just think of how sorcery and magic work on the basis of illusion.

“In my mind, the question at this point in time is: What degree of manipulation is acceptable? When it comes to the data and data forms side of this question, I would say that we are entering the age of information warfare. Data is the primary weapon used in building and consolidating power – it always has been if we think of the main argument in ‘The Art of War.’

“I can’t see that adding more data to the mix in the hope of getting a broader perspective and becoming better informed in a balanced way is the fix at this point. People will not regain control of their decision-making with more data and more consumption of technology. We have already crossed the threshold and are engulfed in too much data and tech.

“I believe that most people will continue to be unduly influenced by the few powerful people who are in a position to create and generate and circulate data and data forms. It is possible that even if we were to maintain something of the shape of democracy, it would not be a real democracy for this reason. The ideas of the majority are under such powerful forces of influence that we cannot really objectively say that they have control over their decision-making. For all of these reasons, I believe we are entering the age of pseudo-democracy.”

‘Human beings appropriate technology as part of their own thinking process – as they do with any tool’; that frees them to focus on higher-order decisions

Lia DiBello, principal scientist at Applied Cognitive Sciences Labs Inc., commented, “I actually believe this could go either way, but so far, technology has shown itself to free human beings to focus on higher-order decision-making by taking over more practical or mundane cognitive processing.

“Human beings have shown themselves to appropriate technology as part of their own thinking process – as they do with any tool. We see this with many smart devices, with GPS systems and with automation in general in business and medicine and in other settings across society. For example, people with implantable medical devices can get data on how lifestyle changes are affecting their cardiac performance and do not have [to] wait for a doctor’s appointment to know how their day-to-day choices are affecting their health.

“What will the relationship look like between humans and machines, bots and systems powered mostly by autonomous and artificial intelligence? I expect we will continue to see growth in the implementation of AI and bots to collect and analyze data that human beings can use to make decisions and gain the insights they need to make appropriate choices.

“Automation will not make ‘decisions’ so much as it will make recommendations based on data. Current examples are the driving routes derived from GPS and traffic systems, shopping suggestions based on data and trends, and food recommendations based on health concerns. It provides near-instant analysis of large amounts of data.
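
A minimal sketch of the “recommend, don’t decide” pattern described here, with invented route names and times: the system does the near-instant ranking, and the final choice stays with the person.

```python
# Sketch of automation that recommends rather than decides. Route names and
# travel times are invented for illustration.

def recommend_routes(routes, top_n=3):
    # Automation handles the data analysis: rank candidate routes by
    # estimated travel time and return a short list.
    return sorted(routes, key=lambda r: r["eta_minutes"])[:top_n]

routes = [
    {"name": "Highway", "eta_minutes": 22},
    {"name": "Riverside", "eta_minutes": 27},
    {"name": "Downtown", "eta_minutes": 31},
]

for i, route in enumerate(recommend_routes(routes), start=1):
    print(f"{i}. {route['name']} ~ {route['eta_minutes']} min")

# The decision itself happens here: the person may pick the slower Riverside
# route for reasons (scenery, an errand) the ranking never considered.
chosen = routes[1]
print("Chosen:", chosen["name"])
```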

“As deep learning systems are further developed, it is hard to say where things will go. The relationship between AI and human beings has to be managed – how we use the AI. Skilled surgeons today use programmable robots that – once programmed – work pretty autonomously, but these surgeries still require the presence of a skilled surgeon. The AI augments the human.

“It’s hard to predict how the further development of autonomous decision-making will change human society. It is most important for humans to find ways to adapt in order to integrate it within our own decision-making processes. For some people, it will free them to innovate and invent; for others, it could overwhelm and deskill them. My colleagues, cognitive scientists Gary Klein and Robert Hoffman, have a notion of AI-Q. Their research investigates how people use and come to understand AI as part of their individual decision-making process.”

As with all of today’s technology, the rapid rollout of autonomous tools before they are ready (due to economic pressure) is likely and dangerous

Barrett S. Caldwell, professor of industrial engineering at Purdue University, responded, “I believe humans will be offered control of important decision-making technologies by 2035, but for several reasons, most will not utilize such control unless it is easy (and cost-effective) to do so. The role of agency for decision-making will look similar to the role of active ‘opt-in’ privacy: People will be offered the option, but due to the complexity of the EULAs (end-user license agreements), most people will not read all of them, or will select the default options (which may push them to a higher level of automation) rather than intelligently evaluate and ‘titrate’ their actual level of human-AI interaction.

“Tech-abetted and autonomous decision-making in driving, for example, includes both fairly simple features (lane following) and more-complex features (speed-sensitive cruise control) that are, in fact, user-adjustable. I do not know how many people actually modify or adjust those features. We have already seen cases of people using the highest level of driver automation (which is nowhere close to true ‘Level 5’ driver automation) to abdicate driving decisions and trust that the technology can take care of all driving decisions for them. Cars such as Teslas are not inexpensive, and so we have a skewing of the use of more fully autonomous vehicles toward more affluent, more educated people who are making these decisions to let the tech take over.

“Key decisions should be automated only when the human’s strategic and tactical goals are clear (keep me safe, don’t injure others) and the primary role of the automation is to manage a large number of low-level functions without requiring the human’s attention or sensorimotor quickness. For example, I personally like automated coffee heating in the morning, and smart temperature management of my home while I’m at work.

“When goals are fluid or a change to pattern is required, direct human input will generally be incorporated in tech-aided decision-making if there is enough time for the human to assess the situation and make the decision. For example, I decide that I don’t want to go straight home today, I want to swing by the building where I’m having a meeting tomorrow morning. I can imagine informing the car’s system of this an hour before leaving; I don’t want to have to wrestle with the car 150 feet before an intersection while traveling in rush-hour traffic.

“I am really worried that this evolution will not turn out well. The technology designers (the engineers, more than the executives) really want to demonstrate how good they are at autonomous/AI operations and take the time to perfect it before having it publicly implemented. However, executives (who may not fully understand the brittleness of the technology) can be under pressure to rush the technological advancement into the marketplace.

“The public can’t even seem to manage simple data hygiene regarding privacy (don’t live-tweet that you won’t be home for a week, informing thieves that your home is easy to cherry pick and telling hackers that your account is easy to hack with non-local transactions), so I fully expect that people will not put the appropriate amount of effort into self-management in autonomous decision-making. If a system does not roll out well (I’m looking at Tesla’s Full Self-Driving or the use of drones in crowded airport zones), liability and blame will be sorted out by lawyers after the fact, which is not a robust or resilient version of systems design.”

Big Tech companies are using individuals’ data and AI ‘to discover and elicit desired responses informed by psychographic theories of persuasion’

James H. Morris, professor emeritus at the Human-Computer Interaction Institute, Carnegie Mellon University, wrote, “The social ills of today – economic anxiety, declining longevity and political unrest – signal a massive disruption caused by automation coupled with AI. The computer revolution is just as drastic as the industrial revolution but moves faster relative to humans’ ability to adjust.

“Suppose that between now and 2035, most paid work is replaced by robots, backed by the internet. The owners of the robots and the internet – FAANG (Facebook, Apple, Amazon, Netflix, Google) and their imitators – have high revenue per employee and will continue to pile up profits while many of us will be without work. If there is no redistribution of their unprecedented wealth, there will be no one to buy the things they advertise. The economy will collapse.

“Surprisingly, college graduates are more vulnerable to AI because their skills can be taught to robots more easily than what infants learn. The wage premium that college graduates currently enjoy is largely for teaching computers how to do their parents’ jobs. Someone, maybe it was Lenin, said, ‘When it comes time to hang the capitalists, they will vie with each other for the rope contract.’

“We need progressive economists like Keynes who (in 1930) predicted that living standards today in ‘progressive countries’ would be six times higher and this would leave people far more time to enjoy the good things in life. Now there are numerous essays and books calling for wealth redistribution. But wealth is the easy part. Our culture worships work. Our current workaholism is caused by the pursuit of nonessential, positional things which only signify class. The rich call the idle poor freeloaders, and the poor call the idle rich rentiers.

“In the future, the only likely forms of human work are those that are difficult for robots to perform, often ones requiring empathy: caregiving, art, sports and entertainment. In principle, robots could perform these jobs also, but it seems silly when those jobs mutually reward both producer and consumer and enhance relationships.

“China has nurtured a vibrant AI industry using all the latest techniques to create original products and improve on Western ones. China has the natural advantages of a larger population to gather data from and a high-tech workforce that works 12 hours a day, six days a week. In addition, in 2017 the Chinese government made AI its top development priority. Another factor is that China’s population is inured to the lack of privacy that impedes the accumulation of data in the West. Partly because it was lacking some Western institutions, China was able to leapfrog past checks, credit cards and personal computers to performing all financial transactions on mobile phones.

“The success of AI is doubly troubling because nobody, including the people who unleash the learning programs, can figure out how they succeed in achieving the goals they’re given. You can try – and many people have – to analyze the gigantic maze of simulated neurons they create, but it’s as hard as analyzing the real neurons in someone’s brain to explain their behavior.

“I once had some sympathy with the suggestion that privacy was not an issue and ‘if you have something that you don’t want anyone to know, maybe you shouldn’t be doing it in the first place,’ but media I’ve been consuming, like the Facebook/Cambridge Analytica fiasco, has woken me up. Simply put, FAANG and others are building large dossiers about each of us and using AI to discover the stimuli that elicit desired responses, informed by psychographic theories of persuasion.

“The responses they desire vary and appear benign. Google wants to show us ads that appeal to us. Facebook wants us to be looking at its pages continually as we connect with friends. Amazon wants us to find books and products we will buy and like. Netflix wants to suggest movies and shows we should like to watch. But China, using TV cameras on every lamppost and WeChat (one single app providing services with the capabilities of Facebook, Apple, Amazon, Netflix, Google, eBay and PayPal), is showing the way to surveillance authoritarianism.

“While we recoil at China’s practices, they have undeniable societal benefits. Surveillance allows them to control epidemics far more effectively. In some cities, drones fly around to measure the temperatures of anyone outside. Surveillance can prevent acts like suicide bombing for which punishment is not a deterrent. With WeChat monitoring most human interactions, people might be more fair to each other. Westerners may believe China’s autocracy will stifle its economic progress, but it hasn’t yet.

“Facebook’s AI engine was instructed to increase users’ engagement and, by itself, discovered that surprising or frightening information is a powerful inducement for a user to stick around. It also discovered that information that confirmed a user’s beliefs was a much better inducement than information that contradicted them. So, without any human help, the Facebook engine began promoting false, incredible stories that agitated users even beyond what cable TV had been doing. And when the Facebook people saw what their AI engine was doing, they were slow to stop it.
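
A toy sketch of the dynamic described above, with invented posts and weights: when the objective rewards only engagement, with no term for accuracy or harm, surprising and belief-confirming content naturally rises to the top.

```python
# Toy feed ranker with an engagement-only objective. The posts, features and
# weights are invented; the sketch only illustrates why optimizing engagement
# alone tends to surface sensational or belief-confirming content.

posts = [
    {"text": "Routine city council budget update", "surprise": 0.1, "confirms_beliefs": 0.2},
    {"text": "Shocking claim about a public figure", "surprise": 0.9, "confirms_beliefs": 0.8},
    {"text": "Measured fact-check of that claim", "surprise": 0.3, "confirms_beliefs": 0.3},
]

def predicted_engagement(post):
    # The objective rewards surprise and belief confirmation only; nothing
    # in the score penalizes falsehood or harm, so nothing pushes back.
    return 0.6 * post["surprise"] + 0.4 * post["confirms_beliefs"]

feed = sorted(posts, key=predicted_engagement, reverse=True)
for post in feed:
    print(f"{predicted_engagement(post):.2f}  {post['text']}")
```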

“Facebook, Apple, Amazon, Netflix and Google run ecosystems in which memes (but not genes!) compete for survival and drive the competition among their business entities. Human minds are seen as collateral damage. Facebook has been used to conduct whisper propaganda campaigns about people who were oblivious to the attacks, attacks that no one outside Facebook can even assess.

“It gets worse. To increase profits, the massive U.S. tech companies sell their engines’ services to anyone who pays and lets the payers instruct the engines to do whatever serves their ambition. The most glaring example: In 2016 Russian operatives used Facebook to target potential Trump voters and fed them information likely to make them vote.”

Design and regulatory changes will evolve, but will fall short of allowing most people meaningful control in their own lives

Daniel S. Schiff, lead for Responsible AI at JP Morgan Chase and co-director of the Governance and Responsible AI Lab at Purdue University, commented, “Algorithms already drive huge portions of our society and the lives of individuals. This trend will only advance in the coming years. Facilitating meaningful human control in the face of these trends will remain a daunting task. By 2035 AI systems (including consumer-facing systems and government-run, automated decision systems) will likely be designed and regulated so as to enhance public transparency and control of decision-making. However, any changes to the design and governance of AI systems will fall short of functionally allowing most people – especially the most vulnerable groups – to exercise deeply meaningful control in their own lives.

“Optimistically speaking, a new wave of formal regulation of AI systems and algorithms promises to enhance public oversight and democratic governance of AI generally. For example, the European Union’s developing AI Act will have been in place and iterated over the previous decade. Similarly, regulation like the Digital Services Act and even older policies like the General Data Protection Regulation will have had time to mature with respect to efficiency, enforcement and best practices in compliance.

“While formal regulation in the United States is less likely to evolve on the scale of the EU AI Act (e.g., it is unclear when or if something like the Algorithmic Accountability Act will be passed), we should still expect to see the development of local and state regulation (such as New York’s restriction on AI-based hiring or Illinois’ Personal Information Protection Act), even if leading to a patchwork of laws. Further, there are good reasons to expect laws like the EU AI Act to diffuse internationally via the Brussels effect; evidence suggests that countries like the UK, Brazil, and even China are attentive to the first and most-restrictive regulators with respect to AI. Thus, we should expect to see a more expansive paradigm of algorithmic governance in place in much of the world over the next decade.

“Complementing this is an array of informal or soft governance mechanisms, ranging from voluntary industry standards to private sector firm ethics principles and frameworks, to, critically, changing norms with respect to responsible design of AI systems realized through higher education, professional associations, machine learning conferences, and so on.

“For example, a sizable number of major firms which produce AI systems now refer to various AI ethics principles and practices and employ staff who focus specifically on responsible AI, and there is now a budding industry of AI ethics auditing startups helping companies to manage their systems and governance approaches. Other notable examples of informal mechanisms include voluntary standards like NIST’s AI Risk Management Framework as well as IEEE’s 7000 standard series, focused on ethics of autonomous systems.

“While it is unclear which frameworks will de facto become industry practice, there is an ambitious and maturing ecosystem aimed at mitigating AI’s risks and increasing convergence about key problems and possible solutions.

“The upshot of having more-established formal and informal regulatory mechanisms over the next decade is that there will be additional requirements and restrictions placed on AI developers, complemented by changing norms. The question then is which particular practices will diffuse and become commonplace as a result. Among the key changes we might expect are:

  • “Increased evaluations regarding algorithmic fairness, increased documentation and transparency about AI systems and some ability for the public to access this information and exert control over their personal data.
  • “More attempts by governments and companies employing AI systems to share at least some information on their websites or in a centralized government portal describing aspects of these systems including how they were trained, what data were used, their risks and limits and so on (e.g., via model cards or datasheets). These reports and documentation will result, in some cases, from audits (or conformity assessments) by third-party evaluators and in other cases from internal self-study, with a varying range of quality and rigor. For example, cities like Amsterdam and Helsinki are even now capturing information about which AI systems are used in government in systematic databases, and presenting information including the role of human oversight in this process. A similar model is likely to take place in the European Union, certainly with respect to so-called high-risk systems. In one sense then, we will likely have an ecosystem that provides more public access to and knowledge about algorithmic decision-making.
  • “Further, efforts to educate the public, emphasized in many national AI policy strategies, such as Finland’s Elements of AI effort, will be aimed at building public literacy about AI and its implications. In theory, individuals in the public will be able to look up information about which AI systems are used and how they work. In the case of an AI-based harm or incident, they may be able to pursue redress from companies or government. This may be facilitated by civil society watchdog organizations and lawyers who can help bring the most egregious cases to the attention of courts and other government decision-makers.
  • “Further, we might expect researchers and academia or civil society to have increased access to information about AI systems; for example, the Digital Services Act will require that large technology platforms share information about their algorithms with researchers.

“However, there are reasons to be concerned that even these changes in responsible design and monitoring of AI systems will not support much in the way of meaningful control by individual members of the general public. That is, while it may be helpful to have general transparency and oversight by civil society or academia, the impact is unlikely to filter down to the level of individuals.

“The evolution of compliance and user adaptation to privacy regulation exemplifies this problem. Post-GDPR, consumers typically experience increased privacy rights as merely more pop-up boxes to click away. Individuals often lack the time, understanding or incentive to read through information about cookies or to go out of their way to learn about privacy policies and rights. They will quickly click ‘OK’ and not take the time to seek greater privacy or knowledge of ownership of data. Best intentions are not always enough.

“In a similar fashion, government databases or corporate websites with details about AI systems and algorithms are likely insufficient to facilitate meaningful public control of tech-aided decision-making. The harms of automated decision-making can be diffuse, obfuscated by subtle interdependencies and long-term feedback effects. For example, the ways in which social media algorithms affect individuals’ daily lives, social organization and emotional well-being are non-obvious and take time and research to understand. In contrast, the benefits of using a search algorithm or content recommendation algorithm are immediate, and these automated systems are now deeply embedded in how people engage in school, work and leisure.

“As a function of individual psychology, limited time and resources and the asymmetry in understanding benefits versus harms, many individuals in society may simply stick with the default options. While theoretically, they may be able to exercise more control – for example, by opting out of algorithms, or requesting their data be forgotten – many individuals will see no reason to exert such ownership.

“This problem is exacerbated for the individuals who are most vulnerable; the same individuals who are most affected by high-risk automated decision systems (e.g., detainees, children in low-income communities, individuals without digital literacy) are the very same people who lack the resources and support to exert control.

“The irony is that the subsets of society most likely to attempt to exert ownership over automated decision systems are those who are less in need. This will leave it to public watchdogs, civil society organizations, researchers and activist politicians to identify and raise specific issues related to automated decision-making. That may involve banning certain use cases or regulating them as issues crystallize. In one sense then, public concerns will be reflected in how automated decision-making systems are designed and implemented, but channeled through elite representatives of the public, who are not always well-placed to understand the public’s preferences.

“One key solution here, again learning from the evolution of privacy policy, is to require more human-centered defaults. Build automated decision systems that are designed to have highly transparent and accessible interfaces, with ‘OK’ button-pushing leading to default choices that protect public rights and well-being and require an individual’s proactive consent for anything other than that. In this setting, members of the public will be more likely to understand and exercise ownership.
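
A minimal sketch of such human-centered defaults, with hypothetical setting names: the one-click ‘OK’ path keeps the protective settings, and loosening any protection requires a deliberate, recorded opt-in.

```python
# Sketch of "human-centered defaults": pressing OK changes nothing, and any
# less-protective choice requires an explicit, logged opt-in. The setting
# names are hypothetical.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ConsentSettings:
    personalized_ads: bool = False      # protective default
    data_sharing: bool = False          # protective default
    automated_profiling: bool = False   # protective default
    opt_in_log: list = field(default_factory=list)

    def accept_defaults(self):
        # The one-click "OK" path: nothing is taken away from the user.
        return self

    def opt_in(self, setting_name):
        # Loosening a protection is a deliberate, recorded action.
        setattr(self, setting_name, True)
        self.opt_in_log.append((setting_name, datetime.now(timezone.utc).isoformat()))
        return self

settings = ConsentSettings().accept_defaults()
settings.opt_in("personalized_ads")
print(settings)
```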

“This will require a collective effort of government and industry, plus design and regulation that is highly sensitive to individual psychology and information-seeking behavior. Unless these efforts can keep pace with innovation pressures, it seems likely that automated decision systems will continue to be put into place as they have been and commercialized to build revenue and increase government efficiency. It may be some time before fully sound and responsible design concepts are established.”

People could lose the ability to make decisions, eventually becoming domesticated and under the control of a techno-elite

Russ White, infrastructure architect at Juniper Networks and longtime Internet Engineering Task Force (IETF) leader, said, “When it comes to decision-making and human agency, what will the relationship look like between humans and machines, bots and systems powered mostly by autonomous and artificial intelligence?

“In part, this will depend on our continued belief in ‘progress’ as a solution to human problems. So long as we hold to a cultural belief that technology can solve most human problems, humans will increasingly take a ‘back seat’ to machines in decision-making. Whether or not we hold to this belief depends on the continued development of systems such as self-driving cars and the continued ‘taste’ for centralized decision-making – neither of which are certain at this point.

“If technology continues to be seen as creating as many problems as it solves, trust in technology and technological decision-making will be reduced, and users will begin to consider them more of a narrowly focused tool rather than a generalized solution to ‘all problems.’ Thus, much of the state of human agency by 2035 depends upon future cultural changes that are hard to predict.

“What key decisions will be mostly automated? The general tendency of technology leaders is to automate higher-order decisions, such as what to have for dinner, or even which political candidate to vote for, or who you should have a relationship with. These kinds of questions tend to have the highest return on investment from a profit-driving perspective and tend to be the most interesting at a human level. Hence, Big Tech is going to continue working toward answering these kinds of questions. At the same time, most users seem to want these same systems to solve what might be seen as more rote or lower-order decisions. For instance, self-driving cars.

“There is some contradiction in this space. Many users seem to want to use technology – particularly social or immersive neurodigital media – to help them make sense out of a dizzying array of decisions by narrowing the field of possibilities. Most people don’t want a dating app to tell them who to date (specifically), but rather to narrow the field of possible partners to a manageable number. What isn’t immediately apparent to users is that technological systems can present what appears to be a field of possibilities in a way that ultimately controls their choice (using the concepts of choice architecture and ‘the nudge’). This contradiction is going to remain at the heart of user conflict and angst for the foreseeable future.

“While users clearly want to be an integral part of making decisions they consider ‘important,’ these are also the decisions which provide the highest return on investment for technology companies. It’s difficult to see how this apparent mismatch of desires is going to play out. Right now, it seems like the tech companies are ‘winning,’ largely because the average user doesn’t really understand the problem at hand, nor its importance. For instance, when users say, ‘I don’t care that someone is monitoring my every move because no one could really be interested in me,’ they are completely misconstruing the problem at hand.

“Will users wake up at some point and take decision-making back into their own hands? This doesn’t seem to be imminent or inevitable.

“What key decisions should require direct human input? This is a bit of a complex question on two fronts. First, all machine-based decisions are actually driven by human input. The only questions are when that human input took place, and who produced the input. Second, all decisions should ultimately be made by humans – there should always be some form of human override on every machine-based decision. Whether or not humans will actually take advantage of these overrides is questionable, however.

“There are many more ‘trolley problems’ in the real world than are immediately apparent, and it’s very hard for machines to consider unintended consequences. For instance, we relied heavily on machines to make public health policies related to the COVID-19 pandemic. It’s going to take many decades, however, to work out the unintended consequences of these policies, although the more cynical among us might say the centralization of power resulting from these policies was intended, just hidden from public view by a class of people who strongly believe centralization is the solution to all human problems.

“How might the broadening and accelerating rollout of tech-abetted, often autonomous decision-making change human society?

  • As humans make fewer decisions, they will lose the ability to make decisions.
  • Humans will continue down the path toward becoming … domesticated, which essentially means some small group of humans will increasingly control the much larger ‘mass of humanity.’

“The alternative is for the technocratic culture to be exposed as incapable of solving human problems early enough for a mass of users to begin treating ML and AI systems as ‘tools’ rather than ‘prophets.’ Which direction we go in is indeterminate at this time.”

AI shapes options and sets differential pricing already; people will not have a sufficient range of control of the choices that are available

Stephen Downes, expert with the Digital Technologies Research Centre of the National Research Council of Canada, commented, “This question can be interpreted multiple ways: Could there be any technology that allows people to be in control, will some such technology exist, and will most technology be like that? My response is that the technology will exist. It will have been created. But it is not at all clear that we will be using it.

“There will definitely be decisions out of our control, for example, whether we are allowed to purchase large items on credit. These decisions are made autonomously by the credit agency, which may not use autonomous agents. If the agent denies credit, there is no reason to believe that a human could, or even should, be able to override this decision.

“A large number of decisions like this about our lives are made by third parties and we have no control over them, for example, credit ratings, insurance rates, criminal trials, applications for employment, taxation rates. Perhaps we can influence them, but they are ultimately out of our hands.

“But most decisions made by technology will be like those made by a simple technology, for example, a device that controls the temperature in your home. It could function as an autonomous thermostat, setting the temperature based on your health, on external conditions, on your finances and on the cost of energy. The question boils down to whether we could control the temperature directly, overriding the decision made by the thermostat.

“For something simple like this, the answer seems obvious: Yes, we would be allowed to set the temperature in our homes. For many people, though, it may be more complex. A person living in an apartment complex, condominium or residence may face restrictions on whether and how they control the temperature.
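
A minimal sketch of this thermostat example, with invented inputs and rules: the agent proposes a setpoint from several conditions, but a direct human override, where one is permitted, always takes precedence.

```python
# Sketch of the autonomous-thermostat example: the agent proposes a setting
# from several inputs, but a manual override always wins. The inputs and the
# simple rules are invented for illustration.

def agent_setpoint(outdoor_temp_c, energy_price, occupant_health_risk):
    setpoint = 21.0
    if energy_price > 0.30:           # expensive energy: trim the setpoint
        setpoint -= 1.0
    if outdoor_temp_c < -10:          # very cold outside: add a margin
        setpoint += 0.5
    if occupant_health_risk:          # health considerations outrank savings
        setpoint = max(setpoint, 22.0)
    return setpoint

def thermostat(manual_override=None, **conditions):
    # The essay's open question is whether this branch exists at all,
    # and who is allowed to use it.
    if manual_override is not None:
        return manual_override
    return agent_setpoint(**conditions)

print(thermostat(outdoor_temp_c=-12, energy_price=0.35, occupant_health_risk=False))
print(thermostat(manual_override=23.5, outdoor_temp_c=-12, energy_price=0.35,
                 occupant_health_risk=False))
```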

“Most decisions in life are like this. There may be constraints such as cost, but generally, even if we use an autonomous agent, we should be able to override it. For most tasks, such as shopping for groceries or clothes, choosing a vacation destination, or selecting videos to watch, we expect to have a range of choices and to be able to make the final decisions ourselves. Where people will not have a sufficient range of control, though, is in the choices that are available to us. We are already seeing artificial intelligences used to shape market options to benefit the vendor by limiting the choices the purchaser or consumer can make.

“For example, consider the ability to select what things to buy. In any given category, the vendor will offer a limited range of items. These menus are designed by an AI and may be based on your past purchases or preferences but are mostly (like a restaurant’s specials of the day) based on vendor needs. Such decisions may be made by AIs deep in the value chain; market prices in Brazil may determine what’s on the menu in Detroit.

“Another common example is differential pricing. The price of a given item may be varied for each potential purchaser based on the AI’s evaluation of the purchaser’s willingness to pay. We don’t have any alternatives – if we want that item (that flight, that hotel room, that vacation package) we have to choose among the prices the vendors choose, not all prices that are available. Or you may want heated seats in your BMW, but the only option is an annual subscription.
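
A toy sketch of differential pricing, with invented numbers and a crude stand-in for the underlying model: the same room is quoted differently to each shopper based on an estimated willingness to pay.

```python
# Toy sketch of differential pricing: the same item is quoted differently per
# shopper based on an estimated willingness to pay. The estimate and all
# numbers are invented for illustration.

BASE_PRICE = 120.00  # nightly rate for the same hotel room

def estimated_willingness_to_pay(profile):
    # In practice this would come from a model trained on purchase history,
    # device type, location and so on; here it is a crude stand-in.
    multiplier = 1.0
    if profile.get("recent_luxury_purchases"):
        multiplier += 0.25
    if profile.get("searching_from_business_district"):
        multiplier += 0.10
    return multiplier

def quoted_price(profile):
    return round(BASE_PRICE * estimated_willingness_to_pay(profile), 2)

print(quoted_price({"recent_luxury_purchases": True}))   # higher quote
print(quoted_price({}))                                   # base quote
```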

“Terms and conditions may reflect another set of decisions being made by AI agents that are outside our control. For example, we may purchase an e-book, but the book may come with an autonomous agent that scans your digital environment and restricts where and how your e-book may be viewed. Your coffee maker may decide that only approved coffee containers are permitted. Your car (and especially rental cars) may prohibit certain driving behaviours.

“All this will be the norm, and so the core question in 2035 will be: What decisions need (or allow) human input? The answer to this, depending on the state of individual rights, is that they might be vanishingly few. For example, we may think that life and death decisions need human input. But it will be very difficult to override the AI even in such cases. Hospitals will defer to what the insurance company AI says, judges will defer to the criminal AI, pilots like those on the 737 MAX cannot override and have no way to counteract automated systems. Could there be human control over these decisions being made in 2035 by autonomous agents? Certainly, the technology will have been developed. But unless the relation between individuals and corporate entities changes dramatically over the next dozen years, it is very unlikely that companies will make it available. Companies have no incentive to allow individuals control.”

A few humans will be in control of decision-making, but ‘everyone else will not be in charge of the most relevant parts of their own lives and their own choices’

Seth Finkelstein, principal at Finkelstein Consulting and Electronic Frontier Foundation Pioneer Award winner, wrote, “These systems will be designed to allow only a few people (i.e., the ruling class, and associated managers) to easily be in control of decision-making, and everyone else will not be in charge of the most relevant parts of their own lives and their own choices.

“There’s an implicit excluded middle in the phrasing of the survey question. It’s either turning the keys over to technology or humans being the primary input in their own lives. It doesn’t consider the case of a small number of humans controlling the system so as to be in charge of the lives and choices of all the other humans.

“There’s not going to be a grand AI in the sky (Skynet) which rules over humanity. Various institutions will use AI and bots to enhance what they do, with all the conflicts inherent therein.

“For example, we don’t often think in the following terms, but for decades militaries have mass-deployed small robots which make autonomous decisions to attempt to kill a target (i.e., with no human in the loop): landmines. Note well: The fact that landmines are analog rather than digital and they use unsophisticated algorithms is of little significance to those maimed or killed. All of the obvious problems – they can attack friendly fighters or civilians, they can remain active long after a war, etc. – are well-known, as are the arguments against them. But they have been extensively used despite all the downsides, as the benefits accrue to a different group of humans than pays the costs. Given this background, it’s no leap at all to see that the explosives-laden drone with facial recognition is going to be used, no matter what pundits wail in horror about the possibility of mistaken identity.

“Thus, any consideration of machine autonomy versus human control will need to be grounded in the particular organization and detailed application. And the bar is much lower than you might naively think. There’s an extensive history of property owners setting booby-traps to harm supposed thieves, and laws forbidding them since such automatic systems are a danger to innocents.

“By the way, I don’t recommend financial speculation, as the odds are very much against an ordinary person. But I’d bet that between now and 2035 there will be an AI company stock bubble.”

A positive outcome for individuals depends on regulations being enforced and everyone being digitally literate enough to understand

Vian Bakir, professor of journalism and political communication at Bangor University, Wales, responded, “I am not sure if humans will be in control of important decision-making in the year 2035. It depends upon regulations being put in place and enforced, and everyone being sufficiently digitally literate to understand these various processes and what it means for them.

“When it comes to decision-making and human agency, what will the relationship look like between humans and machines, bots and systems powered mostly by autonomous and artificial intelligence? It greatly depends upon which part of the world you are considering.

“For instance, in the European Union, the proposed European Union AI Act is unequivocal about the need to protect against the capacity of AI (especially that using biometric data) for undue influence and manipulation. To create an ecosystem of trust around AI, its proposed AI regulation bans use of AI for manipulative purposes; namely, that ‘deploys subliminal techniques … to materially distort a person’s behaviour in a manner that causes or is likely to cause that person or another person physical or psychological harm’ (European Commission, 2021, April 21, Title II Article 5).

“But it’s not yet clear what current applications this might include. For instance, in April 2022, proposed amendments to the EU’s draft AI Act included the proposal from the Committee on the Internal Market and Consumer Protection, and the Committee on Civil Liberties, Justice and Home Affairs, that ‘high-risk’ AI systems should include AI systems used by candidates or parties to influence, count or process votes in local, national or European elections (to address the risks of undue external interference, and of disproportionate effects on democratic processes and democracy).

“Also proposed as ‘high-risk’ are machine-generated complex text such as news articles, novels and scientific articles (because of their potential to manipulate, deceive, or to expose natural persons to built-in biases or inaccuracies); and deepfakes representing existing persons (because of their potential to manipulate the persons that are exposed to those deepfakes and harm the persons they are representing or misrepresenting) (European Parliament, 2022, April 20, Amendments 26, 27, 295, 296, 297). Classifying them as ‘high-risk’ would mean that they would need to meet the Act’s transparency and conformity requirements before they could be put on the market; these requirements, in turn, are intended to build trust in such AI systems.

“We still don’t know the final shape of the draft AI Act. We also don’t know how well it will be enforced. On top of that, other parts of the world are far less protective of their citizens’ relationship to AI.

“What key decisions will be mostly automated? Anything that can be perceived as saving corporations and governments money and that is permissible by law.

“What key decisions should require direct human input? Any decision where there is capacity for harm to individuals or collectives.

“How might the broadening and accelerating rollout of tech-abetted, often autonomous decision-making change human society? If badly applied, it will lead to us feeling disempowered, angered by wrong decisions, and distrustful of AI and those who programme, deploy and regulate it.

“People generally have low digital literacy even in highly digitally literate societies. I expect that people are totally unprepared for the idea of AI making decisions that affect their lives; most are not equipped to challenge this.”

‘Whoever controls these algorithms will be the real government’

Tom Valovic, journalist and author of “Digital Mythologies,” shared passages from a recent article, writing, “In a second Gilded Age in which the power of billionaires and elites over our lives is now being widely questioned, what do we do about their ability to radically and undemocratically alter the landscape of our daily lives using the almighty algorithm? The poet Richard Brautigan said that one day we might all be watched over by ‘machines of loving grace.’ I surmise Brautigan might do a quick 180 if he were alive today. He would see how intelligent machines in general and AI in particular were being semi-weaponized or otherwise appropriated for purposes of a new kind of social engineering. He would also likely note how this process is usually positioned as something ‘good for humanity’ in vague ways that never seem to be fully explained.

“In the Middle Ages, one of the great power shifts that took place was from medieval rulers to the church. In the age of the Enlightenment, another shift took place: from the church to the modern state. Now we are experiencing yet another great transition: a shift of power from state and federal political systems to corporations and, by extension, to the global elites that are increasingly exerting great influence. It seems abundantly clear that technologies such as 5G, machine learning and AI will continue to be leveraged by technocratic elites for the purposes of social engineering and economic gain.

“As Yuval Harari, one of transhumanism’s most vocal proponents, has stated: ‘Whoever controls these algorithms will be the real government.’ If AI is allowed to begin making decisions that affect our everyday lives in the realms of work, play and business, it’s important to be aware of who this technology serves. We have been hearing promises for some time about how advanced computer technology was going to revolutionize our lives by changing just about every aspect of them for the better. But the reality on the ground seems to be quite different than what was advertised.

“Yes, there are many areas where it can be argued that the use of computer and Internet technology has improved the quality of life. But there are just as many others where it has failed miserably. Health care is just one example. Here, misguided legislation combined with an obsession with insurance company-mandated data gathering has created massive info-bureaucracies where doctors and nurses spend far too much time feeding patient data into huge information databases where it often seems to languish. Nurses and other medical professionals have long complained that too much of their time is spent on data gathering and not enough time focusing on health care itself and real patient needs.

“When considering the use of any new technology, the questions should be asked: Who does it ultimately serve? And to what extent are ordinary citizens allowed to express their approval or disapproval of the complex technological regimes being created that we all end up involuntarily depending upon?”

‘Our experiences are often manipulated by unseen and largely unknowable mechanisms; the one consistent experience is powerlessness’

Doc Searls, internet pioneer and co-founder and board member at Customer Commons, observed, “Human agency is the ability to act with full effect. We experience agency when we put on our shoes, walk, operate machinery, speak and participate in countless other activities in the world. Thanks to agency, our shoes are on, we go where we mean to go, we say what we want and machines do what we expect them to do.

“Those examples, however, are from the physical world. In the digital world of 2022, many effects of our intentions are less than full. Search engines and social media operate us as much as we operate them. Search engines find what they want us to want, for purposes which at best we can only guess at. In social media, our interactions with friends and others are guided by inscrutable algorithmic processes. Our Do Not Track requests to websites have been ignored for more than a decade. Meanwhile, sites everywhere present us with ‘your choices’ to be tracked or not, biased to the former, with no record of our own about what we’ve ‘agreed’ to. Equipping websites and services with ways to obey privacy laws while violating their spirit is a multibillion-dollar industry. (Search for ‘GDPR+compliance’ to see how big it is.)
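
A minimal sketch, with hypothetical fields, of the kind of personal record Searls says is missing: each consent decision is logged on the person’s own side rather than only by the site presenting the choices.

```python
# Sketch of a personal, client-side record of consent decisions, the kind of
# "record of our own" the essay notes is missing. The file name and fields
# are hypothetical.

import json
from datetime import datetime, timezone

CONSENT_LOG = "my_consent_log.jsonl"

def record_consent(site, purpose, accepted):
    entry = {
        "site": site,
        "purpose": purpose,
        "accepted": accepted,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    with open(CONSENT_LOG, "a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

record_consent("example-news.com", "ad tracking", accepted=False)
record_consent("example-shop.com", "order processing", accepted=True)
```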

“True, we do experience full agency in some ways online. The connection stays up, the video gets recorded, the text goes through, the teleconference happens. But even in those cases, our experiences are observed and often manipulated by unseen and largely unknowable corporate mechanisms.

“Take shopping, for example. While a brick-and-mortar store is the same for everyone who shops in it, an online store is different for everybody, because it is personalized: made ‘relevant’ by the site and its third parties, based on records gained by tracking us everywhere. Or take publications. In the physical world, a publication will look and work the same for all its readers. In the digital world, the same publication’s roster of stories and ads will be different for everybody. In both cases, what you see is not personalized by you. ‘Tech-aided decision-making’ is biased by the selfish interests of retailers, advertisers, publishers and service providers, all far better equipped than any of us. In these ‘tech-aided’ environments, people cannot operate with full agency. We are given no more agency than site and service operators provide, separately and differently.

“The one consistent experience is of powerlessness over those processes.

“Laws protecting personal privacy have also institutionalized these limits on human agency rather than liberating us from them. The GDPR does that by calling human beings mere ‘data subjects,’ while granting full agency to the ‘data controllers’ and ‘data processors’ to which data subjects are subordinated and on which they depend. The CCPA [California Consumer Privacy Act] reduces human beings to mere ‘consumers,’ with rights limited to asking companies not to sell personal data and asking them to give back data they have collected. One must also do this separately for every company, without standard and global ways of doing it. Like the GDPR, the CCPA does not even imagine that ‘consumers’ would or should have their own ways to obtain agreements or to audit compliance.

“This system is lame, for two reasons. One is that too much of it is based on surveillance-fed guesswork, rather than on good information provided voluntarily by human beings operating at full agency. The other is that we are reaching the limits of what giant companies and governments can do.

“We can replace this system, just like we’ve replaced or modernized every other inefficient and obsolete system in the history of tech.

“It helps to remember that we are still new to digital life. ‘Tech-aided decision-making,’ provided mostly by Big Tech, is hardly more than a decade old. Digital technology is also only a few decades old and will be with us for dozens or thousands of decades to come. In these early decades, we have done what comes easiest, which is to leverage familiar and proven industrial models that have been around since industry won the industrial revolution, only about 1.5 centuries ago.

“Human agency and ingenuity are boundlessly capable. We need to create our own tools for exercising both. Whether or not we’ll do that by 2035 is an open question. Given Amara’s Law (that we overestimate in the short term and underestimate in the long term), we probably won’t meet the 2035 deadline. (Hence my ‘No’ vote on the research question in this canvassing.) But I believe we will succeed in the long run, simply because human agency in both the industrial and digital worlds is best expressed by humans using machines. Not by machines using humans.

“The work I and others are doing at Customer Commons is addressing these issues. Here are just some of the business problems that can be solved only from the customer’s side:

1) “Identity: Logins and passwords are burdensome leftovers from the last millennium. There should be (and already are) better ways to identify ourselves and to reveal to others only what we need them to know. Working on this challenge is the SSI (Self-Sovereign Identity) movement. The solution here for individuals is tools of their own that scale.

2) “Subscriptions: Nearly all subscriptions are pains in the butt. ‘Deals’ can be deceiving, full of conditions and changes that come without warning. New customers often get better deals than loyal customers. And there are no standard ways for customers to keep track of when subscriptions run out, need renewal, or change. The only way this can be normalized is from the customers’ side.

3) “Terms and conditions: In the world today, nearly all of these are ones that companies proffer, and we have little or no choice about agreeing to them. Worse, in nearly all cases, the record of agreement is on the company’s side. Oh, and since the GDPR came along in Europe and the CCPA in California, entering a website has turned into an ordeal typically requiring ‘consent’ to privacy violations the laws were meant to stop. Or worse, agreeing that a site or a service provider spying on us is a ‘legitimate interest.’ The solution here is terms individuals can proffer and organizations can agree to. The first of these is #NoStalking, which allows a publisher to do all the advertising they want, so long as it’s not based on tracking people. Think of it as the opposite of an ad blocker. (Customer Commons is also involved in the IEEE’s P7012 Standard for Machine Readable Personal Privacy Terms.)

4) “Payments: For demand and supply to be truly balanced, and for customers to operate at full agency in an open marketplace (which the Internet was designed to support), customers should have their own pricing gun: a way to signal – and actually pay willing sellers – as much as they like, however they like, for whatever they like, on their own terms. There is already a design for that, called Emancipay.

5) “Intentcasting: Advertising is all guesswork, which involves massive waste. But what if customers could safely and securely advertise what they want, and only to qualified and ready sellers? This is called intentcasting, and to some degree it already exists. Toward this, the Intention Byway is a core focus of Customer Commons. (Also see a list of intentcasting providers on the ProjectVRM Development Work list.)

6) “Shopping: Why can’t you have your own shopping cart – that you can take from store to store? Because we haven’t invented one yet. But we can. And when we do, all sellers are likely to enjoy more sales than they get with the current system of all-siloed carts.

7) “Internet of Things: What we have so far are the Apple of things, the Amazon of things, the Google of things, the Samsung of things, the Sonos of things, and so on – all siloed in separate systems we don’t control. Things we own on the Internet should be our things. We should be able to control them, as independent operators, as we do with our computers and mobile devices. (Also, by the way, things don’t need to be intelligent or connected to belong to the Internet of Things. They can be or have persistent compute objects, or ‘picos.’)

8) “Loyalty: All loyalty programs are gimmicks, and coercive. True loyalty is worth far more to companies than the coerced kind, and only customers are in a position to truly and fully express it. We should have our own loyalty programs to which companies are members, rather than the reverse.

9) “Privacy: We’ve had privacy tech in the physical world since the inventions of clothing, shelter, locks, doors, shades, shutters and other ways to limit what others can see or hear – and to signal to others what’s OK and what’s not. Online, all we have instead are unenforced promises by others not to watch our naked selves or report what they see to others. Or worse, coerced urgings to ‘accept’ spying on us and distributing harvested information about us to parties unknown, with no record of what we’ve agreed to.

10) “Customer service: There are no standard ways to call for service yet, or to get it. And there should be.

11) “Regulatory compliance: Especially around privacy. Because really, all the GDPR and the CCPA want is for companies to stop spying on people. Without any privacy tech on the individual’s side, however, responsibility for everyone’s privacy is entirely a corporate burden. This is unfair to people and companies alike, as well as insane – because it can’t work. Worse, nearly all B2B ‘compliance’ solutions only solve the felt need by companies to obey the letter of these laws while ignoring their spirit. But if people have their own ways to signal their privacy requirements and expectations (as they do with clothing and shelter in the natural world), life gets a lot easier for everybody, because there’s something there to respect. We don’t have that yet online, but it shouldn’t be hard. For more on this, see Privacy is Personal and our own Privacy Manifesto.

12) “Real relationships: Ones in which both parties actually care about and help each other, and good market intelligence flows both ways. Marketing by itself can’t do it. All you get is the sound of one hand slapping. (Or, more typically, pleasuring itself with mountains of data and fanciful maths first described in Darrell Huff’s ‘How to Lie With Statistics,’ written in 1954.) Sales can’t do it either because its job is done once the relationship is established. CRM can’t do it without a VRM hand to shake on the customer’s side. An excerpt from Project VRM’s ‘What Makes a Good Customer’: ‘Consider the fact that a customer’s experience with a product or service is far more rich, persistent and informative than is the company’s experience selling those things or learning about their use only through customer service calls (or even through pre-installed surveillance systems such as those which for years now have been coming in new cars). The curb weight of customer intelligence (knowledge, know-how, experience) with a company’s products and services far outweighs whatever the company can know or guess at. So, what if that intelligence were to be made available by the customer, independently, and in standard ways that work at scale across many or all of the companies the customer deals with?’

13) “Any-to-any/many-to-many business: A market environment where anybody can easily do business with anybody else, mostly free of centralizers or controlling intermediaries (with due respect for inevitable tendencies toward federation). There is some movement in this direction around what’s being called Web3.

14) “Life-management platforms: KuppingerCole has been writing and thinking about these since not long after they gave ProjectVRM an award for its work, way back in 2007. These have gone by many labels: personal data clouds, vaults, dashboards, cockpits, lockers and other ways of characterizing personal control of one’s life where it meets and interacts with the digital world. The personal data that matters in these is the kind that matters in one’s life: health (e.g., HIEofOne), finances, property, subscriptions, contacts, calendar, creative works and so on, including personal archives for all of it. Social data out in the world also matters, but is not the place to start, because that data is less important than the kinds of personal data listed above – most of which has no business being sold or given away for goodies from marketers. (See ‘We Can Do Better Than Selling Our Data.’)

“The source for that list (with lots of links) is at Customer Commons, where we are working with the Ostrom Workshop at Indiana University on the Bloomington Byway, a project toward meeting some of these challenges at the local level. If we succeed, I’d like to change my vote on this future of human agency question from ‘No’ to ‘Yes’ before that 2035 deadline.”

A human-centered scenario for 2035: Trusted tech must augment, not replace people’s choices

Sara M. Watson, writer, speaker and independent technology critic, replied with a scenario, writing, “The year is 2035. Intelligent agents act on our behalf, prioritizing collective and individual human interests above all else. Technological systems are optimized to maximize for democratically recognized values of dignity, care, well-being, justice, equity, inclusion and collective- and self-determination. We are equal stakeholders in socially and environmentally sustainable technological futures.

“Dialogic interfaces ask open questions to capture our intent and confirm that their actions align with stated needs and wants in virtuous, intelligent feedback loops. Environments are ambiently aware of our contextual preferences and expectations for engagement. Rather than paternalistic or exploitative defaults, smart homes nudge us toward our stated intentions and desired outcomes. We are no longer creeped out by the inferred false assumptions that our data doppelgängers perpetuate behind the uncanny shadows of our behavioral traces. This is not a utopian impossibility. It is an alternative liberatory future that is the result of collective action, care, investment and systems-thinking work. It is born out of the generative, constructive criticism of our existing and emergent relationship to technology.

“In order to achieve this:

  • Digital agents must act on stakeholders’ behalf with intention, rather than based on assumptions.
  • Technology must augment, rather than replace human decision-making and choice.
  • Stakeholders must trust technology.

“The stakes of privacy for our digital lives have always been about agency. Human agency and autonomy are the power and freedom of self-determination. Machine agency and autonomy are realized when systems have earned the trust to act independently. Socio-technical futures will rely on both in order for responsible technological innovation to progress.

“As interfaces become more intimate, seamless and immersive, we will need new mechanisms and standards for establishing and maintaining trust. Examples:

  • Audio assistants and smart speakers present users not with a list of 10 search results but instead initiate a single command line action.
  • Augmented-reality glasses and wearable devices offer limited real estate for real time detail and guidance.
  • Virtual reality and metaverse immersion raise the stakes for connected, embodied safety.
  • Synthetic media like text and image generation are co-created through the creativity and curation of human artistry.
  • Neural interfaces’ input intimacy will demand confidence in maintaining control of our bodies and minds.

“Web3 principles and technical standards promise trustless mechanism solutions, but those standards have been quickly gobbled by rent seekers and zero-to-one platform logics before significant shifts in markets, norms and policy incentive structures can sustainably support their vision. Technology cannot afford to continue making assumptions based on users’ and consumers’ observed behaviors. Lawrence Lessig’s four forces of regulatory influence over technology must be enacted:

  • Code – Technology is built with agency by design.
  • Markets – Awareness and demand for agency interfaces increases.
  • Norms – Marginalized and youth communities are empowered to imagine what technology agency futures look like.
  • Law – Regulators punish and disincentivize exploitative, extractive economic logics.”

Humans are a faction- and fiction-driven species that can be exploited for profit

John Hartley, professor of digital media and culture at the University of Sydney, Australia, observed, “The question is not what decision-making tech does to us, but who owns it. Digital media technologies and computational platforms are globalising much faster than formal educational systems, faster indeed than most individual or community lives. They are, however, neither universal nor inclusive. Each platform does its best to distinguish itself from the others (they are not interoperable but they are in direct competition), and no computational technology is used by everyone as a common human system (in contrast to natural language).

“Tech giants are as complex as countries, but they use their resources to fend off threats from each other and from external forces (e.g., regulatory and tax regimes), not to unify their users in the name of the planet. Similarly, countries and alliances are preoccupied with the zones of uncertainty among them, not with planetary processes at large.

“Taken as a whole, over evolutionary and historical time, ‘we’ (H. sapiens) are a parochial, aggressive, faction- and fiction-driven species. It has taken centuries – and is an ongoing struggle – to elaborate systems, institutions and expertise that can exceed these self-induced boundaries. Science seeks to describe the external world but is still learning how to exceed its own culture-bound limits. Further, in the drive toward interpretive neutrality, science has applied Occam’s razor all the way down to the particle, whose behaviour is reduced to mathematical codes. In the process, science loses its connection to culture, which it must needs restore not by data but by stories.

“For their part, corporations seek to turn everyone into a consumer, decomposing what they see as ‘legacy’ cultural identities into infinitely substitutable units, of which the ideal type is the robot. They promote stories of universal freedom to bind consumers closer to the value placed on them in the information economy, which hovers somewhere between livestock (suitable for data-farming) and uselessness (replaceable by AI).

“Universal freedom is not the same as value. In practice, something can only have value if somebody owns it. Things that can’t be owned have no value: the atmosphere; biosphere; individual lives; language; culture. These enter the calculus of economic value as resource, impediment, or waste. In the computational century, knowledge has been monetised in the form of information, code and data, which in turn have taken the economic calculus deep into the space previously occupied by life, language, culture and communication. These, too, now have value. But that’s not the same as meaning.

“Despite what common sense might lead you to think, ‘universal freedom’ does not mean the achievement of meaningful senses of freedom among populations. Commercial and corporate appropriations of ‘universal freedom’ restrict that notion to the accumulation of property, for which a widely consulted league table is Forbes’ rich lists, maintained in real time, with ‘winners’ and ‘losers’ calculated on a daily basis.

“For their part, national governments and regulatory regimes use strategic relations not to sustain the world as a whole but for defence and home advantage. Strategy is used to govern populations (internally) and to outwit adversaries (externally). It is not devoted to the overall coordination of self-created groups and institutions within their jurisdiction, but to advantage corporate and political friends, while confounding foes. As a result, pan-human stories are riven with conflict and vested interests. It’s ‘we’ against ‘they’ all the way down, even in the face of global threats to the species, as in climate change and pandemics.

“Knowledge of the populace as a whole tends to have value only in corporate and governmental terms. In such an environment, populations are known not through their own evolved cultural and semiotic codes, but as bits of information, understood as the private property of the collecting agency. A ‘semiosphere’ has no economic value, unlike ‘consumers’ and ‘audiences,’ from which economic data can be harvested. Citizens and the public (aka ‘voters’ and ‘taxpayers’) have no intrinsic value but are sources of uncertainty in decision-making and action. Such knowledge is monopolised by marketing and data-surveillance agencies, where ‘the people’ remain ‘other.’

“Population-wide self-knowledge, at semiospheric scale, is another domain where meaning is rich but value is small. Unsurprisingly, economic and governmental discourses routinely belittle collective self-knowledge that they deem not in their interests. Thus, they might applaud ‘unions’ if they are populist-nationalist-masculine sporting codes, but campaign against self-created and self-organised unions among workers, women, and human-rights activists. They pursue anti-intellectual agendas, since their interests are to confine the popular imagination to fictions and fantasies, and not to emancipate it into intellectual freedom and action. From the point of view of partisans in the ‘culture wars,’ the sciences and humanities alike are cast as ‘they’ groups, foreign – and hostile – to the ‘we’ of popular culture. Popular culture is continually apt to be captured by top-down forces with an authoritarian agenda. Popularity is sought not for universal public good but for the accumulation of private profit at corporate scale. As has been the case since ancient empires introduced the terms, democracy is fertile ground for tyranny.”

We need to rethink the foundations of political economy – human agency, identity and intelligence are not what we think they are

Jim Dator, well-known futurist, director of the Hawaii Center for Futures Studies and author of the 2022 book “Beyond Identities: Human Becomings in Weirding Worlds,” wrote a three-part response tying into the topics of agency, identity and intelligence.

1) “Agency – In order to discuss the ‘future of human agency and the degree to which humans will remain in control of tech-aided decision-making,’ it is necessary to ask whether humans, in fact, have agency in the way the question implies, and, if so, what its source and limits might be.

“Human agency is often understood as the ability to make choices and to act on behalf of those choices. Agency often implies free will – that the choices humans make are not predetermined (by biology and/or experience, for example) but are made somehow freely.

“To be sure, most humans may feel that they choose and act freely, and perhaps they do, but some evidence from neuroscience – which is always debatable – suggests that what we believe to be a conscious choice may actually be formulated unconsciously before we act; that we do not freely choose but rather rationalize predetermined decisions. Humans may not be rational actors but rather rationalizing actors.

“Different cultures sometimes prefer certain rationalizations over others – some say God or the devil or sorcerers or our genes made us do it. Other cultures expect us to say we make our choices and take our actions after carefully weighing the pros and cons of action – rational choices. What we may actually be doing is rationalizing, not reasoning.

“This is not just a picayune intellectual distinction. Many people reading these words live in cultures whose laws and economic theories are based on assumptions of rational decision-making that cause great pain and error because those assumptions may be completely false. If so, we need to rethink (!) the foundations of our political economy and base it on how people actually decide, instead of on how people 300 years ago imagined they did when they built our obsolete constitutions and economies. If human agency is more restricted than most of us assume, we need to tread carefully when we fret about decisions being made by artificial intelligences. Or maybe there is nothing to worry about at all. Reason rules! I think there is reason for concern.

2) “Identity – The 20th century may be called the Century of Identity, among other things. It was a period when people, having lost their identity (often because of wars, forced or voluntary migration, or cultural and environmental change), sought either to create new identities or to recapture lost ones. Being a nation of invaders, slaves and immigrants, America is currently wracked with wars of identity. But there is also a strong rising tide of people rejecting identities that others have imposed on them, seeking to perform different identities that fit them better. Most conspicuous now are diverse queer, transsexual, transethnic and other ‘trans’ identities, as well as biohackers and various posthumans, existing and emerging.

“While all humans are cyborgs to some extent (clothes may make the man, but clothes, glasses, shoes, bicycles, automobiles and other prostheses actually turn the man into a cyborg), true cyborgs in the sense of mergers of humans and high technologies (biological and/or electronic) already exist, with many more on the horizon.

“To be sure, the war against fluid identity is reaching fever pitch and the outcome cannot be predicted, but since identity-creation is the goal of individuals struggling to be free and not something forced on them by the state, it is much harder to stop and it should be admired and greeted respectfully.

3) “Intelligence – For most of humanity’s short time on Earth, life, intelligence and agency were believed to be everywhere, not only in humans but in spirits, animals, trees, rivers, mountains, rocks, deserts, everywhere. Only relatively recently has intelligence been presumed to be the monopoly of humans who were created, perhaps, in the image of an all-knowing God, and were themselves only a little lower than the angels.

“Now science is (re)discovering life, intelligence and agency not just in homo sapiens, but in many or all eukarya [plants, animals, fungi and some single-celled creatures], and even in archaea and bacteria as well as Lithbea – both natural and human-made – such as xenobots, robots, soft artificial-life entities, genetically engineered organisms, etc. See Jaime Gómez-Márquez’s ‘Lithbea, A New Domain Outside the Tree of Life’; Richard Grant’s Smithsonian piece ‘Do Trees Talk to Each Other?’; Diana Lutz’s ‘Microbes Buy Low and Sell High’; and James Bridle’s essay in Wired magazine, ‘Can Democracy Include a World Beyond Humans?,’ in which he suggests, ‘A truly planetary politics would extend decision-making to animals, ecosystems and – potentially – AI.’

“Experts differ about all of this, as well as about the futures of artificial intelligence and life. I have been following the debate for 60 years, and I see ‘artificial intelligence’ to be a swiftly moving target. As Larry Tesler has noted, intelligence is what machines can’t do yet. As machines become smarter and smarter, intelligence always seems to lie slightly ahead of what they just did. The main lesson to be learned from all of this is not to judge ‘intelligence’ by 21st-century Western, cis male, human standards. If it helps, don’t call it ‘intelligence.’ Find some other word that embraces them all and doesn’t privilege or denigrate any one way of thinking or acting. I would call it ‘sapience’ if that term weren’t already appropriated by self-promoting homo. Similarly, many scientists, even those in artificial life (or Alife), want to restrict the word ‘life’ to carbon-based organic processes. OK, but they are missing out on a lot of processes that are very, very lifelike that humans might well want to adapt. It is like saying an automobile without an internal combustion engine is not an automobile.

“Humanity can no longer be considered to be the measure of all things, the crown of creation. We are participants in an eternal evolutionary waltz that enabled us to strut and fret upon the Holocene stage. We may soon be heard from no more, but our successors will be. We are, like all parents, anxious that our prosthetic creations are not exactly like us, while fearful they may be far too much like us after all. Let them be. Let them go. Let them find their agency in the process of forever becoming.”

1. This canvassing was conducted between June 29 and Aug. 8, 2022, before China changed its COVID prevention policies. At the time this respondent answered the question, China practiced a “zero COVID” strategy of strict restrictions and population lockdowns to prevent the spread of the coronavirus. The restrictive policy was loosened in early December 2022, and major COVID outbreaks have occurred since then.
