Code-Dependent: Pros and Cons of the Algorithm Age

Theme 3: Humanity and human judgment are lost when data and predictive modeling become paramount

Many respondents said that as people put too much faith in data, humanity can be lost. Some argued that because technology corporations and, sometimes, governments are most often the agencies behind the code, algorithms are written to optimize efficiency and profitability without much thought about the possible societal impacts of the data modeling and analysis. These respondents said people are considered to be an “input” to the process and they are not seen as real, thinking, feeling, changing beings. Some said that as the process evolves – that is, as algorithms begin to write the algorithms – humans may get left completely out of the loop, letting “the robots decide.”

An anonymous respondent wrote, “We simply can’t capture every data element that represents the vastness of a person and that person’s needs, wants, hopes, desires. Who is collecting what data points? Do the human beings the data points reflect even know, or did they just agree to the terms of service because they had no real choice? Who is making money from the data? How is anyone to know how his/her data is being massaged and for what purposes to justify what ends? There is no transparency, and oversight is a farce. It’s all hidden from view. I will always remain convinced the data will be used to enrich and/or protect others and not the individual. It’s the basic nature of the economic system in which we live.”

Peter Eckart’s comment reflects the attitude of many in this canvassing: “We can create algorithms faster than we can understand or evaluate their impact. The expansion of computer-mediated relationships means that we have less interest in the individual impact of these innovations and more in the aggregate outcomes. So we will interpret the negative individual impact as the necessary collateral damage of ‘progress.’”

Axel Bruns, a professor at the Digital Media Research Center at Queensland University of Technology, said, “There are competitive, regulatory and legal disadvantages that would result from greater transparency on behalf of the platform operator, and so there is an incentive only to further obfuscate the presence and operations of algorithmic shaping of communications processes. This is not to say that such algorithms are inherently ‘bad,’ in the sense that they undermine effective communication; algorithms such as Google’s PageRank clearly do the job that is asked of them, for instance, and overall have made the web more useful than it would be without them. But without further transparency ordinary users must simply trust that the algorithm does what it is meant to do, and does not inappropriately skew the results it delivers. Such algorithms will continue to be embedded deeply into all aspects of human life, and will also generate increasing volumes of data on their fields. This continues to increase the power that such algorithms already have over how reality is structured, measured and represented, and the potential impact that any inadvertent or deliberate errors could have on user activities, on society’s understanding of itself, and on corporate and government decisions. More fundamentally, the increasing importance of algorithms to such processes also transfers greater importance to the source data they work with, amplifying the negative impacts of data gaps and exclusions.”
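
Bruns’s PageRank example is easy to make concrete. The sketch below shows the power-iteration idea at the heart of PageRank: a page matters if pages that matter link to it. The damping factor, iteration count and toy link graph are illustrative assumptions, not Google’s production values.

```python
# A toy version of the idea behind PageRank: rank flows along links, so a
# page matters if pages that matter link to it. The damping factor (0.85),
# iteration count and link graph below are illustrative assumptions only.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if outlinks:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
            else:  # dangling page: spread its rank evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
        rank = new_rank
    return rank

web = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
print(pagerank(web))  # "c" ranks highest: both other pages link to it
```

Even in this fully transparent toy, a quiet parameter choice such as the damping factor shapes the final ordering. Bruns’s point is that users of the real, closed systems cannot inspect even that much; they can only trust.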

An anonymous community advocate said, “There are a lot of places where algorithms are beneficial and helpful, but so far, none of them take into account the actual needs of humans. Human resources are an input in a business equation at the moment, not real, thinking, feeling symbiotes in the eyes of business.”

An anonymous associate professor of political science at a major U.S. university said, “Algorithms are the typecasting of technology. They are a snapshot of behavior influenced by contextual factors that give us a very limited view of an individual. Typecasting is a bad way to be regarded by others and it is a bad way to ‘be.’”

Rebecca MacKinnon, director of the Ranking Digital Rights project at New America, commented, “Algorithms driven by machine learning quickly become opaque even to their creators who no longer understand the logic being followed to make certain decisions or produce certain results. The lack of accountability and complete opacity is frightening. On the other hand, algorithms have revolutionized humans’ relationship with information in ways that have been life-saving and empowering and will continue to do so.”

Programming primarily in pursuit of profits and efficiencies is a threat

A large number of respondents expressed deep concerns about the primary interests being served by networked algorithms. Most kept their comments anonymous, which makes sense since a significant number of the participants in this canvassing are employed by or are funded in some regard by corporate or government interests. As an anonymous chairman and CEO at a nonprofit organization observed, “The potential for good is huge, but the potential for misuse and abuse, intentional and inadvertent, may be greater.” (All respondents not identified by name in this section submitted their comments anonymously.)

One participant described the future this way: “The positives are all pretty straightforward, e.g., you get the answer faster, the product is cheaper/better, the outcome fits the needs more closely. Similarly, the negatives are mostly pretty easy to foresee as well, given that it’s fundamentally people/organizations in positions of power that will end up defining the algorithms. Profit motives, power accumulation, etc., are real forces that we can’t ignore or eliminate. Those who create the algorithms have a stake in the outcome, so they are, by definition, biased. It’s not necessarily bad that this bias is present, but it does have dramatic effects on the outputs, available inputs and various network effects that may be entirely indirect and/or unforeseen by the algorithm developers. As the interconnectedness of our world increases, accurately predicting the negative consequences gets ever harder, so it doesn’t even require a bad actor to create deleterious conditions for groups of people, companies, governments, etc.”

Another respondent said, “The algorithms will serve the needs of powerful interests, and will work against the less-powerful. We are, of course, already seeing this start to happen. Today there is a ton of valuable data being generated about people’s demographics, behaviours, attitudes, preferences, etc. Access to that data (and its implications) is not evenly distributed. It is owned by corporate and governmental interests, and so it will be put to uses that serve those interests. And so what we see already today is that in practice, stuff like ‘differential pricing’ does not help the consumer; it helps the company that is selling things, etc.”
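
The “differential pricing” this respondent mentions is mechanically simple. A hypothetical sketch follows; every profile field and weight is invented for illustration, and no real retailer’s logic is implied.

```python
# Hypothetical sketch of differential pricing: the same item is quoted at
# different prices depending on a user profile. Every field and weight here
# is invented for illustration; no real retailer's logic is implied.

def quote_price(base_price, profile):
    multiplier = 1.0
    if profile.get("device") == "premium_phone":       # crude income proxy
        multiplier += 0.10
    if profile.get("zip_income_percentile", 50) > 80:  # neighborhood proxy
        multiplier += 0.05
    if profile.get("comparison_shopped"):              # price-sensitive user
        multiplier -= 0.08
    return round(base_price * multiplier, 2)

print(quote_price(100.0, {"device": "premium_phone",
                          "zip_income_percentile": 90}))  # 115.0
print(quote_price(100.0, {"comparison_shopped": True}))   # 92.0
```

The asymmetry is the point: the seller sees every profile and every quote, while each buyer sees only their own price. That is what unevenly distributed access to data looks like in practice.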

An IT architect at IBM said, “Companies seek to maximize profit, not maximize societal good. Worse, they repackage profit-seeking as a societal good. We are nearing the crest of a wave the trough side of which is a new ethics of manipulation, marketing, nearly complete lack of privacy. All predictive models, whether used for personal convenience or corporate greed, require large amounts of data. The ways to obtain that are at best gently transformative of culture, and on the low side, destructive of privacy. Corporations’ use of big data predicts law enforcement’s use of shady techniques (e.g., Stingrays) to invade privacy. People all too quickly view law-enforcement as ‘getting the bad guys their due’ but plenty of cases show abuse, mistaken identity, human error resulting in police brutality against the innocent, and so on. More data is unlikely to temper the mistakes; instead, it will fuel police overreach, just as it fuels corporate overreach.”

Said another respondent, “Everything will be geared to serve the interests of the corporations and the 1%. Life will become more convenient, but at the cost of discrimination, information compartmentalization and social engineering.”

A professor noted, “If lean, efficient global corporations are the definition of success, the future will be mostly positive. If maintaining a middle class with opportunities for success is the criterion by which the algorithms are judged, this will not be likely. It is difficult to imagine that the algorithms will consider societal benefits when they are produced by corporations focused on short-term fiscal outcomes.”

A senior software developer wrote, “Smart algorithms can be incredibly useful, but smart algorithms typically lack the black-and-white immediacy that the greedy, stupid and short-sighted prefer. They prefer stupid, overly broad algorithms with lower success rates and massive side effects because these tend to be much easier to understand. As a result, individual human beings will be herded around like cattle, with predictably destructive results on rule of law, social justice and economics. For instance, I see algorithmic social data crunching as leading to ‘PreCrime,’ where ordinary, innocent citizens are arrested because they set off one too many flags in a Justice Department data dragnet.”

One business analyst commented, “The outcome will be positive for society on a corporate/governmental basis, and negative on an individual basis.”

A faculty member at a U.S. university said, “Historically, algorithms are inhumane and dehumanizing. They are also irresistible to those in power. By utilitarian metrics, algorithmic decision-making has no downside; the fact that it results in perpetual injustices toward the very minority classes it creates will be ignored. The Common Good has become a discredited, obsolete relic of The Past.”

Another respondent who works for a major global human rights foundation said, “Algorithms are already put in place to control what we see on social media and how content is flagged on the same platforms. That’s dangerous enough – introducing algorithms into policing, health care, educational opportunities can have a much more severe impact on society.”

An anonymous professor of media production and theory warned, “While there is starting to be citizen response to algorithms, they tend to be seen as neutral if they are seen at all. Since algorithms are highly proprietary and highly lucrative, they are highly dangerous. With TV, the U.S. developed public television; what kind of public space for ownership of information will be possible? It is the key question for anyone interested in the future of democratic societies.”

David Golumbia, an associate professor of digital studies at Virginia Commonwealth University, wrote, “The putative benefits of algorithmic processing are wildly overstated and the harms are drastically underappreciated. Algorithmic processing in many ways deprives individuals and groups of the ability to know about, and to manage, their lives and responsibilities. Even when aspects of algorithmic control are exposed to individuals, they typically have nowhere near the knowledge necessary to understand what the consequences are of that control. This is already widely evident in the way credit scoring has been used to shape society for decades, most of which have been extremely harmful despite the credit system having some benefit to individuals and families (although the consistent provision of credit beyond what one’s income can bear remains a persistent and destructive problem). We are going full-bore into territory that we should be approaching hesitantly if at all, and to the degree that they are raised, concerns about these developments are typically dismissed out of hand by those with the most to gain from those developments.”

An anonymous professor at the University of California-Berkeley observed, “Algorithms are being created and used largely by corporations. The interests of the market economy are not the same as those of the people being subjected to algorithmic decision-making. Costs and the romanticization of technology will drive more and more adoption of algorithms in preference to human-situated decision-making. Some will have positive impacts. But the negatives are potentially huge. And I see no kind of oversight mechanism that could possibly work.”

Joseph Turow, a communications professor at the University of Pennsylvania, said, “A problem is that even as they make some tasks easier for individuals, many algorithms will chip away at their autonomy by using the data from the interactions to profile them, score them, and decide what options and opportunities to present them next based on those conclusions. All this will be carried out by proprietary algorithms that will not be open to proper understanding and oversight even by the individuals who are scored.”

Karl M. van Meter, a sociological researcher and director of the Bulletin of Methodological Sociology, Ecole Normale Supérieure de Paris, said, “The question is really, ‘Will the net overall effect of the next decade of bosses be positive for individuals and society or negative for individuals and society?’ Good luck with that one.”

Sometimes algorithms make false assumptions that place people in an echo chamber/ad delivery system that isn’t really a fit for them. An engineer at a U.S. government organization complained, “Some work will become easier, but so will profiling. I, personally, am often misidentified as one racial type, political party, etc., by my gender, address, career, etc., and bombarded with advertising and spam for that person. If I had an open social profile, would I even have that luxury – or would everything now ‘match’ whichever article I most recently read?”

An anonymous respondent wrote, “One major question is: To what extent will the increased use of algorithms encourage a behaviorist way of thinking of humans as creatures of stimulus and response, capable of being gamed and nudged, rather than as complex entities with imagination and thought? It is possible that a wave of algorithm-ization will trigger new debates about what it means to be a person, and how to treat other people. Philip K. Dick has never been more relevant.”

Evan Selinger, professor of philosophy at the Rochester Institute of Technology, wrote, “The more algorithmic advice, algorithmic decision-making and algorithmic action that occurs on our behalf, the more we risk losing something fundamental about our humanity. But because being ‘human’ is a contested concept, it’s hard to make a persuasive case for when and how our humanity is actually diminished, and how much harm each diminishment brings. Only when better research into these questions is available can a solid answer be provided as to whether more positive or negative outcomes arise.”

Algorithms manipulate people and outcomes, and even ‘read our minds’

Respondents registered fears about the ease with which powerful interests can manipulate people and outcomes through the design of networked intelligence and tools.

Michael Kleeman, senior fellow at the University of California-San Diego, observed, “In the hands of those who would use these tools to control, the results can be painful and harmful.”

Peter Levine, professor and associate dean for research at Tisch College of Civic Life, Tufts University, noted, “What concerns me is the ability of governments and big companies to aggregate information and gain insight into individuals that they can use to influence those individuals in ways that are too subtle to be noticed or countered. The threat is to liberty.”

Freelance journalist Mary K. Pratt commented, “Algorithms have the capability to shape individuals’ decisions without them even knowing it, giving those who control the algorithms (in how they’re built and deployed) an unfair position of power. So while this technology can help in so many areas, it does take away individual decision-making without many even realizing it.”

Martin Shelton, Knight-Mozilla OpenNews Fellow at The Coral Project + New York Times, wrote, “People’s values inform the design of algorithms – what data they will use, and how they will use data. Far too often, we see that algorithms reproduce designers’ biases by reducing complex, creative decisions to simple decisions based on heuristics. Those heuristics do not necessarily favor the person who interacts with them. These decisions typically lead software creators not to optimize for qualitative experiences, but instead to optimize for click-through rates, page views, time spent on page, or revenue. These design decisions mean that algorithms use (sometimes quite misplaced) heuristics to decide which news articles we might be interested in; people we should connect with; products we should buy.”
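
Shelton’s point about optimization targets can be made concrete: a feed ranker is often little more than a sort key, and choosing that key is a value judgment. A minimal sketch, with all scores invented:

```python
# Minimal sketch of Shelton's point: a feed ranker is often little more
# than a sort key, and choosing that key is a value judgment. All scores
# below are invented for illustration.

articles = [
    {"title": "In-depth policy analysis", "predicted_ctr": 0.02, "quality": 0.9},
    {"title": "You won't believe #7!",    "predicted_ctr": 0.11, "quality": 0.2},
]

# Optimize for click-through rate: clickbait wins.
by_ctr = sorted(articles, key=lambda a: a["predicted_ctr"], reverse=True)
print([a["title"] for a in by_ctr])

# Optimize for (hypothetical) editorial quality: analysis wins.
by_quality = sorted(articles, key=lambda a: a["quality"], reverse=True)
print([a["title"] for a in by_quality])
```

Nothing in the code marks the sort key as an editorial decision, yet swapping it changes what every reader sees first.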

Chris Showell, an independent health informatics researcher based in Australia, said, “The organisation developing the algorithm has significant capacity to influence or moderate the behaviour of those who rely on the algorithm’s output. Two current examples: manipulation of the prices displayed in online marketplaces, and use of ‘secret’ algorithms in evaluating social welfare recipients. There will be many others in years to come. It will be challenging for even well-educated users to understand how an algorithm might assess them, or manipulate their behaviour. Disadvantaged and poorly educated users are likely to be left completely unprotected.”

Writer James Hinton commented, “The fact the internet can, through algorithms, be used to almost read our minds, means those who have access to the algorithms and their databases have a vast opportunity to manipulate large population groups. The much-talked-about ‘experiment’ conducted by Facebook to determine if it could manipulate people emotionally through deliberate tampering with news feeds is but one example of both the power, and the lack of ethics, that can be displayed.”

An anonymous president of a consulting firm said, “LinkedIn tries to manipulate me to benefit from my contacts’ contacts and much more. If everyone is intentionally using or manipulating each other, is it acceptable? We need to see more-honest, trust-building innovations and fewer snarky corporate manipulative design tricks. Someone told me that someday only rich people will not have smartphones, suggesting that buying back the time in our day will soon become the key to quality lifestyles in our age of information overload. At what cost, and with what ‘best practices’ for the use of our recovered time per day? The overall question is whether good or bad behaviors will predominate globally.” This consultant suggested: “Once people understand which algorithms manipulate them to build corporate revenues without benefiting users, they will be looking for more-honest algorithm systems that share the benefits as fairly as possible. When everyone globally is online, another 4 billion young and poor learners will be coming online. A system could go viral to win trillions in annual revenues based on micropayments due to sheer volume. Example: The Facebook denumerator app removes the manipulative aspects of Facebook, allowing users to return to more typically social behavior.”

Several respondents expressed concerns about a particular industry – insurers. An anonymous respondent commented, “The increasing migration of health data into the realm of ‘big data’ has potential for the nightmare scenario of Gattaca writ real.”

Masha Falkov, artist and glassblower, said, “It is important to moderate algorithms with human judgment and compassion. Already we see every day how insurance companies attempt to wrest themselves out of paying for someone’s medical procedure. The entire health care system in the U.S. is a madhouse presently moderated by individuals who secretly choose to rebel against its tyranny. Doctors who fight for their patients to get the medicine they need, operators within insurance companies who decide to not deny the patient the service, at the risk of their own job. Our artificial intelligence is only as good as we can design it. If the systems we are using presently do not evolve with our needs, algorithms will be useless at best, harmful at worst.”

Systems architect John Sniadowski noted, “Predictive modeling will make life more convenient, but conversely it will narrow choices and confine individuals into classes of people from which there is no escape. Predictive modeling is unstoppable because international business already sees massive financial advantages by using such techniques. An example of this is insurance, where risk is now being eliminated in search of profits instead of the original concept of insurance being shared risk. People are now becoming uninsurable either because of their geographic location or social position. Premiums are weighted against individuals based on decisions over which the individual has no control, and therefore they cannot improve their situation.”
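
Sniadowski’s insurance example reduces to a scoring formula in which the heavily weighted inputs are outside the policyholder’s control. A hypothetical sketch, with all features and weights invented for illustration:

```python
# Hypothetical premium scoring illustrating Sniadowski's point: when the
# weights on immutable inputs (postcode, age band) dominate, no change in
# behavior moves the premium much. All weights are invented.

def premium(base, postcode_risk, age_risk, driving_record):
    # postcode_risk, age_risk: 0.0 (low) to 1.0 (high), not controllable
    # driving_record: 0.0 (clean) to 1.0 (poor), the only controllable input
    return base * (1 + 2.0 * postcode_risk + 1.5 * age_risk + 0.3 * driving_record)

# A flawless driver in a "high-risk" postcode pays roughly double what a
# poor driver in a "low-risk" postcode pays: shared risk has been unbundled.
print(premium(500, postcode_risk=0.9, age_risk=0.5, driving_record=0.0))  # 1775.0
print(premium(500, postcode_risk=0.1, age_risk=0.2, driving_record=1.0))  # 900.0
```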

Ryan Sweeney, director of analytics at Ignite Social Media, commented, “Every human is different, so an algorithm surrounding health care could tailor a patient’s treatment plan. It could also have the potential to serve the interests of the insurance company over the patient.”

All of this will lead to a flawed yet inescapable logic-driven society

Some who assessed the impacts of algorithms in the next decade expressed the opinion that they are unreliable, “oversold” and “cold,” saying they “give a false impression” of efficacy and are “not easily subject to critique.” An anonymous respondent said, “It’s not that algorithms are the problem; it’s that we think that with sufficient data we will have wisdom. We will become reliant upon ‘algorithms’ and data and this will lead to problematic expectations. Then that’s when things will go awry.”

Jason Hong, an associate professor at Carnegie Mellon University, said, “People will forget that models are only an approximation of reality. The old adage of garbage in, garbage out still applies, but the sheer quantity of data and the speed of computers might give the false impression of correctness. As a trivial example, there are stories of people following GPS too closely and ending up driving into a river.”

An anonymous computer science PhD noted, “Algorithms typically lack sufficient empirical foundations, but are given higher trust by users. They are oversold and deployed in roles beyond their capacity.”

Bob Frankston, internet pioneer and software innovator, said, “The negatives of algorithms will outweigh the positives. There continues to be magical thinking assuming that if humans don’t intervene the ‘right thing’ will happen. Sort of the modern gold bugs that assume using gold as currency prevents humans from intervening. Algorithms are the new gold, and it’s hard to explain why the average ‘good’ is at odds with the individual ‘good.’”

An anonymous respondent observed, “Algorithms are opaque and not easily subject to critique. People too easily believe that they are scientific. Health care – there is not a single study that shows clinical improvement from the use of the electronic health record, and instead of saving costs, it has increased them. Resources going there are resources not going into patient care. Consumer choice – we only see what we are allowed to see in whatever markets we’ve been segmented into. As that segmentation increases, our choices decrease. Corporate consolidation also decreases choices. Likewise news, opportunities, access. Big data can be helpful – like tracking epidemics – but it can also be devastating because there is a huge gap between individuals and the statistical person. We should not be constructing social policy just on the basis of the statistical average but, instead, with a view of the whole population. So I am inclined to believe that big data gets us to Jupiter, and it may help us cope with climate change, but it will not increase justice, fairness, morality and so on.”

B. Remy Cross, an assistant professor of sociology at Webster University in Missouri, said, “Algorithms in particular are prone to a sort of techno-fetishism where they are seen as perfectly unbiased and supremely logical, when they are often nothing of the sort.”

An anonymous technology developer commented, “Algorithms will overestimate the certainty with which people hold convictions. Most people are pretty wishy-washy but algorithms try to define you by estimating feelings/beliefs. If I ‘kind of like’ something I am liable to be grouped with fervent lovers of that thing.”
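
The grouping this respondent describes is a quantization effect: a graded preference is collapsed to a binary signal before any model sees it. A minimal sketch, with the threshold invented for illustration:

```python
# Minimal sketch of the complaint above: a graded preference is collapsed
# to a binary "like" before any model sees it, so a 0.55 ("kind of like")
# and a 0.99 (fervent fan) become indistinguishable downstream. The
# threshold is invented for illustration.

def observed_signal(true_preference, threshold=0.5):
    return 1 if true_preference >= threshold else 0

lukewarm, fervent = 0.55, 0.99
print(observed_signal(lukewarm), observed_signal(fervent))  # 1 1: same group
```

Everything downstream (targeting, recommendations, “people like you”) is built on the 1/0 signal, where the distinction has already been destroyed.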

Some said the aura of definitive digital logic is already difficult to overcome. An anonymous software security consultant bemoaned the lack of quick and fair appeals processes for automated decisions. “It’s already nearly impossible to correct an incorrect credit report, despite the existence of clear laws requiring support for doing so. It seems unlikely that similar problems will be easy to correct in the future unless significant regulation is added around such systems. I am hopeful the benefits will be significant, but I expect the downsides to be far more obvious and easy to spot than the upsides.”

Some respondents said human managers will increasingly ignore people’s needs or leave them unattended as machine intelligence takes over more tasks. An anonymous participant commented, “The use of algorithms will create a distance between those who make corporate decisions and the actual decision that gets made. This will result in the plausible deniability that a manager did not actively control the outcome of the algorithm, and as a result, (s)he is not responsible for the outcome when it affects either the public or the employees.”

Another anonymous participant wrote, “The downsides to these are any situations that do not fit a standard set of criteria or involve judgment calls – large systems do not handle exceptional situations well and tend to be fairly inflexible and complicated to navigate. I see a great deal of trouble in terms of connections between service providers and the public they serve because of a lack of empathy and basic interaction. It’s hard to plan for people’s experiences when the lived experiences of the people one plans for are alien to one’s own experiential paradigm.”

An anonymous computer scientist wrote, “The tech industry is attuned to computer logic, not feelings or ethical outcomes. The industrial ‘productivity’ paradigm is running out of utility, and we need a new one that is centered on more human concerns.”

James McCarthy, a manager, commented, “Sometimes stuff just happens that can’t be accounted for by even a sufficiently complex rules set, and I worry that increasing our dependency on algorithmic decision-making will also create an increasingly reductive view of society and human behavior.”

An anonymous respondent said, “An algorithm is only as good as the filter it is put through, and the interpretation put upon it. Too often we take algorithms as the basis of fact, or the same as a statistic, which they are not. They are ways of collecting information into subjects. An over-reliance on this and the misinterpretation of what they are created for shall lead to trouble within the next decade.”

Some fear people could lose sophisticated decision-making capabilities and local intelligence

Since the early days of widespread adoption of the internet, some have expressed concerns that the fast-evolving dependence upon intelligence augmentation via algorithms will make humans less capable of thinking for themselves. Some respondents in this canvassing noted this as likely to have a negative impact on the capabilities of the individual.

Amali De Silva-Mitchell, a futurist and consultant, wrote, “Predictive modeling will limit individual self-expression, hence innovation and development. It will cultivate a spoon-fed population with those in the elite being the innovators. There will be a loss in complex decision-making skills of the masses. Kings and serfs will be made, and the opportunity for diversification lost, and then even perhaps global innovative solutions lost. The costs of these systems will be too great to overturn if built at a base level. The current trend toward the uniform will be the undoing rather than building of platforms that can communicate with everything so that innovation is left as key and people can get the best opportunities. Algorithms are not the issue; the issue is a standard algorithm.”

An anonymous respondent said, “Automated decision-making will reduce the perceived need for critical thinking and problem solving. I worry that this will increase trust in authority and make decisions of all kinds more opaque.”

Dave McAllister, director at Philosophy Talk, said, “We will find ourselves automatically grouped into classes (caste system) by algorithms. While it may make us more effective at finding the information we need while drowning in a world of big data, it will also limit the scope of synthesis and serendipitous discovery.”

Giacomo Mazzone wrote, “Unfortunately most algorithms that will be produced in the next 10 years will be from global companies looking for immediate profits. This will kill local intelligence, local skills, minority languages, local entrepreneurship because most of the available resources will be drained out by the global competitors. The day that a ‘minister for algorithms toward a better living’ will be created is likely to be too late unless new forms of social shared economy emerge, working on ‘algorithms for happiness.’ But this is likely to take longer than 10 years.”

Jesse Drew, a digital media professor at the University of California-Davis, replied, “Certainly algorithms can make life more efficient, but the disadvantage is the weakening of human thought patterns that rely upon serendipity and creativity.”

Ben Railton, a professor of English and American studies at Fitchburg State University in Massachusetts, wrote, “Algorithms are one of the least attractive parts of both our digital culture and 21st-century capitalism. They do not allow for individual identity and perspective. They instead rely on the kinds of categorizations and stereotypings we desperately need to move beyond.”

Miles Fidelman, systems architect, policy analyst and president at the Center for Civic Networking, wrote, “By and large, tools will disproportionally benefit those who have commercial reasons to develop them – as they will have the motivation and resources to develop and deploy tools faster.”

One respondent warned that looming motivations to apply algorithms more vigorously will limit freedom of expression. Joe McNamee, executive director at European Digital Rights, commented, “The Cambridge/Stanford studies on Facebook likes, the Facebook mood experiment, Facebook’s election turnout experiment and the analysis of Google’s ability to influence elections have added to the demands for online companies to become more involved in policing online speech. All raise existential questions for democracy, free speech and, ultimately, society’s ability to evolve. The range of ‘useful’ benefits is broad and interesting but cannot outweigh this potential cost.”

Solutions should include embedding respect for the individual

Algorithms require and create data. Much of the internet economy has been built by groups offering “free” use of online tools and access to knowledge while minimizing or masking the fact that people are actually paying with their attention and/or allegiance – as well as complete algorithmic access to all of their private information plus ever-more-invasive insights into their hopes, fears and other emotions. Some say it has already gotten to the point at which the data collectors behind the algorithms are likely to know more about you than you do yourself.

Rob Smith, software developer and privacy activist, observed, “The major downside is that in order for such algorithms to function, they will need to know a great deal about everyone’s personal lives. In an ecosystem of competing services, this will require sharing lots of information with that marketplace, which could be extremely dangerous. I’m confident that we can in time develop ways to mitigate some of the risk, but it would also require a collective acceptance that some of our data is up for grabs if we want to take advantage of the best services. That brings me to perhaps the biggest downside. It may be that, in time, people are – in practical terms – unable to opt out of such marketplaces. They might have to pay a premium to contract services the old-fashioned way. In summary, such approaches have a potential to improve matters, at least for relatively rich members of society and possibly for the disadvantaged. But the price is high and there’s a danger that we sleepwalk into things without realising what it has cost us.”

David Adams, vice president of product at a new startup, said, “Overreach in intellectual property in general will be a big problem in our future.”

Some respondents suggested that assuring individuals the rights to and control over their identity is crucial. Garth Graham, board member at Telecommunities Canada, wrote, “The future positives will only outweigh the negatives if the simulation of myself – the anticipation of my behaviours – that the algorithms make possible is owned by me, regardless of who created it.”

Paul Dourish, chancellor’s professor of informatics at the University of California-Irvine, said, “More needs to be done to give people insight into and control over algorithmic processing – which includes having algorithms that work on individuals’ behalf rather than on behalf of corporations.”

Marshall Kirkpatrick, co-founder of Little Bird, previously with ReadWriteWeb and TechCrunch, said, “Most commercial entities will choose to implement algorithms that serve them even at the expense of their constituents. But some will prioritize users and those will be very big. Meeting a fraction of the opportunities that arise will require a tremendous expansion of imagination.”

Susan Price, digital architect and strategist at Continuum Analytics, commented, “The transparent provenance of data and transparent availability of both algorithms and analysis will be crucial to creating the trust and dialog needed to keep these systems fair and relatively free of bias. This necessary transparency is in conflict with the goals of corporations developing unique value in intellectual property and marketing. The biggest challenge is getting humans in alignment on what we collectively hope our data will show in the future – establishing goals that reflect a fair, productive society, and then systems that measure and support those goals.”

One respondent suggested algorithm-writing teams include humanist thinkers. Dana Klisanin, founder and CEO of Evolutionary Guidance Media R&D, wrote, “If we want to weigh the overall impact of the use of algorithms on individuals and society toward ‘positive outweighs negative,’ the major corporations will need to hold themselves accountable through increasing their corporate social responsibility. Rather than revenue being the only return, they need to hire philosophers, ethicists and psychologists to help them create algorithms that provide returns that benefit individuals, society and the planet. Most individuals have never taken a course in ‘Race, Class and Gender,’ and do not recognize discrimination even when it is rampant and visible. The hidden nature of algorithms means that it will take individuals and society that much longer to demand transparency. Or, to say it another way: We don’t know what we don’t know.”

An anonymous respondent who works for the U.S. government cited the difficulties in serving both societal good and the rights of the individual, writing, “There is a tension between the wishes of individuals and the functions of society. Fairness for individuals comes at the expense of some individual choices. It is hard to know how algorithms will end up on the spectrum between favoring individuals over a functioning society because the trend for algorithms is toward artificial intelligence. AI will likely not work the same way that human intelligence does.”

As code takes over complex systems, humans are left out of the loop

As intelligent systems and knowledge networks become more complex and artificial intelligence and quantum computing evolve over the next decade, experts expect that humans will be left further “out of the loop” as more aspects of code creation and maintenance are taken over by machine intelligence.

The vast majority of comments in this vein came from expert respondents who remained anonymous. A sampling of these statements:

An executive director for an open source software organization commented, “Most people will simply lose agency as they don’t understand how choices are being made for them.”

One respondent said, “Everything will be ‘custom’-tailored based on the groupthink of the algorithms; the destruction of free thought and critical thinking will ensure the best generation is totally subordinate to the ruling class.”

Another respondent wrote, “Current systems are designed to emphasize the collection, concentration and use of data and algorithms by relatively few large institutions that are not accountable to anyone, and/or if they are theoretically accountable are so hard to hold accountable that they are practically unaccountable to anyone. This concentration of data and knowledge creates a new form of surveillance and oppression (writ large). It is antithetical to and undermines the entire underlying fabric of the erstwhile social form enshrined in the U.S. Constitution and our current political-economic-legal system. Just because people don’t see it happening doesn’t mean that it’s not, or that it’s not undermining our social structures. It is. It will only get worse because there’s no ‘crisis’ to respond to, and hence, not only no motivation to change, but every reason to keep it going – especially by the powerful interests involved. We are heading for a nightmare.”

A scientific editor observed, “The system will win; people will lose. Call it ‘The Selfish Algorithm’; algorithms will naturally find and exploit our built-in behavioral compulsions for their own purposes. We’re not even consumers anymore. As if that wasn’t already degrading enough, it’s commonplace to observe that these days people are the product. The increasing use of ‘algorithms’ will only – very rapidly – accelerate that trend. Web 1.0 was actually pretty exciting. Web 2.0 provides more convenience for citizens who need to get a ride home, but at the same time – and it’s naive to think this is a coincidence – it’s also a monetized, corporatized, disempowering, cannibalizing harbinger of the End Times. (I exaggerate for effect. But not by much.)”

A senior IT analyst said, “Most people use, and will in the future use, the algorithms as a facility, not understanding their internals. We are in danger of losing our understanding and then losing the capability to do without. Then anyone in that situation will let the robots decide.”

What happens when algorithms write algorithms? “Algorithms in the past have been created by a programmer,” explained one anonymous respondent. “In the future they will likely be evolved by intelligent/learning machines. We may not even understand where they came from. This could be positive or negative depending on the application. If machines/programs have autonomy, this will be more negative than positive. Humans will lose their agency in the world.”
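
What this respondent calls rules “evolved by intelligent/learning machines” can be glimpsed in even the simplest search procedure: the final decision rule is selected, not authored. A toy sketch, with the task and all parameters invented:

```python
# Toy sketch of a decision rule that is evolved rather than written: random
# mutation plus selection finds a cutoff no person chose. The task (recover
# a hidden threshold of 37) and all parameters are invented.
import random

random.seed(1)
data = [(x, x > 37) for x in range(100)]  # hidden rule the system must learn

def fitness(threshold):
    return sum((x > threshold) == label for x, label in data)

threshold = 50.0                                   # arbitrary starting rule
for _ in range(200):
    candidate = threshold + random.uniform(-5, 5)  # mutate
    if fitness(candidate) >= fitness(threshold):   # select
        threshold = candidate

print(round(threshold, 1))  # typically ends near 37, a cutoff nobody wrote
```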

And then there is the possibility of an AI takeover.

Seti Gershberg, executive producer and creative director at Arizona Studios, wrote, “At first, the shift will be a net benefit. But as AI begins to pass the Turing test and potentially become sentient and likely super-intelligent, leading to an intelligence explosion as described by Vernor Vinge, it is impossible to say what they will or will not do. If we can develop a symbiotic relationship with AI or merge with them to produce a new man-machine species, it would be likely humans would survive such an event. However, if we do not create a reason for AI to need humans then they would either ignore us or eliminate us or use us for a purpose we cannot imagine. Recently, the CEO of Microsoft put forth a list of 10 rules for AI and humans to follow with regard to their programming and behavior as a method to develop a positive outcome for both man and machines in the future. However, if humans themselves cannot follow the rules set forth for good behavior and a positive society (i.e., the Ten Commandments – not in a religious sense, but one of common sense) I would ask the question, why would or should AI follow rules humans impose on them?”
