Two lines of thinking about societal divisions ran through many respondents’ answers. First, they predicted that an algorithm-assisted future will widen the gap between the digitally savvy – the most desired customers in the new information ecosystem – and those who are not nearly as connected or able to participate. Second, they observed that social and political divisions will be abetted by algorithms, as algorithm-driven insights encourage people to live in echo chambers of repeated and reinforced media and political content. As one respondent put it, “Algorithms risk entrenching people in their own patterns of thought and like-mindedness.”

The disadvantaged are likely to be even more so

Some respondents predicted that those individuals who are already being left out or disadvantaged in the digital age will fall even further behind as algorithms become more embedded in society. They noted that the capacity to participate in digital life is not universal: fast-evolving digital tools and connections are costly, complicated, difficult to maintain and sometimes carry a steep learning curve. And they said algorithmic tools build database-driven profiles of individuals that categorize them in ways that often work to their disadvantage.

Pete Cranston of Euroforic Services wrote, “Smart(er) new apps and platforms will require people to learn how to understand the nature of the new experience, learn how it is guided by software, and learn to interact with the new environment. That has tended to be followed by a catch-up by people who learn then to game the system, as well as navigate it more speedily and reject experiences that don’t meet expectations or needs. The major risk is that less-regular users, especially those who cluster on one or two sites or platforms, won’t develop that navigational and selection facility and will be at a disadvantage.”

Christopher Owens, a community college professor, said, “If the current economic order remains in place, then I do not see the growth of data-driven algorithms providing much benefit to anyone outside of the richest in society.”

Tom Vest, a research scientist, commented, “Algorithms will most benefit the minority of individuals who are consistently ‘preferred’ by algorithms, plus those who are sufficiently technically savvy to understand and manipulate them (usually the same group).”

These respondents argued that “upgrades” often do little to make crucial improvements in the public’s experience. Many are incremental and mostly aimed at increasing revenue streams and keeping the public reputations of technology companies – and their shareholder value – high. An anonymous sociologist at the Social Media Research Foundation commented, “Algorithms make discrimination more efficient and sanitized. Positive impact will be increased profits for organizations able to avoid risk and costs. Negative impacts will be carried by all deemed by algorithms to be risky or less profitable.”

Jerry Michalski, founder at REX, commented, “Algorithms are already reshaping – might we say warping? – relationships, citizenship, politics and more. Almost all the algorithms that affect our lives today are opaque, created by data scientists (or similar) behind multiple curtains of privacy and privilege. Worse, the mindset behind most of these algorithms is one of consumerism: How can we get people to want more, buy more, get more? The people designing the algorithms seldom have citizens’ best interests at heart. And that can’t end well. On the positive side, algorithms may help us improve our behavior on many fronts, offsetting our weaknesses and foibles or reminding us just in time of vital things to do. But on the whole, I’m pessimistic about algorithm culture.”

Some of these experts said that – as smart, networked devices and big data combine and allow the creation of highly detailed databased profiles of individuals that follow them everywhere and impact their transactions – people of lesser means and those with some socially questionable acts in their backgrounds will be left out, cheated or forced to come up with alternate methods by which to operate securely, safely and fairly in information networks.

Dave Howell, a senior program manager in the telecommunications industry, replied, “Algorithms will identify the humans using connected equipment. Identity will be confirmed through blockchain by comparison to trusted records of patterns, records kept by the likes of [Microsoft], Amazon, Google. But there are weaknesses to any system, and innovative people will work to game a system. Advertising companies will try to identify persons against their records, blockchains can be compromised (given a decade someone will …). Government moves too slowly. The Big Five (Microsoft, Google, Apple, Amazon, Facebook) will offer technology for trust and identity, few other companies will be big enough. Scariest to me is Alibaba or China’s state-owned companies with the power to essentially declare who is a legal person able to make purchases or enter contracts. Government does not pay well enough to persevere. I bet society will be stratified by which trust/identity provider one can afford/qualify to go with. The level of privacy and protection will vary. Lois McMaster [Bujold]’s Jackson’s Whole suddenly seems a little more chillingly realistic.”

Nigel Cameron, president and CEO of the Center for Policy on Emerging Technologies, observed, “Positives: Enormous convenience/cost-savings/etc. Negatives: Radically de-humanizing potential, and who writes/judges the algos? In a consensus society all would be well. But we have radically divergent sets of values, political and other, and algos are always rooted in the value systems of their creators. So the scenario is one of a vast opening of opportunity – economic and otherwise – under the control of either the likes of Zuckerberg or the grey-haired movers of global capital or ….”

Freelancer Julie Gomoll wrote, “The overall effect will be positive for some individuals. It will be negative for the poor and the uneducated. As a result, the digital divide and wealth disparity will grow. It will be a net negative for society.”

Polina Kolozaridi, a researcher at the Higher School of Economics, Moscow, wrote, “The Digital Gap will extend, as people who are good in automating their labour will be able to have more benefits.”

An anonymous associate professor observed, “Whether algorithms positively or negatively impact people’s lives probably depends on the educational background and technological literacy of the users. I suspect that winners will win big and losers will continue to lose – the Matthew effect. This is likely to occur through access to better, cheaper and more-efficient services for those who understand how to use information, and those who don’t understand it will fall prey to scams, technological rabbit holes and technological exclusion.”

An anonymous respondent wrote, “Algorithms are not neutral, and often privilege some people at the expense of those with certain marginalized identities. As data mining and algorithmic living become more pervasive, I expect these inequalities will continue.”

Another respondent wrote, “The benefits will accrue disproportionately to the parts of society already doing well – the upper middle class and above. Lower down the socioeconomic ladder, algorithmic policy may have the potential to improve some welfare at the expense of personal freedom: for example, via aggressive automated monitoring of food stamp assistance, or mandatory online training. People in these groups will also be most vulnerable to algorithmic biases, which will largely perpetuate the societal biases present in the training data. Since algorithms are increasingly opaque, it will be hard to provide oversight or prove discrimination.”
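
The mechanism this respondent describes can be made concrete with a small sketch. Everything in the code below is invented for illustration – the dataset, the group labels and the simplistic group-rate “model” – but it shows how fitting a rule to biased historical decisions reproduces the disparity for new applicants with identical qualifications.

```python
# A minimal, hypothetical sketch of how a model inherits bias from training data.
# The records and the "learned" rule are invented for illustration only.
from collections import defaultdict

# Historical decisions: identical incomes, but group B was approved less often.
history = [
    {"group": "A", "income": 50, "approved": True},
    {"group": "A", "income": 50, "approved": True},
    {"group": "A", "income": 50, "approved": False},
    {"group": "B", "income": 50, "approved": True},
    {"group": "B", "income": 50, "approved": False},
    {"group": "B", "income": 50, "approved": False},
]

rates = defaultdict(list)
for record in history:
    rates[record["group"]].append(record["approved"])

def predict_approval(applicant):
    """'Learned' rule: approve if the group's historical approval rate is >= 0.5."""
    past = rates[applicant["group"]]
    return sum(past) / len(past) >= 0.5

print(predict_approval({"group": "A", "income": 50}))  # True
print(predict_approval({"group": "B", "income": 50}))  # False, despite equal income
```

Because nothing in the fitted rule refers to income or merit, the past pattern simply becomes the future policy, and, as the respondent notes, the opacity of real systems makes such effects hard to detect or prove.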

Algorithms create filter bubbles and silos shaped by corporate data collectors; they limit people’s exposure to a wider range of ideas and reliable information and eliminate serendipity

Code written to make individualized information delivery more accurate (and more monetizable for the creators of the code) also limits what people see, read and understand about the world. It can create “echo chambers” in which people see only what the algorithms determine they want to see. This can limit exposure to opposing views and random, useful information. Among the items mentioned as exemplars in these responses were the United Kingdom’s contentious vote to exit the European Union and the 2016 U.S. presidential election cycle. Some respondents also expressed concerns over the public’s switch in news diet from the pre-internet 20th century’s highly edited, in-depth and professionally reported content to the algorithm-driven viewing and sharing of often-less-reliable news via social media outlets such as Facebook and Twitter.
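
To make that feedback loop concrete, consider a minimal sketch of an engagement-driven recommender. The topic labels, weighting scheme and “explore” knob below are all hypothetical, not a description of any real platform, but they illustrate how ranking purely on past clicks converges on a narrow slice of content, while even a small dose of randomness preserves some serendipity.

```python
# A minimal, hypothetical sketch of engagement-driven recommendation.
from collections import Counter
import random

TOPICS = ["politics-left", "politics-right", "sports", "science", "arts"]

def recommend(click_history, n_items=5, explore=0.0):
    """Pick topics in proportion to past clicks; 'explore' mixes in random picks."""
    counts = Counter(click_history)
    picks = []
    for _ in range(n_items):
        if not click_history or random.random() < explore:
            picks.append(random.choice(TOPICS))  # serendipitous recommendation
        else:
            weights = [counts[t] for t in TOPICS]
            picks.append(random.choices(TOPICS, weights=weights)[0])
    return picks

# A user who starts with one interest and engages only with agreeable items.
history = ["politics-left"]
for _ in range(50):
    for item in recommend(history, explore=0.0):  # explore=0: pure feedback loop
        if item == "politics-left":
            history.append(item)

# After the loop, every recommendation matches the single reinforced topic.
print(Counter(recommend(history, n_items=100)))
# Raising 'explore' above zero reintroduces the random, useful information
# that respondents worry is being engineered away.
```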

Valerie Bock of VCB Consulting commented, “It has definitely come to pass that it is now more possible than ever before to curate one’s information sources so that they include only those which strike one as pleasurable. That’s a real danger, which we’re seeing the impact of in this time of the Brexit and the 2016 U.S. election season. Our society is as polarized as it has ever been. We are going to need to be disciplined about not surrendering to what the robots think we would like to see. I worry that because it will become a hassle to see stuff we don’t ‘like,’ that gradually, fewer and fewer people will see that which challenges them.”

M.E. Kabay, a professor of computer information systems at Norwich University, said, “We may be heading for lowest-common-denominator information flows. Another issue is the possibility of increasingly isolated information bubbles or echo chambers. If the algorithms directing news flow suppress contradictory information – information that challenges the assumptions and values of individuals – we may see increasing extremes of separation in worldviews among rapidly diverging subpopulations.”

Vance S. Martin, an instructional designer at Parkland College, said, “Algorithms save me time when my phone gets a sense for what I will be typing and offers suggestions, or when Amazon or Netflix recommends something based on my history. However, they also close options for me when Google or Facebook determine that I read or watch a certain type of material and then offer me content exclusively from that point of view. This narrows my field of view, my exposure to other points of view. Using history to predict the future can be useful, but overlooks past reasons, rationales and biases. For example, in the past, the U.S. based its immigration quotas on historical numbers of people who came in the past. So if in the early 1800s there was a large number of Scottish immigrants and few Italian immigrants, they would allow in more Scots, and fewer Italians. So a historical pattern leads to future exclusionary policies. So if an algorithm determines that I am male, white, middle-class and educated, I will get different results and opportunities than a female African-American, lower-class aspirant. So ease of life/time will be increased, but social inequalities will presumably become reified.”

Jan Schaffer, executive director at J-Lab, predicted, “The public will increasingly be creeped out by the nonstop data mining.”

An anonymous assistant professor at a state university said, “I worry that the use of algorithms, while not without its benefits, will do more harm than good by limiting information and opportunities. Algorithms and big data will improve health care decisions, for example, but they will really hurt us in other ways, such as their potential influence on our exposure to ideas, information, opinions and the like.”

Steven Waldman, founder and CEO of LifePosts, said, “Algorithms, of course, are not values-neutral. If Twitter thrives on retweets, that seems neutral but it actually means that ideas that provoke are more likely to succeed; if Facebook prunes your news feed to show you things you like, that means you’ll be less exposed to challenging opinions or boring content, etc. As they are businesses, most large internet platforms will have to emphasize content that prompts the strongest reaction, whether it’s true or not, healthy or not.”

The many acts of exclusion committed in leveraging algorithms were a primary concern expressed by Frank Elavsky, a data and policy analyst at Acumen LLC. Outlining what he perceives to be the potential impacts of algorithmic advances over the next decade, he listed a series of points, writing, “Negative changes? Identity security. Privacy.” He included the following on the list of concerning trends he ties to the “Algorithm Age”:

  • “Identity formation – people will become more and more shaped by consumption and desire.
  • Racial exclusion in consumer targeting.
  • Gendered exclusion in consumer targeting.
  • Class exclusion in consumer targeting – see Google’s campaign to educate many in Kansas on the need for a fiber-optic infrastructure.
  • Nationalistic exclusion in consumer targeting.
  • Monopoly of choice – large companies control the algorithms or results that people see.
  • Monopoly of reliable news – already a problem on the internet, but consumer bias will only get worse as algorithms are created to confirm your patterns of interest.”

An anonymous social scientist spoke up for serendipity. “We are mostly unaware of our own internal algorithms, which, well, sort of define us but may also limit our tastes, curiosity and perspectives,” he said. “I’m not sure I’m eager to see powerful algorithms replace the joy of happenstance. What greater joy is there than to walk the stacks in a graduate library looking for that one book I have to read, but finding one I’d rather? I’m a better person to struggle at getting 5 out of 10 New Yorker cartoons than to have an algorithm deliver 10 they’d know I get. I’m comfortable with my own imperfection; that’s part of my humanness. Efficiency and the pleasantness and serotonin that come from prescriptive order are highly overrated. Keeping some chaos in our lives is important.”

An anonymous political science professor took the idea further, posing that a lack of serendipity can kill innovative thinking. He wrote, “The first issue is that randomness in a person’s life is often wonderfully productive, and the whole purpose of algorithms seems to be to squash those opportunities in exchange for entirely different values (such as security and efficiency). A second, related question is whether algorithms kill experimentation (purposely or not); I don’t see how they couldn’t, by definition.”

Several participants in this canvassing expressed concerns over changes in the public’s information diets: the “atomization of media,” an overemphasis on extreme, ugly, weird news, and the favoring of “truthiness” over more-factual material that may be vital to understanding how to be a responsible citizen of the world.

Respondent Noah Grand commented, “Algorithms help create the echo chamber. It doesn’t matter if the algorithm recognizes certain content or not. In politics and news media it is extremely difficult to have facts that everyone agrees on. Audiences may not want facts at all. To borrow from Stephen Colbert, audiences may prefer ‘truthiness’ to ‘truth.’ Algorithms that recognize ‘engagement’ – likes, comments, retweets, etc. – appear to reward truthiness instead of truth.”
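
Grand’s point – that engagement metrics never consult accuracy – can be shown with a toy scoring function. The weights and posts below are invented for illustration; real ranking systems are opaque, which is itself part of the concern respondents raised.

```python
# A toy illustration (not any platform's actual formula) of engagement ranking.
def engagement_score(post):
    """Hypothetical weights; note that accuracy is never an input."""
    return 1.0 * post["likes"] + 2.0 * post["comments"] + 3.0 * post["retweets"]

posts = [
    {"text": "Careful, accurate report", "accurate": True,
     "likes": 120, "comments": 10, "retweets": 15},
    {"text": "Outrageous, false claim", "accurate": False,
     "likes": 300, "comments": 90, "retweets": 220},
]

for post in sorted(posts, key=engagement_score, reverse=True):
    print(round(engagement_score(post)), post["accurate"], post["text"])
# The false-but-provocative post ranks first: nothing in the objective
# penalizes inaccuracy, so "truthiness" is rewarded over truth.
```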

An anonymous respondent said, “I am troubled by the way algorithms contribute to the atomization of media through Facebook and the like. We are quite literally losing the discursive framework we need to communicate with people who disagree with us.”

An anonymous technician said online discourse choreographed by algorithms creates a sophomoric atmosphere. “Algorithms are just electronic prejudices, just as the big grown-up world is just high school writ large,” he wrote. “We’ll get the same general sense of everything being kind of okay, kind of sucking, and the same daily outrage story, and the same stupid commentary, except algorithms will be the responsible parties, and not just some random schmuck, and artificial intelligences composed of stacks of algorithms will be writing the stories and being outraged.”

Robert Boatright, professor of political science at Clark University, said algorithms remove necessary cognitive challenges, writing, “The main problem is that we don’t encounter information that conflicts with our prior beliefs or habits, and we’re rarely prompted to confront radically new information or content – whether in news, music, purchasing or any of the other sorts of opportunities that we are provided.”

An anonymous IT analyst noted, “Facebook, for example, only shows topics you’ve previously shown interest in on their platform to show you more of the same. You’re far less likely to expand your worldview if you’re only seeing the same narrow-minded stuff every day. It’s a vast topic to delve into when you consider the circumstances a child is born into and how it will affect individuals’ education.”

Respondents also noted that the savviest tech strategists are able to take advantage of algorithms’ features, foibles and flaws to “game the system” and “get the highest profit out of most people.”