The respondents to this canvassing offered a variety of ideas about how individuals and the broader culture might respond to the algorithm-ization of life. They noted that those who create and evolve algorithms are not held accountable to society and argued there should be some method by which they are. They also argued there is a great need for education in algorithm literacy, and that those who design algorithms should be trained in ethics and required to design code that considers societal impacts even as it creates efficiencies.

Glenn Ricart, Internet Hall of Fame member, technologist and founder and CTO of US Ignite, commented, “The danger is that algorithms appear as ‘black boxes’ whose authors have already decided upon the balance of positive and negative impacts – or perhaps have not even thought through all the possible negative impacts. This raises the issue of impact without intention. Am I responsible for all the impacts of the algorithm I invoke, or algorithms invoked in my behalf through my choice of services? How can we achieve algorithm transparency, at least at the level needed for responsible invocation? On the positive side, how can we help everyone better understand the algorithms they choose and use? How can we help people personalize the algorithms they choose and use?”

The pushback will be inevitable but necessary and will, in the long run, result in balances that are more beneficial for all of us.
Scott McLeod

Scott McLeod, an associate professor of educational leadership at the University of Colorado, Denver, is hopeful that the public will gain more control. “While there are dangers in regard to who creates and controls the algorithms,” he said, “eventually we will evolve mechanisms to give consumers greater control that should result in greater understanding and trust. Right now the technologies are far outpacing our individual and societal abilities to make sense of what’s happening and corporate and government entities are taking advantage of these conceptual and control gaps. The pushback will be inevitable but necessary and will, in the long run, result in balances that are more beneficial for all of us.”

It starts with algorithm literacy – this goes beyond basic digital literacy

Because algorithms are generally invisible – even often referred to as “black box” constructs, as they are not evident in user interfaces and their code is usually not made public – most people who use them daily are in the dark about how they work and why they can be a threat. Some respondents said the public should be better educated about them.

Unless there is an increased effort to make true information literacy a part of basic education, there will be a class of people who can use algorithms and a class used by algorithms.
David Lankes

David Lankes, a professor and director at the University of South Carolina School of Library and Information Science, wrote, “There is simply no doubt that, on aggregate, automation and large-scale application of algorithms have had a net-positive effect. People can be more productive, know more about more topics than ever before, identify trends in massive piles of data and better understand the world around them. That said, unless there is an increased effort to make true information literacy a part of basic education, there will be a class of people who can use algorithms and a class used by algorithms.”

An anonymous professor at MIT observed, “[The challenge presented by algorithms] is the greatest challenge of all. Greatest because tackling it demands not only technical sophistication but an understanding of and interest in societal impacts. The ‘interest in’ is key. Not only does the corporate world have to be interested in effects, but consumers have to be informed, educated and, indeed, activist in their orientation toward something subtle. This is what computer literacy is about in the 21st century.”

Trevor Owens, senior program officer at the Institute of Museum and Library Services, agreed, writing, “Algorithms all have their own ideologies. As computational methods and data science become more and more a part of every aspect of our lives, it is essential that work begin to ensure there is a broader literacy about these techniques and that there is an expansive and deep engagement in the ethical issues surrounding them.”

Daniel Menasce, a professor of computer science at George Mason University, wrote, “Algorithms have been around for a long time, even before computers were invented. They are just becoming more ubiquitous, which makes individuals and society at large more aware of their existence in everyday devices and applications. The big concern is the fact that the algorithms embedded in a multitude of devices and applications are opaque to individuals and society. Consider, for example, the self-driving cars currently being developed. They certainly have collision-avoidance and risk-mitigation algorithms. Suppose a pedestrian crosses in front of your vehicle. The embedded algorithm may decide to hit the pedestrian as opposed to ramming the vehicle into a tree because the first choice may cause less harm to the vehicle’s occupants. How does an individual decide if he or she is OK with the myriad decision rules embedded in algorithms that control his or her life and behavior without knowing what the algorithms will decide? This is a non-trivial problem because many current algorithms are based on machine learning techniques, and the rules they use are learned over time. Therefore, even if the source code of the embedded algorithms were made public, it is very unlikely that an individual would know the decisions that would be made at run time. In summary, algorithms in devices and applications have some obvious advantages but pose some serious risks that have to be mitigated.”
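Menasce’s point about runtime opacity can be made concrete with a minimal, hypothetical sketch: the decision logic below is entirely public, yet what it decides depends on weights learned from training data that a reader of the code never sees. (All names and numbers here are illustrative assumptions, not any real system.)

# Hypothetical sketch: the code is public, but the behavior is not.
# The decision hinges on `weights` learned from training data, so reading
# this source reveals little about what it will decide at run time.

def decide(features, weights, bias):
    # Linear scoring rule: swerve if the learned score crosses zero.
    score = sum(f * w for f, w in zip(features, weights)) + bias
    return "swerve" if score > 0 else "brake"

# Two models with identical source code but different learned parameters
# make opposite decisions on the same input.
features = [0.8, 0.1, 0.3]          # e.g., pedestrian proximity, speed, angle
model_a = ([1.2, -0.4, 0.9], -0.5)  # weights learned from one dataset
model_b = ([-0.7, 0.2, 0.1], 0.05)  # weights learned from another

print(decide(features, *model_a))   # swerve
print(decide(features, *model_b))   # brake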

An anonymous policy adviser said, “There is a need for algorithmic literacy, and to critically assess outcomes from, e.g., machine learning, and not least how this relates to biases in the training data. Finding a framework to allow for transparency and assess outcomes will be crucial. Also a need to have a broad understanding of the algorithmic ‘value chain,’ and that data is the key driver and as valuable as the algorithm which it trains.”
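This adviser’s point about training data can be illustrated with a deliberately tiny, hypothetical sketch: a “model” that does nothing but learn approval rates from historical decisions will faithfully automate whatever bias those decisions contain.

# Hypothetical sketch: biased training data in, biased decisions out.
from collections import defaultdict

def train(history):
    # Learn per-group approval rates from (group, approved) records.
    counts = defaultdict(lambda: [0, 0])   # group -> [approved, total]
    for group, approved in history:
        counts[group][0] += approved
        counts[group][1] += 1
    return {g: a / t for g, (a, t) in counts.items()}

def predict(rates, group):
    # Approve when the learned approval rate for the group exceeds 50%.
    return rates[group] > 0.5

# Invented historical decisions that were biased against group "B":
history = [("A", 1)] * 80 + [("A", 0)] * 20 + [("B", 1)] * 30 + [("B", 0)] * 70
rates = train(history)
print(rates)                # {'A': 0.8, 'B': 0.3}
print(predict(rates, "A"))  # True  -- approved
print(predict(rates, "B"))  # False -- the old bias is now automated

The algorithm itself is impartial arithmetic; the discrimination lives entirely in the data it was trained on, which is why the adviser calls the data as valuable as the algorithm.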

Alexander Halavais, director of the master’s program in social technologies at Arizona State University, said teaching these complex concepts will require a “revolutionary” educational effort. “For society as a whole, algorithmic systems are likely to reinforce (and potentially calcify) existing structures of control,” he explained. “While there will be certain sectors of society that will continue to be able to exploit the move toward algorithmic control, it is more likely that such algorithms will continue to inscribe the existing social structure on the future. What that means for American society is that the structures that make Horatio Alger’s stories so unlikely will make them even less so. Those structures will be ‘naturalized’ as just part of the way in which things work. Avoiding that outcome requires a revolutionary sort of educational effort that is extraordinarily difficult to achieve in today’s America; an education that doesn’t just teach kids to ‘code,’ but to think critically about how social and technological structures shape social change and opportunity.”

Justin Reich, executive director at the MIT Teaching Systems Lab, observed, “The advancing impact of algorithms in our society will require new forms and models of oversight. Some of these will need to involve expanded ethics training in computer science programs to help new programmers better understand the consequences of their decisions in a diverse and pluralistic society. We also need new forms of code review and oversight that respect company trade secrets but don’t allow corporations to invoke secrecy as a rationale for avoiding all forms of public oversight.”

People call for accountability processes, oversight and transparency

2016 was a banner year for algorithm accountability activists. Though they had been toiling largely in obscurity, their numbers have begun to grow. Meanwhile, public interest has increased somewhat as large investments in AI by every top global technology company, breakthroughs in the design and availability of autonomous vehicles and the burgeoning of big data analytics have raised algorithm issues to a new prominence. Many respondents in this canvassing urged that new algorithm accountability, oversight and transparency initiatives be developed and deployed. After the period in which this question was open for comments by our expert group (July 1-Aug. 12, 2016), three important White House reports on AI were released and the Partnership on AI, an industry-centered working group including Amazon, Facebook, Google, IBM and Microsoft, was announced. (Apple joined the partnership in early 2017.)

Algorithmic accountability is a big-tent project, requiring the skills of theorists and practitioners, lawyers, social scientists, journalists and others. It’s an urgent, global cause with committed and mobilized experts looking for support.
Frank Pasquale

In this canvassing, Frank Pasquale, author of The Black Box Society: The Secret Algorithms That Control Money and Information and professor of law at the University of Maryland, wrote: “Empiricists may be frustrated by the ‘black box’ nature of algorithmic decision-making; they can work with legal scholars and activists to open up certain aspects of it (via freedom of information and fair data practices). Journalists, too, have been teaming up with computer programmers and social scientists to expose new privacy-violating technologies of data collection, analysis and use – and to push regulators to crack down on the worst offenders. Researchers are going beyond the analysis of extant data and joining coalitions of watchdogs, archivists, open data activists and public interest attorneys to assure a more balanced set of ‘raw materials’ for analysis, synthesis and critique. Social scientists and others must commit to the vital, long-term project of assuring that algorithms are producing fair and relevant documentation; otherwise, states, banks, insurance companies and other big, powerful actors will make and own more and more inaccessible data about society and people. Algorithmic accountability is a big-tent project, requiring the skills of theorists and practitioners, lawyers, social scientists, journalists and others. It’s an urgent, global cause with committed and mobilized experts looking for support.”

Several participants in the canvassing said the law must catch up to reality.

Lee McKnight, an associate professor at Syracuse University’s School of Information Studies, said, “Given the wide-ranging impact on all aspects of people’s lives, eventually, software liability law will be recognized to be in need of reform, since right now, literally, coders can get away with murder. Inevitably, regulation of implementation and operation of complex policy models such as [the] Dodd-Frank Volcker Rule capital adequacy standards will themselves be algorithmically driven. Regulatory algorithms, code and standards will be – actually already are – being provided as a service. The Law of Unintended Consequences indicates that the increasing layers of societal and technical complexity encoded in algorithms ensure that unforeseen catastrophic events will occur – probably not the ones we were worrying about.”

The government will need to step in, either to prevent some uses of information or to compensate for the discrimination that results.
Mark Lemley

Mark Lemley, a professor of law at Stanford Law School, pointed out the urgent need to address new issues arising out of the abundance of previously unavailable data. He explained, “Algorithms will make life and markets more efficient and will lead to significant advances in health. But they will also erode a number of implicit safety nets that the lack of information has made possible. The government will need to step in, either to prevent some uses of information or to compensate for the discrimination that results.”

Tse-Sung Wu, project portfolio manager at Genentech, used emerging concerns tied to autonomous vehicles as a compelling example of the need for legal reform. He wrote, “Perhaps the biggest peril is the dissolution of accountability unless we change our laws. Who will be held to account when these decisions are wrong? Right now, it’s a person – the driver of a vehicle or, in the case of professional services, someone with professional education and/or certification (a doctor making a diagnosis and coming up with a treatment plan; a judge making a ruling; a manager deciding how to allocate resources, etc.). In each of these, there is a person who is the ultimate decision-maker and, at least at a moral level, the person who is accountable (whether they are held to account is a different question). Liability insurance exists in order to manage the risk of poor decision-making by these individuals. How will our legal system of torts deal with technologies that make decisions: Will the creator of the algorithm be ultimately accountable for the tool? Its owner? Who else? The algorithm will be limited by the assumptions, worldview/mental model and biases of its creator. Will it be easier to tease these out? Will it be harder to hide biases? Perhaps, which would be a good thing. In the end, while technology steadily improves, once again society will need to catch up. We live in a civilization of tools, but the one thing these tools don’t yet do is make important decisions. The legal concepts around product liability closely define the accountabilities for failure or loss of our tools and consumable products. However, once tools enter the realm of decision-making, we will need to update our societal norms (and thus laws) accordingly. Until we come to a societal consensus, we may inhibit the deployment of these new technologies and suffer from them inadvertently.”

Patrick Tucker, author of The Naked Future, wrote, “We can create laws that protect people volunteering information, such as the Genetic Information Nondiscrimination Act (Pub.L. 110–233, 122 Stat. 881), which ensures people aren’t punished for data that they share that then makes its way into an algorithm. The current suite of encryption products available to consumers shows that we have the technical means to allow consumers to fully control their own data and share it according to their wants and needs, and the entire FBI vs. Apple debate shows that there is strong public interest in, and support for, preserving the ability of individuals to create and share data in a way that they can control. The worst possible move we, as a society, can make right now is to demand that technological progress reverse itself. This is futile and shortsighted. A better solution is to familiarize ourselves with how these tools work, understand how they can be used legitimately in the service of public and consumer empowerment, better living, learning and loving, and also come to understand how these tools can be abused.”

Many respondents agreed it is necessary to take immediate steps to protect the public’s interests.

Sandi Evans, an assistant professor at California State Polytechnic University, Pomona, said, “We need to ask: How do we evaluate, understand, regulate, improve, make ethical, make fair, build transparency into, etc., algorithms?”

Lilly Irani, an assistant professor at the University of California-San Diego, wrote, “While algorithms have many benefits, their tendency toward centralization needs to be countered with policy. When we talk about algorithms, we sometimes are actually talking about bureaucratic reason embedded in code. The embedding in code, however, powerfully takes the execution of bureaucracy out of specific people’s hands and into a centralized controller – what Aneesh Aneesh has called algocracy. A second issue is that these algorithms produce emergent, probabilistic results that are inappropriate in some domains where we expect accountable decisions, such as jurisprudence.”

Our algorithms, like our laws, need to be open to public scrutiny, to ensure fairness and accuracy.
Thomas Claburn

Thomas Claburn, editor-at-large at InformationWeek, commented, “Our algorithms, like our laws, need to be open to public scrutiny, to ensure fairness and accuracy.”

One anonymous respondent offered some specific suggestions: “Regarding governance: 1) Let’s start with making it mandatory that all training sets be publicly available. In truth, probably only well-qualified people will review them, but at least vested interests will be scrutinized by diverse researchers whom they cannot control. 2) Before any software is deployed, it should be thoroughly tested not just for function but for values. 3) No software should be deployed in making decisions that affect benefits to people without a review mechanism and the potential to change them if people/patients/students/workers/voters/etc. have a legitimate concern. 4) No lethal software should be deployed without human decision-makers in control. 5) There should be a list of disclosures, at least about operative defaults, so that mere mortals can learn something about what they are dealing with.”

An anonymous senior fellow at a futures organization studying civil rights observed, “There must be redress procedures since errors will occur.”

Another anonymous respondent wrote, “There are three things that need to happen here: 1) A 21st-century solution to the prehistoric approach to passwords; 2) A means whereby the individual has ultimate control over and responsibility for their information; and 3) Governance and oversight of the way these algorithms can be used for critical things (like health care and finance), coupled with an international (and internationally enforceable) set of laws around their use. Solve these, and the world is your oyster (or, more likely, Google’s oyster).”

Robert Bell, co-founder of the Intelligent Community Forum, commented, “Transparency is the great challenge. As these things exert more and more influence, we want to know how they work, what choices are being made and who is responsible. The irony is that, as the algorithms become more complex, the creators of them increasingly do not know what is going on inside the black box. How, then, can they improve transparency?”

Micah Altman, director of research at MIT Libraries, noted, “The key policy question is: How [will we] choose to hold government and corporate actors responsible for the choices that they delegate to algorithms? There is increasing understanding that each choice of algorithm embodies a specific set of choices over what criteria are important to ‘solving’ a problem and what can be ignored. Incenting better choices in algorithms will likely require the actors using them to provide more transparency, to design algorithms explicitly with privacy and fairness in mind, and to be held meaningfully responsible for the consequences.”

Timothy C. Mack, managing principal at AAI Foresight, said, “The use of attention analysis on algorithm dynamics will be a possible technique to pierce the wall of black box decisions, and great progress is being made in that arena.”

Respondents suggested a range of oversight mechanisms, including a “new branch of the [U.S. Federal Communications Commission] made up of coders” and “some kind of a rainbow coalition,” and said any such body must “legislate humanely the protection of both the individual and society in general.”

The most salient question everyone should be asking is the classical one about accountability – ‘quis custodiet ipsos custodes?’ – who guards the guardians?
Mary Griffiths

Mary Griffiths, an associate professor in media at the University of Adelaide in South Australia, replied, “The most salient question everyone should be asking is the classical one about accountability – ‘quis custodiet ipsos custodes?’ – who guards the guardians? And, in particular, which ‘guardians’ are doing what, to whom, using the vast collection of information? Who has access to health records? Who is selling predictive insights, based on private information, to third parties unbeknown to the owners of that information? Who decides which citizens do and don’t need additional background checks for a range of activities? Will someone with mental health issues be ‘blocked’ invisibly from employment or promotion? The question I’ve been thinking about, following UK scholar [Evelyn] Ruppert, is that data is a collective achievement, so how do societies ensure that the collective will benefit? Oversight mechanisms might include stricter access protocols; sign off on ethical codes for digital management and named stewards of information; online tracking of an individual’s reuse of information; opt-out functions; setting timelines on access; no third-party sale without consent.”

An anonymous cloud-computing architect commented, “Closed algorithms in closed organizations can lead to negative outcomes and large-scale failures. If there is not enough oversight and accountability for organizations and how they use their algorithms, it can lead to scenarios where entire institutions fail, leading to widespread collapse. Nowhere is this more apparent than in critical economic institutions. While many of these institutions are considered ‘too big to fail,’ they operate based on highly secretive and increasingly complex rules, with outcomes focused on only a single factor: short-term economic gains. The consequence is that they can lead to economic disparity, increased long-term financial risk and larger social collapse. The proper response to this risk, though, is to increase scrutiny into algorithms, make them open and make institutions accountable for the broader social spectrum of impact from algorithmic decisions.”

An anonymous system administrator commented, “We need some kind of rainbow coalition to come up with rules to avoid allowing inbuilt bias and groupthink to affect the outcomes.”

Maria Pranzo, director of development at The Alpha Workshops, wrote, “Perhaps an oversight committee – a new branch of the FCC made up of coders – can monitor new media using algorithms of their own, sussing out suspicious programming – a watchdog group to keep the rest of us safely clicking.”

Fredric Litto, emeritus professor of communications at the University of São Paulo, Brazil, said, “If there is, built-in, a manner of overriding certain classifications into which one falls, that is, if one can opt out of a ‘software-determined’ classification, then I see no reason for society as a whole not taking advantage of it. On the other hand, I have ethical reservations about the European laws that permit individuals to ‘erase’ ‘inconvenient’ entries in their social media accounts. I leave to the political scientists and jurists (like Richard Posner) the question of how to legislate humanely the protection of both the individual and society in general.”

An anonymous postdoctoral fellow in humanities at a major U.S. university commented, “The biases of many, if not most, of the algorithms and databases governing our world are now corporate. The recent debate over whether Facebook’s News Feed algorithm is biased against conservative news in the U.S., for example, does little to address the bias Facebook has in presenting news that is likely to keep users on Facebook, using and producing data for Facebook. A democratic oversight mechanism aimed at addressing the unequal distribution of power between online companies and users could be a system in which algorithms, and the databases they rely upon, are public, legible and editable by the communities they affect.”

Lauren Wagner wrote hopefully about OpenAI, a nonprofit artificial intelligence research company founded in December 2015 with $1 billion in pledged funding from technologists and entrepreneurs including Sam Altman, Jessica Livingston, Elon Musk, Reid Hoffman and Peter Thiel. “Overall, artificial intelligence holds the most promise and risk in terms of impacting people’s lives through the expanding collection and analysis of data. Oversight bodies like OpenAI are emerging to assess the impact of algorithms. OpenAI is a nonprofit artificial intelligence research company. Their goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.”

Some respondents said any sort of formal regulation of algorithms would not be as effective as allowing the marketplace to initiate debates that inspire improvements.

Writer Richard Oswald commented, “As the service industries use these tools more extensively, they will evolve or face discriminating use by consumers. The secret does not lie in government rules for the algorithms themselves but in competition and free choice allowing consumers to use the best available service and by allowing consumers to openly share their experiences.”

Over the next few years, scrutiny over the real-world impacts of algorithms will increase and organizations will need to defend their application.
Michael Whitaker

Michael Whitaker, vice president of emerging solutions at ICF International, expects the market to self-correct after public input. He wrote, “Algorithms are delivering and will continue to deliver significant value to individuals and society. However, we are in for a substantial near- to mid-term backlash (some justified, some not) that will make things a bit bumpy on the way to a more transparent future with enhanced trust and understanding of algorithm impacts. Over the next few years, scrutiny over the real-world impacts of algorithms will increase and organizations will need to defend their application. Many will struggle and some are likely to be held accountable (reputation or legal liability). This will lead to increased emphasis on algorithm transparency and bias research.”

Respondents said that, whatever the projected efficacy of oversight, attention has to be paid to the long-term consequences of algorithm development.

John B. Keller, director of eLearning at the Metropolitan School District of Warren Township, Indiana, replied, “As algorithms become more complex and move from computational-based operations into predictive operations and perhaps even into decisions requiring moral or ethical judgment, it will become increasingly important that built-in assumptions are transparent to end users and perhaps even configurable. Algorithms are not going to simply use data to make decisions – they are going to make more data about people that will become part of their permanent digital record. We must advocate for benefits of machine-based processes but remain wary, cautious and reflective about the long-term consequences of seemingly innocuous progress of today.”

An anonymous respondent wrote, “A more refined sense of the use of data and algorithms is needed and a critical eye at their outputs to make sure that they are inclusive and relevant to different communities. User testing using different kinds of groups is needed. Furthermore, a more diverse group of creators for these algorithms is needed! If it is all young white men, those who have privilege in this country, then of course the algorithms and data will serve that community. We need awareness of privilege and a more diverse group of creators to be involved.”

Many are pessimistic about the prospects for policy rules and oversight

Is any proposed oversight method really going to be effective? Many have doubts. Their thoughts primarily fall into two categories. There are those who doubt that reliable and effective oversight and regulation can exist in an environment dominated by corporate and government interests, and there are those who believe oversight will not be possible due to the vastness, never-ending growth and complexity of algorithmic systems.

T. Rob Wyatt, an independent network security consultant, wrote, “Algorithms are an expression in code of systemic incentives, and human behavior is driven by incentives. Any overt attempt to manipulate behavior through algorithms is perceived as nefarious, hence the secrecy surrounding AdTech and sousveillance marketing. If they told us what they do with our data we would perceive it as evil. The entire business model is built on data subjects being unaware of the degree of manipulation and privacy invasion. So the yardstick against which we measure the algorithms we do know about is their impartiality. The problem is, no matter how impartial the algorithm, our reactions to it are biased. We favor pattern recognition and danger avoidance over logical, reasoned analysis. To the extent the algorithms are impartial, competition among creators of algorithms will necessarily favor the actions that result in the strongest human response, i.e., act on our danger-avoidance and cognitive biases. We would, as a society, have to collectively choose to favor rational analysis over limbic instinctive response to obtain a net positive impact of algorithms, and the probability of doing so at the height of a decades-long anti-intellectual movement is slim to none.”

An anonymous respondent said, “I expect a weak oversight group, if any, which will include primarily old, rich, white men, who may or may not directly represent vested interests especially in ‘intellectual property’ groups. I also expect all sorts of subtle manipulation by the actual organizations that operate these algorithms as well as single bad actors within them, to basically accomplish propaganda and market manipulation. As well as a further promulgation of the biases that already exist within the analog system of government and commerce as it has existed for years. Any oversight must have the ability to effectively end any bad actors, by which I mean fully and completely dismantle companies, and to remove all senior and any other related staff of government agencies should they be found to be manipulating the system or encouraging/allowing systemic discrimination. There would need to be strong representation of the actual population of whatever area they represent, from socioeconomic, education, racial and cultural viewpoints. All of their proceedings should be held within the public eye.”

There are no incentives in capitalism to fight filter bubbles, profiling and the negative effects, and governmental/international governance is virtually powerless.
Dariusz Jemielniak

Dariusz Jemielniak, a professor of management at Kozminski University in Poland and a Wikimedia Foundation trustee, observed, “There are no incentives in capitalism to fight filter bubbles, profiling and the negative effects, and governmental/international governance is virtually powerless.”

John Sniadowski, a systems architect, noted that oversight is difficult if not impossible in a global setting. He wrote, “The huge problem with oversight mechanisms is that globalisation by the internet removes many geopolitical barriers of control. International companies have the resources to find ways of implementing methods to circumvent controls. The more controls are put in place, the more the probability of unintended consequences and loophole searching, the net result being more complex oversight that becomes unworkable.”

Some respondents said these complex, fast-evolving systems will be quite difficult if not impossible to assess and oversee, now and in the future.

Software engineer Joshua Segall said, “We already have the statistical tools today to assess the impact of algorithms, and this will be aided by better data collection. However, assessment will continue to be difficult regardless of algorithms and data because of the complexity of the systems we aim to study.”
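Segall’s claim is easy to substantiate, since the statistical tools involved are standard. A hedged sketch of such an assessment, using invented numbers, might compare an algorithm’s positive-outcome rates across two groups with the disparate impact ratio and a two-proportion z-test.

# Illustrative audit with standard statistics; all numbers are invented.
import math

def audit(pos_a, n_a, pos_b, n_b):
    rate_a, rate_b = pos_a / n_a, pos_b / n_b
    # Disparate impact ratio; the "80% rule" flags values below 0.8.
    di_ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
    # Two-proportion z-test for a significant difference in rates.
    pooled = (pos_a + pos_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return di_ratio, (rate_a - rate_b) / se

di, z = audit(pos_a=420, n_a=600, pos_b=240, n_b=600)
print(f"disparate impact ratio: {di:.2f}")  # 0.57 -- fails the 80% rule
print(f"z-statistic: {z:.1f}")              # ~10.4 -- not plausibly chance

As Segall notes, the hard part is not the arithmetic but everything around it: obtaining the outcome data in the first place and understanding the system that produced it.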

An anonymous senior research scholar at a major university’s digital civil society lab commented, “This is a question of the different paces at which tech (algorithmic) innovation and regulation work. Regulation and governing of algorithms lags way behind writing them and setting them loose on ever-growing (already discriminatory) datasets. As deep learning (machine learning) exponentially increases, the differential between algorithmic capacity and regulatory understanding and its inability to manage the unknown will grow vaster.”

An anonymous respondent warned, “Who are these algorithms accountable to, once they are out in the world and doing their thing? They don’t always behave in the way their creators predicted. Look at the stock market trading algorithms, the ones that have names like ‘The Knife.’ These things move faster than human agents ever could and, collectively, through their interactions with each other, they create a non-random set of behaviors that cannot necessarily be predicted ahead of time, at time zero. How can we possibly know well enough how these interactions among algorithms will all turn out? Can we understand these interactions well enough to correct problems with algorithms when injustice invariably arises?”
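The emergent dynamics this respondent describes can be suggested with a toy, purely illustrative simulation: two individually trivial trading rules, coupled only through their own price impact, settle into an oscillation that neither rule contains on its own. (The traders, weights and prices below are invented for illustration.)

# Toy sketch of emergent behavior from interacting algorithms.
# Each rule is trivial in isolation; the oscillation appears only
# when they trade against each other through the shared price.

def momentum_trader(prices):
    # Buys when the price just rose, sells when it just fell.
    return 1 if prices[-1] > prices[-2] else -1

def contrarian_trader(prices):
    # Bets against the most recent move.
    return -momentum_trader(prices)

prices = [100.0, 101.0]
impact = {momentum_trader: 0.4, contrarian_trader: 0.6}  # unequal sizes

for _ in range(10):
    net_demand = sum(weight * trader(prices)
                     for trader, weight in impact.items())
    prices.append(prices[-1] + net_demand)  # demand moves the price

print([round(p, 1) for p in prices])
# [100.0, 101.0, 100.8, 101.0, 100.8, ...] -- a persistent oscillation
# that is written nowhere in either trader's code.

Real trading systems involve thousands of interacting agents at far higher speeds, which is precisely why their collective behavior resists prediction “at time zero.”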

Another anonymous respondent noted, “Algorithms affect quantitative factors more than relational factors. This has had a huge effect already on our society in terms of careers and in the shadow work that individuals now have to do. Algorithms are too complicated to ever be transparent or to ever be completely safe. These factors will continue to influence the direction of our culture.”

And an anonymous participant in this canvassing observed that the solution might be more algorithms: “I expect meta-algorithms will be developed to try to counter the negatives of algorithms,” he said. “Until those have been developed and refined, I can’t see there being overall good from this.”