Code-Dependent: Pros and Cons of the Algorithm Age

Theme 4: Biases exist in algorithmically organized systems

There are two strands of thinking that tie together here. One is that algorithm creators (code writers), even when they strive for inclusiveness, objectivity and neutrality, build their own perspectives and values into their creations. The other is that the datasets to which algorithms are applied have their own limits and deficiencies. Even datasets with millions or billions of data points do not capture the fullness of people’s lives or the diversity of their experiences. Moreover, the datasets themselves are imperfect because they do not contain input from everyone, or from a representative sample of everyone. This section covers respondents’ answers on both fronts.
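The sampling point can be made concrete with a minimal sketch. All of the numbers and group labels below are invented purely for illustration: when one group dominates a dataset, a statistic computed over “everyone” mostly describes that group, and a system tuned to it serves the under-represented group poorly.

```python
# Toy illustration of a non-representative dataset (hypothetical numbers):
# the population is split evenly, but the collected data is not.
from statistics import mean

observed_commute_minutes = {
    "urban": [12, 15, 18, 14, 16, 13, 17, 15],  # heavily sampled group
    "rural": [45, 50],                           # barely sampled group
}

all_observations = [m for group in observed_commute_minutes.values() for m in group]
print(f"dataset-wide average: {mean(all_observations):.1f} min")  # 21.5 min

for group, minutes in observed_commute_minutes.items():
    print(f"{group} average: {mean(minutes):.1f} min")  # urban 15.0, rural 47.5

# A service tuned to the 21.5-minute "average" user fits the over-sampled urban
# group reasonably well and the under-sampled rural group badly: the model
# inherits the skew of the data it was given, not of the population it serves.
```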

Algorithms reflect the biases of programmers and datasets

Bias and poorly developed datasets are widely recognized as serious problems that technologists say they are working to address; many respondents, however, see them as problems that will not be remedied anytime soon.

Randy Bush, Internet Hall of Fame member and research fellow at Internet Initiative Japan, wrote, “Algorithmic methods have a very long way to go before they can deal with the needs of individuals. So we will all be mistreated as more homogenous than we are.”

Eugene H. Spafford, a professor at Purdue University, said, “Algorithmic decisions can embody bias and lack of adjustment. The result could be the institutionalization of biased and damaging decisions with the excuse of, ‘The computer made the decision, so we have to accept it.’ If algorithms embody good choices and are based on carefully vetted data, the results could be beneficial. To do that requires time and expense. Will the public/customers demand that?”

Irina Shklovski, an associate professor at the IT University of Copenhagen, said, “Discrimination in algorithms comes from implicit biases and unreflective values embedded in implementations of algorithms for data processing and decision-making. There are many possibilities to data-driven task and information-retrieval support, but the expectation that somehow automatic processing will necessarily be more ‘fair’ makes the assumption that implicit biases and values are not part of system design (and these always are). Thus the question is how much agency will humans retain in the systems that will come to define them through data and how this agency can be actionably implemented to support human rights and values.”

An anonymous freelance consultant observed, “Built-in biases (largely in favour of those born to privilege such as Western Caucasian males, and, to a lesser extent, young south-Asian and east-Asian men) will have profound, largely unintended negative consequences to the detriment of everybody else: women, especially single parents, people of colour (any shade of brown or black), the ‘olds’ over 50, immigrants, Muslims, non-English speakers, etc. This will not end well for most of the people on the planet.”

Marc Brenman, managing partner at IDARE, wrote, “The algorithms will reflect the biased thinking of people. Garbage in, garbage out. Many dimensions of life will be affected, but few will be helped. Oversight will be very difficult or impossible.”

Jenny Korn, a race and media scholar at the University of Illinois at Chicago, noted, “The discussion of algorithms should be tied to the programmers programming those algorithms. Algorithms reflect human creations of normative values around race, gender and other areas related to social justice. For example, searching for images of ‘professor’ will produce pictures of white males (including in cartoon format), but to find representations of women or people of color, the search algorithm requires the user to include ‘woman professor’ or ‘Latina professor,’ which reinforces the belief that a ‘real’ professor is white and male. Problematic! So, we should discuss the (lack of) critical race and feminist training of the people behind the algorithm, not just the people using the algorithm.”

Adrian Hope-Bailie, standards officer at Ripple, noted, “One of the greatest challenges of the next era will be balancing protection of intellectual property in algorithms with protecting the subjects of those algorithms from unfair discrimination and social engineering.”

David Wuertele, a software engineer at a major company developing autonomous vehicles, noted, “I am optimistic that the services engineers build are capable of being free of discrimination, and engineers will try to achieve that ideal. I expect that we will have some spectacular failures as algorithms get blamed for this or that social tragedy, but I believe that we will have an easier time fixing those services than we will have fixing society.”

Eric Marshall, a systems architect, said, “Algorithms are tweaked or replaced over time. Similar to open source software, the good will outweigh the bad, if the right framework is found.”

Kevin Novak, CEO of 2040 Digital, commented, “Algorithms can lead to filtered results that demonstrate biased or limited information to users. This bias and limitation can lead to opinions or understanding that does not reflect the true nature of a topic, issue or event. Users should have the option to select algorithmic results or natural results.”

Author Paul Lehto observed, “Unless the algorithms are essentially open source and as such can be modified by user feedback in some fair fashion, the power that likely algorithm-producers (corporations and governments) have to make choices favorable to themselves, whether in internet terms of service or adhesion contracts or political biases, will inject both conscious and unconscious bias into algorithms.”

An anonymous respondent commented, “If you start at a place of inequality and you use algorithms to decide what is a likely outcome for a person/system, you inevitably reinforce inequalities. For example, if you were really willing to use the data that exist right now, we would tell African-American men from certain metro areas that they should not even consider going to college – it won’t ‘pay off’ for them because of wage discrimination post-schooling. Is this an ethical position? No. But is it what a computer would determine to be the case based on existing data? Yes.”
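The feedback loop this respondent describes can be sketched in a few lines of code. Everything in the example below is hypothetical (invented wage figures and neutral group labels); the point is only that a predictor fit to historically discriminatory outcomes faithfully reproduces that discrimination as a “recommendation.”

```python
# Hypothetical sketch: a naive "degree payoff" model fit to historical wages
# reproduces whatever wage discrimination those records already contain.
# All figures and group labels are invented for illustration only.

historical_records = [
    # (group, has_degree, observed_annual_wage)
    ("group_a", True, 62_000), ("group_a", False, 38_000),
    ("group_a", True, 65_000), ("group_a", False, 40_000),
    ("group_b", True, 46_000), ("group_b", False, 36_000),  # degree "pays off" less here
    ("group_b", True, 47_000), ("group_b", False, 35_000),  # because of wage discrimination
]

def predicted_degree_premium(group: str) -> float:
    """Average wage with a degree minus average wage without one, for a group."""
    with_degree = [w for g, d, w in historical_records if g == group and d]
    without = [w for g, d, w in historical_records if g == group and not d]
    return sum(with_degree) / len(with_degree) - sum(without) / len(without)

for group in ("group_a", "group_b"):
    print(group, "predicted payoff of a degree:", predicted_degree_premium(group))
# group_a: +24500.0, group_b: +11000.0

# An "objective" recommendation built on this model would steer group_b away
# from college -- not because a degree is worth less to them, but because past
# discrimination is baked into the data the model learned from.
```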

And another respondent, Jeff Kaluski, predicted that trying to eliminate all bias may cause new problems, commenting, “New algs will start by being great, then a problem will emerge. The creator will be sued in the U.S. The alg will be corrected. It won’t be good enough for the marginalized group. Someone else will create a better alg that was ‘written in part by marginalized group’ then we’ll have a worse alg than the original+correction.”

Lisa Heinz, a doctoral student at Ohio University, said, “Those of us who learn and work in human-computer areas of study will need to make sure our concerns about discrimination and the exclusionary nature of the filter-bubble are addressed in the oversight mechanisms of algorithm development. This means that all voices, genders and races need to be incorporated into the development of algorithms to prevent even unintentional bias. Algorithms designed and created only by young white men will always benefit young white men to the exclusion of all others.”

Following are additional comments by anonymous respondents regarding bias:

  • “The rise of unfounded faith in algorithmic neutrality coupled with spread of big data and AI will enable programmer bias to spread and become harder to detect.”
  • “The positives of algorithmic analysis are largely about convenience for the comfortable; the negatives vastly outweigh them in significance.”
  • “Bias is inherent in algorithms. This will only function to make humans more mechanical, and those who can rig algorithms to increase inequality and unfairness will, of course, prevail.”
  • “Algorithms value efficiency over correctness or fairness, and over time their evolution will continue the same priorities that initially formulated them.”
  • “Algorithms can only reflect our society back to us, so in a feedback loop they will also reflect our prejudices and exacerbate inequality. It’s very important that they not be used to determine things like job eligibility, credit reports, etc.”
  • “Algorithms purport to be fair, rational and unbiased but just enforce prejudices with no recourse.”
  • “Poor algorithms in justice systems actually preserve human bias instead of mitigating it. As long as algorithms are hidden from public view, they can pose a great danger.”
  • “How can we expect algorithms designed to maximize ‘efficiency’ (which is an inherently conservative activity) to also push underlying social reform?”

Algorithms depend upon data that is often limited, deficient or incorrect

Some respondents made the case that the datasets on which algorithmic decisions are based may exclude some groups of people, eliminate consumer choice and fail to recognize exceptions. They may contain limited, skewed or incorrect details, and analysis built on them can cause harm. An anonymous respondent noted, “Until we begin to measure what we value rather than valuing what we measure, any insights we may gain from algorithms will be canceled out by false positives caused by faulty or incomplete data.”

An anonymous senior program manager at Microsoft observed, “Because of inherent bias (due to a lack of diversity), many algorithms will not fully reflect the complexity of the problems they are trying to address and solutions will tend to sometimes neglect important factors. Unfortunately, it will take time until biases (or simply short-sighted thinking) baked into these algorithms will get detected. By then the American government will have banned innocent people from boarding planes, insurers will have raised premiums for the wrong people, and ‘predictive crime prevention’ will have gotten out of hand.”

An anonymous respondent commented, “Automated systems can never address the complexity of human interaction with the same degree of precision as a person would.” And artist Masha Falkov said, “Real life does not always mimic mathematics. Algorithms have a limited number of variables, and often life shows that it needs to factor in extra variables. There should always be feedback in order to develop better variables and human interaction when someone falls through the cracks of the new normalcy as defined by the latest algorithm. A person may be otherwise a good person in society, but they may be judged for factors over which they do not have any control.”
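Falkov’s point about limited variables can be illustrated with a toy scoring function. The variables, weights and scenario below are entirely hypothetical; the sketch only shows that whatever a model cannot see, it cannot weigh.

```python
# Hypothetical two-variable score: the model judges only what it was given.
def credit_score(missed_payments: int, years_employed: float) -> int:
    """Toy rule: fewer missed payments and longer employment score higher."""
    return max(0, 100 - 20 * missed_payments + int(5 * years_employed))

# Applicant whose payments lapsed during an uninsured medical emergency --
# a factor outside their control and outside the model's inputs.
print(credit_score(missed_payments=3, years_employed=6))  # 70
print(credit_score(missed_payments=0, years_employed=6))  # 130

# The two applicants may be equally reliable going forward, but the missing
# context that would change the judgment was never encoded as a variable.
```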

Randy Albelda, an economics professor at the University of Massachusetts-Boston, replied, “My research is on poor people. I’ve been doing this for a long time (close to 30 years). And no matter how much information, data, empirical evidence that is presented about poor people, we still have horrible anti-poverty policies, remarkable misconceptions about poor people, and lots more poor people. Collecting and analyzing data does not ‘set us free.’ ‘Facts’ are convenient. Political economic forces shape the way we understand and use ‘facts/data’ as well as technology. If we severely underfund health care or much of health care dollars get sucked up by insurance companies, algorithms will be used to allocate insufficient dollars on patients. It will not improve health care.”

Will Kent, an e-resources specialist on the staff at Loyola University-Chicago, observed, “Any amount/type of discrimination could occur. It could be as innocent as a slip-up in the code or a mistranslation. It could be as nefarious as deliberate suppression, obfuscation or lie of omission.”

An anonymous respondent said, “I don’t think we understand intersectionality enough to engineer it in an algorithm. As someone who is LGBTQ, and a member of a small indigenous group who speaks a minority language, I have already encountered so many ‘blind spots’ online – but who do you tell? How do you approach the algorithm? How do you influence it without acquiescing?”

Hume Winzar, an associate professor of business at Macquarie University in Sydney, Australia, wrote, “Banks, governments, insurance companies, and other financial and service providers will use whatever tools they can to focus on who are risks. It’s all about money and power.”

M.E. Kabay, a professor of computer information systems at Norwich University in Vermont, said, “A dictatorship like that in Orwell’s 1984 would love to have control over the algorithms selecting information for the public or for subsectors of the public. If information is power, then information control is supreme power. Warning bells should sound when individualized or group information bubbles generated by the selective algorithms diverge from some definition of reality. Supervisory algorithms should monitor assertions or information flows that deviate from observable reality and documentary evidence; the question remains, however, of whose reality will dominate.”

An anonymous professor at the University of California-Berkeley observed, “Algorithms are, by definition, impersonal and based on gross data and generalized assumptions. The people writing algorithms, even those grounded in data, are a non-representative subset of the population. The result is that algorithms will be biased toward what their designers believe to be ‘normal.’ One simple example is the security questions now used by many online services. E.g., what is your favorite novel? Where did your spouse go to college? What was your first car? What is your favorite vacation spot? What is the name of the street you grew up on?”

An anonymous respondent commented, “There is a lot of potential for abuse here that we have already seen in examples such as sentencing for nonviolent offences. Less-well-off and minority offenders are more likely to serve sentences or longer sentences than others whose actions were the same. We also see that there is a lot of potential for malicious behaviour similar to the abuses corrected previously when nasty neighbours would spread lies and get their victims reclassified for auto or other insurance rates.”
