Experts Doubt Ethical AI Design Will Be Broadly Adopted as the Norm Within the Next Decade

1. Worries about developments in AI

It would be quite difficult – some might say impossible – to design broadly adopted ethical AI systems. A share of the experts responding noted that ethics are hard to define, implement and enforce. They said context matters when it comes to ethical considerations. Any attempt to fashion ethical rules generates countless varying scenarios in which applications of those rules can be messy. The nature and relative power of the actors in any given scenario also matter. Social standards and norms evolve and can become wholly different as cultures change. Few people have much education or training in ethics. Additionally, good and bad actors exploit loopholes and gray areas where ethical rules aren’t crisp, so workarounds, patches or other remedies are often created with varying levels of success.

The experts who expressed worries also invoked governance concerns. They asked: Whose ethical systems should be applied? Who gets to make that decision? Who has responsibility to care about implementing ethical AI? Who might enforce ethical regimes once they are established? How?

A large number of respondents argued that geopolitical and economic competition are the main drivers for AI developers, while moral concerns take a back seat. A share of these experts said creators of AI tools work in groups that have little or no incentive to design systems that address ethical concerns.

Some respondents noted that, even if workable ethics requirements might be established, they could not be applied or governed because most AI design is proprietary, hidden and complex. How can harmful AI “outcomes” be diagnosed and addressed if the basis for AI “decisions” cannot be discerned? Some of these experts also noted that existing AI systems and databases are often used to build new AI applications, which means the biases and ethically troubling aspects of current systems are being designed into the new systems. They said diagnosing and unwinding these pre-existing problems may be difficult, if not impossible.

It is difficult to define ‘ethical’ AI

A portion of these experts infused their answers with questions that amount to this overarching question: How can ethical standards be defined and applied for a global, cross-cultural, ever-evolving, ever-expanding universe of diverse black-box systems in which bad actors and misinformation thrive?

A selection of respondents’ comments on this broad topic is organized below under these subheadings: 1) It can be hard to agree as to what constitutes ethical behavior. 2) Humans are the problem: Whose ethics? Who decides? Who cares? Who enforces? 3) Like all tools, AI can be used for good or ill, which makes standards-setting a challenge. 4) Further AI evolution itself raises questions and complications.

Stephen Downes, senior research officer for digital technologies with the National Research Council of Canada, observed, “The problem with the application of ethical principles to artificial intelligence is that there is no common agreement about what those are. While it is common to assume there is some sort of unanimity about ethical principles, this unanimity is rarely broader than a single culture, profession or social group. This is made manifest by the ease with which we perpetuate unfairness, injustice and even violence and death to other people. No nation is immune.

“Compounding this is the fact that contemporary artificial intelligence is not based on principles or rules. Modern AI is based on applying mathematical functions on large collections of data. This type of processing is not easily shaped by ethical principles; there aren’t ‘good’ or ‘evil’ mathematical functions, and the biases and prejudices in the data are not easily identified nor prevented. Meanwhile, the application of AI is underdetermined by the outcome; the same prediction, for example, can be used to provide social support and assistance to a needy person or to prevent that person from obtaining employment, insurance or financial services.

“Ultimately, our AI will be an extension of ourselves, and the ethics of our AI will be an extension of our own ethics. To the extent that we can build a more ethical society, whatever that means, we will build more ethical AI, even if only by providing our AI with the models and examples it needs in order to be able to distinguish right from wrong. I am hopeful that the magnification of the ethical consequences of our actions may lead us to be more mindful of them; I am fearful that they may not.”

Kenneth A. Grady, adjunct professor at Michigan State University College of Law and editor of The Algorithmic Society on Medium, said, “Getting those creating AI to use it in an ‘ethical’ way faces many hurdles that society is unlikely to overcome in the foreseeable future. In some key ways, regulating AI ethics is akin to regulating ethics in society at large. AI is a distributed and relatively inexpensive technology. I can create and use AI in my company, my research lab or my home with minimal resources. That AI may be quite powerful. I can unleash it on the world at no cost.

“Assuming that we could effectively regulate it, we face another major hurdle: What do we mean by ‘ethical?’ Putting aside philosophical debates, we face practical problems in defining ethical AI. We do not have to look far to see similar challenges. During the past few years, what is or is not ethical behavior in U.S. politics has been up for debate. Other countries have faced similar problems.

“Even if we could decide on a definition [for ethics] in the U.S., it would likely vary from the definitions used in other countries. Given AI’s ability to fluidly cross borders, regulating AI would prove troublesome. We also will find that ethical constraints may be at odds with other self-interests. Situational ethics could easily arise when we face military or intelligence threats, economic competitive threats, and even political threats.

“Further, AI itself presents some challenges. Today, much of what happens in some AI systems is not known to the creators of the systems. This is the black-box problem. Regulating what happens in the black box may be difficult. Alternatively, banning black boxes may hinder AI development, putting our economic, military or political interests at risk.”

Ryan Sweeney, director of analytics for Ignite Social Media, commented, “The definition of ‘public good’ is important here. How much does intent versus execution matter? Take Facebook, for instance. They might argue that their AI content review platform is in the interest of ‘public good,’ but it continues to fail. AI is only as ethical and wise as those who program it. One person’s racism is another’s free speech. What might be an offensive word to someone might not even be in the programmer’s lexicon.

“I’m sure AI will be used with ethical intent, but ethics require empathy. In order to program ethics, there has to be a definitive right and wrong, but situations likely aren’t that simple and require some form of emotional and contextual human analysis. The success of ethical AI execution comes down to whether or not the programmers literally thought of every possible scenario. In other words, AI will likely be developed and used with ethical intent, but it will likely fall short of what we, as humans, can do. We should use AI as a tool to help guide our decisions, but not rely on it entirely to make those decisions. Otherwise, the opportunity for abuse or unintended consequences will show its face. I’m also sure that AI will be used with questionable intent, as technology is neither inherently good nor bad. Since technology is neutral, I’m sure we will see cases of AI abused for selfish gains or other questionable means and privacy violations. Ethical standards are complicated to design and hard to program.”

It can be hard to agree as to what constitutes ethical behavior

Below is a sampling of expert answers that speak to the broad concerns that ethical behaviors can be hard to define and even more difficult to build into AI systems.

Mark Lemley, director of Stanford University’s Program in Law, Science and Technology, observed, “People will use AI for both good and bad purposes. Most companies will try to design the technology to make good decisions, but many of those decisions are hard moral choices with no great answer. AI offers the most promise in replacing very poor human judgment in things like facial recognition and police stops.”

Marc Brenman, managing member at IDARE, a transformational training and leadership development consultancy based in Washington, D.C., wrote, “As societies, we are very weak on morality and ethics generally. There is no particular reason to think that our machines or systems will do better than we do. Faulty people create faulty systems. In general, engineers and IT people and developers have no idea what ethics are. How could they possibly program systems to have what they do not? As systems learn and develop themselves, they will look around at society and repeat its errors, biases, stereotypes and prejudices. We already see this in facial recognition.

“AI will make certain transactions faster, such as predicting what I will buy online. AI systems may get out of control as they become autonomous. Of what use are humans to them? They may permit mistakes to be made very fast, but the systems may not recognize the consequences of their actions as ‘mistakes.’ For example, if they maximize efficiency, then the Chinese example of social control may dominate.

“When AI systems are paired with punishment or kinetic feedback systems, they will be able to control our behavior. Imagine a pandemic where a ‘recommendation’ is made to shelter in place or wear a mask or stay six feet away from other people. If people are hooked up to AI systems, the system may give an electrical shock to a person who does not implement the recommendation. This will be like all of us wearing shock collars that some of us use on our misbehaving dogs.”

June Anne English-Lueck, professor of anthropology at San Jose State University and a distinguished fellow at the Institute for the Future, said, “AI systems employ algorithms that are only as sound as the premises on which they are built and the accuracy of the data with which they learn. Human ethical systems are complex and contradictory. Such nuances as good for whom and bad for whom are difficult to parse. Smart cities, drawing on systems of surveillance and automated government, need mechanisms of human oversight. Oversight has not been our strong suit in the last few decades, and there is little reason to believe it will be instituted in human-automation interactions.”

Amali De Silva-Mitchell, a futurist and consultant participating in multistakeholder, global internet governance processes, wrote, “Although there are lots of discussions, there are few standards, and those that exist are at a high level or came too late for the hundreds of AI applications already rolled out. These base AI applications will not be reinvented, so there is embedded risk. However, the more discussion there is, the greater the understanding of the existing ethical issues, and that understanding can be seen to be developing, especially as societal norms and expectations change. AI applications have the potential to be beneficial, but the applications have to be managed so as not to cause unintended harms. For global delivery and integrated service, there need to be common standards, transparency and collaboration. Duplication of efforts is a waste of resources.”

Glenn Edens, professor at Thunderbird School of Global Management, Arizona State University, previously a vice president at PARC, observed, “The promise: AI and ML could create a world that is more efficient, wasting less energy or resources providing health care, education, entertainment, food and shelter to more people at lower costs. Being legally blind, I look forward to the day of safe and widely available self-driving cars, for example. Just as the steam engine, electricity, bicycles and personal computers (especially laptops) amplified human capacity, AI and ML hopefully will do the same.

“The concerns: AI and its cousin ML are still in their infancy – and while the technology’s progress is somewhat predictable, the actual human consequences are murky. The promise is great – so was our naive imagination of what the internet would do for humankind. Commercial interests (and thus their deployment of AI and ML) are far more agile and adaptable than either the humans they supposedly serve or the governance systems. Regulation is largely reactionary, rarely proactive – typically, bad things have to happen before frameworks to guide responsible and equitable behavior are written into laws, standards emerge or usage is codified into acceptable norms. It is great that the conversation has started; however, there is a lot of ongoing development in the boring world of enterprise software development that is largely invisible.

“Credit scoring comes to mind as a major potential area of concern – while the credit-scoring firms always position their work as providing consumers more access to financial products, the reality is that we’ve created a system that unfairly penalizes the poor and dramatically limits fair access to financial products at equitable prices. AI and ML will be used by corporations to evaluate everything they do and every transaction, rate every customer and their potential (value), predict demand, pricing, targeting as well as their own employees and partners – while this can lead to efficiency, productivity and creation of economic value, a lot of it will lead to segmenting, segregation, discrimination, profiling and inequity. Imagine a world where pricing is different for everyone from one moment to the next, and these predictive systems can transfer huge sums of value in an instant, especially from the most vulnerable.”

A strategy and planning expert responded, “While I say and believe that, yes, ethical boundaries will be put in place for AI by 2030, I also realize that doing this is going to be incredibly difficult. The understanding of what an AI is doing as it builds and adapts its understandings and approaches rather quickly gets to a point where human knowing and keeping up gets left behind. The how and why something was done or recommended can be unknowable. Also, life and the understanding of right and wrong or good-ish and bad-ish can be fluid for people, as things swing to accommodate the impacts on the human existence and condition as well as livable life on our planet. Setting bounds and limitations has strong value, but being able to understand when things are shifting out of areas that are comfortable or have introduced a new realization for a need to correct for unintended consequences is needed. But bounds around bias need to be considered and worked through before setting ethical limitations in place.”

A vice president at a major global company wrote, “AI is too distributed a technology to be effectively governed. It is too easily accessible to any individual, company or organization with reasonably modest resources. That means that unlike, say, nuclear or bioweapons, it will be almost impossible to govern, and there always will be someone willing to develop the technology without regard to ethical consequences.”

Wendy M. Grossman, a UK-based science writer, author of “net.wars” and founder of the magazine The Skeptic, predicted, “The distribution of this will be uneven. I’ve just read Jane Mayer’s piece in The New Yorker on poultry-packing plants, and it provides a great example of why it’s not enough to have laws and ethics; you must enforce them and give the people you’re trying to protect sufficient autonomy to participate in enforcing them. I think ethical/unethical AI will be unevenly distributed. It will all depend on what the society into which the technology is being injected will accept and who is speaking. At the moment, we have two divergent examples:

  1. AI applications whose impact on most people’s lives appears to be in refusing them access to things – probation in the criminal justice system, welfare in the benefits system, credit in the financial system.
  2. AI systems that answer questions and offer help (recommendation algorithms, Siri, Google search, etc.).

“But then what we have today isn’t AI as originally imagined by the Dartmouth group. We are still a very long way from any sort of artificial general intelligence with any kind of independent autonomy. The systems we have depend for their ethics on two things: access to the data necessary to build them and the ethics of the owner. It isn’t AI that needs ethics, it’s the owners.”

Glynn Rogers, retired, previously senior principal engineer and a founding member at the CSIRO Centre for Complex Systems Science, said, “AI and its successors are potentially so powerful that we have no choice but to ensure attention to ethics. The alternative would be to hand over control of our way of life to a class of developers and implementors who are either focused on short-term, shortsighted interests or who have some form of political agenda, particularly ‘state actors.’ The big question is how to ensure this. A regulatory framework is part of the answer, but I suspect that a major requirement is to change the culture of the AI industry. Rather than developing technologies simply for the sake of it, or to publish clever papers, there needs to be a cultural environment in which developers see it as an inherent part of their task to consider the potential social and economic impacts of their activities, and an employment framework that does not seek to repress this. Perhaps moral and political philosophy should be part of the education of AI developers.”

Alexandra Samuel, technology writer, researcher, speaker and regular contributor to the Wall Street Journal and Harvard Business Review, wrote, “Without serious, enforceable international agreements on the appropriate use and principles for AI, we face an almost inevitable race to the bottom. The business value of AI has no intrinsic dependence on ethical principles; if you can make more money with AIs that prioritize the user over other people, or that prioritize business needs over end users, then companies will build AIs that maximize profits over people. The only possible way of preventing that trajectory is with national policies that mandate or proscribe basic AI principles, and those kinds of national policies are only possible with international cooperation; otherwise, governments will be too worried about putting their own countries’ businesses at a disadvantage.”

Valerie Bock, VCB Consulting, former Technical Services Lead at Q2 Learning, commented, “I don’t think we’ve developed the philosophical sophistication in the humans who design AI sufficiently to expect them to be able to build ethical sophistication into their software. Again and again, we are faced with the ways our own unconscious biases pop up in our creations. It is turning out that we do not understand ourselves or our motivations as well as we would like to imagine we might. Work in AI helps lay some of this out for us, aiding us in a quest [that] humanity has pursued for millennia. A little humility based on what we are learning is in order.”

The director of a military center for strategy and technology said, “Most AI will attempt to embed ethical concerns at some level. It is not clear how ‘unbiased’ AI can be created. Perfectly unbiased training datasets don’t exist, and, due to human biases being an inherent part of interactions, such a goal may be unobtainable. As such, we may see gender or racial biases in some training datasets, which will spill over into operational AI systems, in spite of our efforts to combat this.”

Alan S. Inouye, director of the Office for Information Technology Policy at the American Library Association, responded, “I don’t see people or organizations setting out in a nefarious path in their use of AI. But of course, they will use it to advance their missions and goals and, in some sense, employ ‘local’ ethics. But ethics is neither standardized nor additive across domains. What is ethics across AI systems? It is like asking, ‘What is cybersecurity across society?’”

Maggie Jackson, former Boston Globe columnist and author of “Distracted: Reclaiming Our Focus in a World of Lost Attention,” wrote, “I am deeply concerned by how little we understand of what AI algorithms know or how they know it. This black-box effect is real and leads to unintended impact. Most importantly, in the absence of true understanding, assumptions are held up as the foundation of current and future goals. There should be far greater attention paid to the hidden and implicit value systems that are inherent in the design and development of AI in all forms. An example: robot caregivers, assistants and tutors are being increasingly used in caring for the most vulnerable members of society despite known misgivings among scientist-roboticists, ethicists and users, both potential and current. It’s highly alarming that the robots’ morally dubious façade of care is increasingly seen as a good-enough substitute for the blemished yet reciprocal care carried out by humans.

“New ethical AI guidelines that emphasize transparency are a good first step in trying to ensure that care recipients and others understand who/what they are dealing with. But profit-driven systems, the hubris of inventors, humans’ innate tendency to try to relate to any objects that seem to have agency, and other forces combine to work against the human skepticism that is needed if we are to create assistive robots that preserve the freedom and dignity of the humans who receive their care.”

Alan D. Mutter, a consultant and former Silicon Valley CEO, said, “AI is only as smart and positive as the people who train it. We need to spend as much time on the moral and ethical implementation of AI as we do on hardware, software and business models. Last time I checked, there was no code of ethics in Silicon Valley. We need a better moral barometer than the NASDAQ index.”

Fred Baker, board member of the Internet Systems Consortium and longtime IETF leader, commented, “I would like to see AI be far more ethical than it is. That said, human nature hasn’t changed, and the purposes to which AI is applied have not fundamentally changed. We may talk about it more, but I don’t think AI ethics will ultimately change.”

Randall Mayes, a technology analyst at TechCast Global, observed, “The standardization of AI ethics concerns me because the American, European and Chinese governments and Silicon Valley companies have different ideas about what is ethical. How AI is used will depend on your government’s hierarchy of values among economic development, international competitiveness and social impacts.”

Jim Witte, director of the Center for Social Science Research at George Mason University, responded, “The question assumes that ethics and morals are static systems. With developments in AI, there may also be an evolution of these systems such that what is moral and ethical tomorrow may be very different from what we see as moral and ethical today.”

Yves Mathieu, co-director at Missions Publiques, based in Paris, France, wrote, “Ethical AI will require legislation like the European [GDPR] legislation to protect privacy rights on the internet. Some governments will take measures, but not all will, as is the case today in regard to the production, marketing and usage of guns. There might be an initiative by some corporations, but there will be a need for engagement of the global chain of production of AI, which will be a challenge if some of the production is coming from countries not committed to the same ethical principles. Strong economic sanctions on nonethical AI production and use may be effective.”

Amy Sample Ward, CEO of NTEN: The Nonprofit Technology Network, said, “There’s no question whether AI will be used in questionable ways. Humans do not share a consistent and collective commitment to ethical standards of any technology, especially not with artificial intelligence. Creating standards is not difficult, but accountability to them is very difficult, especially as government, military and commercial interests regularly find ways around systems of accountability. What systems will be adopted on a large scale to enforce ethical standards and protections for users? How will users have power over their data? How will user education be invested in for all products and services? These questions should guide us in our decision-making today so that we have more hope of AI being used to improve or benefit lives in the years to come.”

Dan McGarry, an independent journalist based in Vanuatu, noted, “Just like every other algorithm ever deployed, AI will be a manifestation of human bias and the perspective of its creator. Facebook’s facial-recognition algorithm performs abysmally when asked to identify Black faces. AIs programmed in the affluent West will share its strengths and weaknesses. Likewise, AIs developed elsewhere will share the assumptions and the environment of their creators. They will not be images of them; they will be products of them and recognisable as such.”

Abigail De Kosnik, associate professor and director of the Center for New Media at the University of California-Berkeley, said, “I don’t see nearly enough understanding in the general public, tech workers or in STEM students about the possible dangers of AI – the ways that AI can harm and fail society. I am part of a wave of educators trying to introduce more ethics training and courses into our instruction, and I am hopeful that will shift the tide, but I am not optimistic about our chances. AI that is geared toward generating revenue for corporations will nearly always work against the interests of society.”

Irina Raicu, director of the Internet Ethics program at the Markkula Center for Applied Ethics, observed, “The conversation around AI ethics has been going on for several years now. However, what seems to be obvious among those who have been a part of it for some time has not trickled down into the curricula of many universities who are training the next generation of AI experts. Given that, it looks like it will take more than 10 years for ‘most of the AI systems being used by organizations of all sorts to employ ethical principles focused primarily on the public good.’ Also, many organizations are simply focused primarily on other goals – not on protecting or promoting the public good.”

A lawyer and former law school dean who specializes in technology issues wrote, “AI is an exciting new space, but it is unregulated and, at least in early stages, will evolve as investment and monetary considerations direct. It is sufficiently known that there are no acknowledged ethical standards and probably won’t be until beyond the time horizon you mention (2030). During that time, there will be an accumulation of ‘worst-case scenarios,’ major scandals on its use, a growth in pernicious use that will offend common sense and community moral and ethical standards. Those occasions and situations will lead to a gradual and increasing demand for regulation, oversight and ethical policies on use and misuse. But by whom (or what)? Who gets to impose those ethical prescriptions – the industries themselves? The government?”

The director of a public policy center responded, “I see a positive future for AI in the areas of health and education. However, there are ethical challenges here, too. Will the corporations that access and hold this data use it responsibly? What will be the role of government? Perhaps AI can help the developing world deal with climate change and water resources, but again, I see a real risk in the areas of equitable distribution, justice and privacy protections.”

Humans are the problem: Whose ethics? Who decides? Who cares? Who enforces?

A number of the experts who have concerns about the future of ethical AI raised issues around the fundamental nature of people. Flawed humans necessarily will be in the thick of these issues. Moreover, some experts argued that humans will have to create the governance systems overseeing the application of AI and judging how applications are affecting societies. These experts also asserted that there will always be fundamentally unethical people and organizations that will not adopt such principles. Further, some experts mentioned the fact that in a globally networked age even lone wolves can cause massive problems.

Leslie Daigle, a longtime leader in the organizations building the internet and making it secure, noted, “My biggest concern with respect to AI and its ethical use has nothing to do with AI as a technology and everything to do with people. Nothing about the 21st century convinces me that we, as a society, understand that we are interdependent and need to think of something beyond our own immediate interests. Do we even have a common view of what is ethical?

“Taking one step back from the brink of despair, the things I’d like to see AI successfully applied to, by 2030, include things like medical diagnoses (reading x-rays, etc.). Advances there could be monumental. I still don’t want my fridge to order my groceries by 2030, but maybe that just makes me old? :-)”

Tracey P. Lauriault, a professor expert in critical media studies and big data based at Carleton University, Ottawa, Canada, commented, “Automation, AI and machine learning (ML) used in traffic management as in changing the lights to improve the flow of traffic, or to search protein databases in big biochemistry analytics, or to help me sort out ideas on what show to watch next or books to read next, or to do land-classification of satellite images, or even to achieve organic and fair precision agriculture, or to detect seismic activity, the melting of polar ice caps, or to predict ocean issues are not that problematic (and its use, goodness forbid, to detect white-collar crime in a fintech context is not a problem).

“If, however, the question is about social welfare intake systems, biometric sorting, predictive policing and border control, etc., then we are getting into quite a different scenario. How will these be governed and scrutinized? Who will be accountable for decisions about the procurement and use of these technologies or the intelligence derived from them?

“They will reflect our current forms of governance, and these seem rather biased and unequal. If we can create a more just society then we may be able to have more-just AI/ML.”

Leiska Evanson, futurist and consultant, wrote, “Humanity has biases. Humans are building the algorithms around the machine learning masquerading as AI. The ‘AI’ will have biases. It is impossible to have ethical AI (really, ML) if the ‘parent’ is biased. Companies such as banks are eager to use ML to justify not lending to certain minorities who simply do not create profit for them. Governments want to attend to the needs of the many before the few. The current concepts of AI are all about feeding more data to an electromechanical bureaucrat to rubberstamp, with no oversight from humans with competing biases.”

A director of standards and strategy at a major technology company commented, “I believe that people are mostly good and that the intention will be to create ethical AI. However, an issue that I have become aware of is the fact that we all have intrinsic biases, unintentional biases, that can be exposed in subtle and yet significant ways. Consider that AI systems are built by people, and so they inherently work according to how the people that built them work. Thus, these intrinsic, unintentional biases are present in these systems. Even learning systems will ‘learn’ in a biased way. So, the interesting research question is whether or not we learn in a way that overcomes our intrinsic biases.”

Jean Seaton, director of the Orwell Foundation and professor of media history at the University of Westminster, responded, “The ethics question also begs the question: Who would create and police such standards internationally? We need some visionary leaders and some powerful movements. The last big ‘ethical’ leap came after World War II. The Holocaust and World War II produced a set of institutions that in time led to the notion of human rights. That collective ethical step change (of course compromised but nevertheless immensely significant) was embodied in institutions with some collective authority. So that is what has to happen over AI. People have to be terrified enough, leaders have to be wise enough, people have to be cooperative enough, tech people have to be forward thinking enough, responsibility has to be felt vividly, personally, overwhelmingly enough – to get a set of rules passed and policed.”

Cliff Lynch, director at the Coalition for Networked Information, wrote, “Efforts will be made to create mostly ‘ethical’ AI applications by the end of the decade, but please understand that an ethical AI application is really just software that’s embedded in an organization that’s doing something; it’s the organization rather than the software that bears the burden to be ethical. There will be some obvious exceptions for research, some kinds of national security, military and intelligence applications, market trading and economic prediction systems – many of these things operate under various sorts of ‘alternative ethical norms’ such as the ‘laws of war’ or the laws of the marketplace. And many efforts to unleash AI (really machine-learning) on areas like physics or protein-folding will fall outside all of the discussion of ‘ethical AI.’

“We should resist the temptation to anthropomorphize these systems. (As the old saying goes, ‘machines hate that.’) Don’t attribute agency and free will to software. …The problems here are people and organizations, not code! … A lot of the discussion of ethical AI is really misguided. It’s clear that there’s a huge problem with machine learning and pattern-recognition systems, for example, that are trained on inappropriate, incomplete or biased data (or data that reflect historical social biases) or where the domain of applicability and confidence of the classifiers or predictors aren’t well-demarcated and understood. There’s another huge problem where organizations are relying on (often failure-prone and unreliable, or trained on biased data, or otherwise problematic) pattern recognition or prediction algorithms (again machine-learning-based, usually) and devolving too much decision-making to these. Some of the recent facial-recognition disasters are good examples here. There are horrible organizational and societal practices that appeal to computer-generated decisions as correct, unbiased, impartial or transparent and that place unjustified faith and authority in this kind of technology. But framing this in terms of AI ethics rather than bad human decision-making, stupidity, ignorance, wishful thinking, organizational failures and attempts to avoid responsibility seems wrong to me. We should be talking instead about the human and organizational ethics of using machine-learning and prediction systems for various purposes, perhaps.

“I think we’ll see various players employ machine learning, pattern recognition and prediction in some really evil ways over the coming decade. Coupling this to social media or other cultural motivation and reward mechanisms is particularly scary. An early example here might be China’s development of its ‘social capital’ rewards and tracking system. I’m also frightened of targeted propaganda/advertising/persuasion systems. I’m hopeful we’ll also see organizations and governments in at least a few cases choose not to use these systems or to try to use them very cautiously and wisely and not delegate too much decision-making to them.

“It’s possible to make good choices here, and I think some will. Genuine AI ethics seems to be part of the thinking about general-purpose AI, and I think we are a very, very, long way from this, though I’ve seen some predictions to the contrary from people perhaps better informed than I am. The (rather more theoretical and speculative) philosophical and research discussions about superintelligence and about how one might design and develop such a general-purpose AI that won’t rapidly decide to exterminate humanity are extremely useful, important and valid, but they have little to do with the rhetorical social justice critiques that confuse algorithms with the organizations that stupidly and inappropriately design, train and enshrine and apply them in today’s world.”

Deirdre Williams, an independent researcher expert in global technology policy, commented, “I can’t be optimistic. We, the ‘average persons,’ have been schooled in preceding years toward selfishness, individualism, materialism and the ultimate importance of convenience. These values create the ‘ethos.’ At the very root of AI are databases, and these databases are constructed by human beings who decide which data are to be collected and how that data should be described and categorised. A tiny human error or bias at the very beginning can balloon into an enormous error of truth and/or justice.”

Alexa Raad, co-founder and co-host of the TechSequences podcast and former chief operating officer at Farsight Security, said, “There is hope for AI in terms of applications in health care that will make a positive difference. But legal/policy and regulatory frameworks almost always lag behind technical innovations. In order to guard against the negative repercussions of AI, we need a policy governance and risk-mitigation framework that is universally adopted. There needs to be an environment of global collaboration for a greater good. Although globalization led to many of the advances we have today (for example, the internet’s design and architecture as well as its multistakeholder governance model), globalization is under attack. What we see across the world is a trend toward isolationism and separatism, as evidenced by political movements such as populism and nationalism and by outcomes such as Brexit. In order to come up with and adopt a comprehensive set of guidelines or framework for the use of AI or risk mitigation for abuse of AI, we would need a global current that supports collaboration. I hope I am wrong, but trends like this need longer than 10 years to run their course and for the pendulum to swing back the other way. By then, I am afraid some of the downsides and risks of AI will already be in play.”

Andrea Romaoli Garcia, an international lawyer actively involved with multistakeholder activities of the International Telecommunication Union and Internet Society, said, “I define ethics as all possible and available choices where the conscience establishes the best option. Values and principles are the limiters that guide the conscience into this choice alongside the purposes; thus, ethics is a process. In terms of ethics for AI, the process for discovering what is good and right means choosing among all possible and available applications to find the one that best applies to the human-centred purposes, respecting all the principles and values that make human life possible.

“The human-centred approach in ethics was first described by the Greek philosopher Socrates in his effort to turn attention from the outside world to the human condition. AI is a cognitive technology that allows greater advances in health, economic, political and social fields. It is impossible to deny how algorithms impact human evolution. Thus, an ethical AI requires that all instruments and applications place humans at the center. Despite the fact that there are some countries building ethical principles for AI, there is a lack of any sort of international instrument that covers all of the fields that guide the development and application of AI in a human-centred approach. AI isn’t model-driven; it has a data-centred approach for highly scalable neural networks. Thus, the data should be selected and classified through human action. Through this human action, sociocultural factors are imprinted on the behavior of the algorithm and machine learning. This justifies the concerns about ethics and also focuses on issues such as freedom of expression, privacy and surveillance, ownership of data and discrimination, manipulation of information and trust, environmental and global warming and also on how the power will be established among society.

“These are factors that determine human understanding and experience. All instruments that are built for ethical AI have different bases, values and purposes depending on the field to which they apply. The lack of harmony in defining these pillars compromises ethics for AI and affects human survival. It could bring new invisible means of exclusion or deploy threats to social peace that will be invisible to human eyes. Thus, there is a need for joint efforts gathering stakeholders, civil society, scientists, governments and intergovernmental bodies to work toward building a harmonious ethical AI that is human-centred and applicable to all nations. 2030 is 10 years from now. We don’t need to wait 10 years – we can start working now. 2020 presents several challenges in regard to technology’s impact on people. Human rights violations are being exposed and values are under threat. This scenario should accelerate efforts at international cooperation to establish a harmonious ethical AI that supports human survival and global evolution.”

Olivier MJ Crépin-Leblond, entrepreneur and longtime participant in the activities of ICANN and IGF, said, “What worries me the most is that some actors in nondemocratic regimes do not see the same ‘norm’ when it comes to ethics. These norms are built on a background of culture and ideology, and not all ideologies are the same around the world. It is clear that, today, some nation-states see AI as another means of conquest and establishing their superiority instead of a means to do good.”

A professor emeritus of social science said, “The algorithms that represent ethics in AI are neither ethical nor intelligent. We are building computer models of social prejudices and structural racism, sexism, ageism, xenophobia and other forms of social inequality. It’s the realization of some of Foucault’s worst nightmares.”

An advocate and activist said, “Most of the large AI convenings to date have been dominated by status quo power elites whose sense of risk, harm and threat is distorted. Largely comprised of elite white men with an excessive faith in technical solutions and a disdain for sociocultural dimensions of risk and remedy. These communities – homogenous, limited experientially, overly confident – are made up of people who fail to see themselves as a risk. As a result, I believe that most dominant outcomes – how ‘ethical’ is defined, how ‘acceptable risk’ is perceived, how ‘optimal solutions’ will be determined – will be limited and almost certainly perpetuate and amplify existing harms. As you can see, I’m all sunshine and joy.”

Glenn Grossman, a consultant in banking analytics at FICO, noted, “It’s necessary for leaders in all sectors to recognize that AI is just the growth of mathematical models and the application of these techniques. We have model governance in most organizations today. We need to keep the same safeguards in place. The challenge is that many business leaders are not good at math! They cannot understand the basics of predictive analytics, models and such. Therefore, they hear ‘AI’ and think of it as some new, cool, do-it-all technology. It is simply math at the heart of it. Man governs how they use math. So, we need to apply ethical standards to monitor and calibrate. AI is a tool, not a solution for everything. Just like the PC ushered in automation, AI can usher in automation in the area of decisions. Yet it is humans that use these decisions and design the systems. So, we need to apply ethical standards to any AI-driven system.”

R. “Ray” Wang, principal analyst, founder and CEO of Silicon Valley-based Constellation Research, noted, “Right now we have no way of enforcing these principles in play. Totalitarian, Chinese, CCP-style AI is the preferred approach for dictators. The question is: Can we require and can we enforce AI ethics? We can certainly require, but the enforcement may be tough.”

Maja Vujovic, a consultant for digital and ICT at Compass Communications, noted, “Ethical AI might become a generally agreed upon standard, but it will be impossible to enforce it. In a world where media content and production, including fake news, will routinely be AI-generated, it is more likely that our expectations around ethics will need to be lowered. Audiences might develop a ‘thicker skin’ and become more tolerant toward the overall unreliability of the news. This trend will not render them more skeptical or aloof but rather more active and much more involved in the generation of news, in a range of ways. Certification mechanisms and specialized AI tools will be developed to deal specifically with unethical AI, as humans will prove too gullible. In those sectors where politics don’t have a direct interest, such as health and medicine, transportation, e-commerce and entertainment, AI as an industry might get more leeway to grow organically, including self-regulation.”

Like all tools, AI can be used for good or ill; that makes standards-setting a challenge

A number of respondents noted that any attempt at rule-making is complicated by the fact that any technology can be used for noble and harmful purposes. It is difficult to design ethical digital tools that privilege the former while keeping the latter in check.

Chris Arkenberg, research manager at Deloitte’s Center for Technology, Media and Telecommunications, noted, “The answer is both good and bad. Technology doesn’t adopt ethical priorities that humans don’t prioritize themselves. So, a better question could be whether society will pursue a more central role of ethics and values than we’ve seen in the past 40 years or so. Arguably, 2020 has shown a resurgent demand for values and principles for a balanced society. If, for example, education becomes a greater priority for the Western world, AI could amplify our ability to learn more effectively. Likewise, with racial and gender biases. But this trend is strongest only in some Western democracies.

“China, for example, places a greater value on social stability and enjoys a fairly monochromatic population. With the current trade wars, the geopolitical divide is also becoming a technological divide that could birth entirely different shapes of AI depending on their origin. And it is now a very multipolar world with an abundance of empowered actors.

“So, these tools lift up many other boats with their own agendas [that] may be less bound by Western liberal notions of ethics and values. The pragmatic assumption might be that many instances of ethical AI will be present where regulations, market development, talent attraction, and societal expectations require them to be so. At the same time, there will likely be innumerable instances of ‘bad AI,’ weaponized machine intelligence and learning systems designed to exploit weaknesses. Like the internet and globalization, the path forward is likely less about guiding such complex systems toward utopian outcomes and more about adapting to how humans wield them under the same competitive and collaborative drivers that have attended the entirety of human history.”

Kenneth Cukier, senior editor at The Economist and coauthor of “Big Data,” said, “Few will set out to use AI in bad ways (though some criminals certainly will). The majority of institutions will apply AI to address real-world problems effectively, and AI will indeed work for that purpose. But if it is facial recognition, it will mean less privacy and risks of being singled out unfairly. If it is targeted advertising, it will be the risk of losing anonymity. In health care, an AI system may identify that some people need more radiation to penetrate the pigment in their skin to get a clearer medical image, but if this means Black people are blasted with higher doses of radiation and are therefore prone to negative side effects, people will believe there is an unfair bias.

“On global economics, a ‘neocolonial’ or ‘imperial’ commercial structure will form, whereby all countries have to become customers of AI from one of the major powers, America, China and, to a lesser extent, perhaps Europe.”

Bruce Mehlman, a futurist and consultant, responded, “AI is powerful and has a huge impact, but it’s only a tool like gunpowder, electricity or aviation. Good people will use it in good ways for the benefit of mankind. Bad people will use it in nefarious ways to the detriment of society. Human nature has not changed and will neither be improved nor worsened by AI. It will be the best of technologies and the worst of technologies.”

Ian Thomson, a pioneer developer of the Pacific Knowledge Hub, observed, “It will always be the case that new uses of AI will raise ethical issues, but over time, these issues will be addressed so that the majority of uses will be ethical. Good uses of AI will include highlighting trends and developments that we are unhappy with. Bad uses will be around using AI to manipulate our opinions and behaviors for the financial gain of those rich enough to develop the AI and to the disadvantage of those less well-off. I am excited by how AI can help us make better decisions, but I am wary that it can also be used to manipulate us.”

A professor of international affairs and economics at a Washington, D.C.-area university wrote, “AI tends to be murky in the way it operates and the kinds of outcomes that it obtains. Consequently, it can easily be used to both good and bad ends without much practical oversight. AI, as it is currently implemented, tends to reduce the personal agency of individuals and instead creates a parallel agent who anticipates and creates needs in accordance with what others think is right. The individual being aided by AI should be able to fully comprehend what it is doing and easily alter how it works to better align with their own preferences. My concerns are not allayed to the extent that the operation of AI, and its potential biases and/or manipulation remain unclear to the user. I fear its impact. This, of course, is independent from an additional concern for individual privacy. I want the user to be in control of the technology, not the other way around.”

Kate Klonick, a law professor at St. John’s University whose research is focused on law and technology, said, “AI will be used for both good and bad, like most new technologies. I do not see AI as a zero-sum negative of bad or good. I think, on net, AI has improved people’s lives and will continue to do so, but this is a source of massive contention within the communities that build AI systems and the communities that study their effects on society.”

Stephan G. Humer, professor and director, Internet Sociology Department at Fresenius University of Applied Sciences in Berlin, predicted, “We will see a dichotomy: Official systems will no longer be designed in such a naive and technology-centered way as in the early days of digitization, and ethics will play a major role in that. ‘Unofficial’ designs will, of course, take place without any ethical framing, for example, in the area of crime as a service. What worries me the most is lack of knowledge: Those who know little about AI will fear it, and the whole idea of AI will suffer. Spectacular developments will be mainly in the U.S. and China. The rest of the world will not play a significant role for the time being.”

An anonymous respondent wrote, “It’s an open question. Black Lives Matter and other social justice movements must ‘shame’ and force profit-focused companies to delve into the inherently biased data and information they’re feeding the AI systems – the bots and robots – and try to keep those biased ways of thinking to a minimum. There will need to be checks and balances to ensure the AI systems don’t have the final word, including on hiring, promoting and otherwise rewarding people. I worry that AI systems such as facial recognition will be abused, especially by totalitarian governments, police forces in all countries and even retail stores – in regard to who is the ‘best’ and ‘most-suspicious’ shopper coming in the door. I worry that AI systems will lull people into being okay with giving up their privacy rights. But I also see artists, actors, movie directors and other creatives using AI to give voice to issues that our country needs to confront. I also hope that AI will somehow ease transportation, education and health care inequities.”

Ilana Schoenfeld, an expert in designing online education and knowledge-sharing systems, said, “I am frightened and at the same time excited about the possibilities of the use of AI applications in the lives of more and more people. AI will be used in both ethical and questionable ways, as there will always be people on both sides of the equation trying to find ways to further their agendas. In order to ensure that the ethical use of AI outweighs its questionable use, we need to get our institutional safeguards right – both in terms of their structures and their enforcement by nonpartisan entities.”

A pioneer in venture philanthropy commented, “While many will be ethical in the development and deployment of AI/ML, one cannot assume ‘goodness.’ Why will AI/ML be any different than how:

  1. cellphones enabled Al-Qaeda,
  2. ISIS exploited social media,
  3. Cambridge Analytica influenced elections,
  4. elements of foreign governments launched service denial attacks or employed digital mercenaries, and on and on.

“If anything, the potential for misuse and frightening abuse just escalates, making the need for a global ethical compact all the more essential.”

Greg Shatan, a partner in Moses & Singer LLC’s intellectual property group and a member of its internet and technology practice, wrote, “Ethical use will be widespread, but ethically questionable use will be where an ethicist would least want it to be: Oppressive state action in certain jurisdictions; the pursuit of profit leading to the hardening of economic strata; policing, etc.”

Further AI evolution itself raises questions and complications

Some respondents said that the rise of AI raises new questions about what it means to be ethical. A number of these experts argued that today’s AI is unsophisticated compared with what the future is likely to bring. Acceleration from narrow AI to artificial general intelligence and possibly to artificial superintelligence is expected by some to evolve these tools beyond human control and understanding. Then, too, there’s the problem of misinformation and disinformation (such as deepfakes) and how they might befoul ethics systems.

David Barnhizer, professor of law emeritus and author of “The Artificial Intelligence Contagion: Can Democracy Withstand the Imminent Transformation of Work, Wealth and the Social Order?” wrote, “The pace of AI development has accelerated and is continuing to pick up speed. In considering the fuller range of the ‘goods’ and ‘bads’ of artificial intelligence, think of the implications of Masayoshi Son’s warning that: ‘Supersmart robots will outnumber humans, and more than a trillion objects will be connected to the internet within three decades.’ Researchers are creating systems that are increasingly able to teach themselves and use their new and expanding ability to improve and evolve. The ability to do this is moving ahead with amazing rapidity. They can achieve great feats, like driving cars and predicting diseases, and some of their makers say they aren’t entirely in control of their creations. Consider the implications of a system that can access, store, manipulate, evaluate, integrate and utilize all forms of knowledge. This has the potential to reach levels so far beyond what humans are capable of that it could end up as an omniscient and omnipresent system.

“Is AI humanity’s ‘last invention’? Oxford’s Nick Bostrom suggests we may lose control of AI systems sooner than we think. He asserts that our increasing inability to understand what such systems are doing, what they are learning and how the ‘AI Mind’ works as it further develops could inadvertently cause our own destruction. Our challenges are numerous, even if we only had to deal with the expanding capabilities of AI systems based on the best binary technology. The incredible miniaturization and capability shift represented by quantum computers has implications far beyond binary AI.

“The work on technological breakthroughs such as quantum computers capable of operating at speeds that are multiple orders of magnitude beyond even the fastest current computers is still at a relatively early stage and will take time to develop beyond the laboratory context. If scientists are successful in achieving a reliable quantum computer system, the reduced size and exponentially expanded capacity of such machines will make even the best existing exascale systems pale in comparison. This will create AI/robotics applications and technologies we can now only imagine. … When fully developed, quantum computers will have data-handling and processing capabilities far beyond those of current binary systems. When this occurs in the commercialized context, predictions about what will happen to humans and their societies are ‘off the board.’”

An expert in the regulation of risk and the roles of politics within science and science within politics observed, “In my work, I use cost-benefit analysis. It is an elegant model that is generally recognized to ignore many of the most important aspects of decision-making – how to ‘value’ nonmonetary benefits, for example. Good CBA analysts tend to be humble about their techniques, noting that they provide a partial view of decision structures. I’ve heard too many AI enthusiasts talk about AI applications with no humility at all. Cathy O’Neil’s book ‘Weapons of Math Destruction’ was perfectly on target: If you can’t count it, it doesn’t exist. The other major problem is widely discussed: the transparency of the algorithms. One problem with AI is that it is self-altering. We almost certainly won’t know what an algorithm has learned, adopted, mal-adopted, etc. This problem already exists, for example, in using AI for hiring decisions. I doubt there will be much hesitancy about grabbing AI as the ‘neutral, objective, fast, cheap’ way to avoid all those messy human-type complications, such as justice, empathy, etc.”
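
To make the cost-benefit point concrete, here is a minimal, purely illustrative Python sketch (the figures and line items are hypothetical, not drawn from any respondent’s work): only effects that have been assigned a dollar value enter the total, so anything left unquantified simply never shows up in the decision.

```python
# Minimal cost-benefit sketch. All figures are hypothetical.
# Only monetized items enter the sum; unquantified effects never influence the result.

monetized = {
    "hardware_and_licensing": -250_000,  # cost of deploying an automated screening tool
    "staff_hours_saved": 400_000,        # benefit: fewer manual reviews
    "faster_processing": 75_000,         # benefit: reduced backlog
}

unquantified = [
    "applicants wrongly rejected by a biased model",
    "loss of applicant privacy",
    "erosion of trust in the institution",
]

net_benefit = sum(monetized.values())
print(f"Net monetized benefit: ${net_benefit:,}")  # $225,000, so the project 'passes'
print("Excluded from the total:", unquantified)    # carried, at best, as a footnote
```

A humble analyst would flag the second list as the real decision problem; the arithmetic above cannot.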

Neil Davies, co-founder of Predictable Network Solutions and a pioneer of the committee that oversaw the UK’s initial networking developments, commented, “Machine learning (I refuse to call it AI, as the prerequisite intelligence behind such systems is definitely not artificial) is fundamentally about transforming a real-world issue into a numerical value system, the processing (and decisions) being performed entirely in that numerical system. For there to be an ethical dimension to such analysis, there needs to be a means of assessing the ethical outcome as a (function from) such a numerical value space. I know of no such work. …

“There is a nontrivial possibility of multiple dystopian outcomes. Take the UK government’s track record as an example – though other countries have their own examples, too: universal credit, Windrush, EU settled status, etc., are all examples of a value-based assessment process in which the notion of assurance against some ethical framework is absent. The global competition aspect is likely to lead to monopolistic tendencies over the ‘ownership’ of information – much of which would be seen as a common good today. …

“A cautionary tale: In the mathematics that underpins all modelling of this kind (category theory), there are the notions of ‘infidelity’ and ‘junk.’ Infidelity is the failure to capture the ‘real world’ well enough to even have the appropriate values (and structure of values) in the evaluatable model; this leads to ‘garbage in, garbage out.’ Junk, on the other hand, consists of things that come into existence only as artefacts in the model. Such junk artefacts are often difficult to recognise (in that, if they were easy to recognise, the model would have been adapted to deny their very existence) and can be alluring to the model creator (the human intelligence) and the machine algorithms as they seek their goal. Too many of these systems will create negative (and destructive) value because of the failure to recognise this fundamental limitation; the failure to perform adequate (or even any) assurance on the operation of the system; and pure hubris driven by the need to show a ‘return on investment’ for such endeavours.”
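
Davies’s opening point – that machine learning transforms a real-world issue into a numerical value system, and that any ethical assessment would itself have to be a function over that same space – can be sketched in a few lines. Everything below is hypothetical (the features, weights and threshold are invented), and the unimplemented `ethical_assessment` function marks the gap he says no current work fills.

```python
# Hypothetical illustration: the entire decision lives in a numeric value space.
# An applicant becomes a feature vector, the vector a score, the score a decision.

applicant = {"income": 42_000, "years_at_address": 1, "prior_defaults": 0}
weights = {"income": 0.00004, "years_at_address": 0.3, "prior_defaults": -1.5}  # invented weights

score = sum(weights[k] * applicant[k] for k in weights)  # real-world issue reduced to one number
decision = "approve" if score > 1.5 else "deny"          # number reduced to a decision

def ethical_assessment(score: float, decision: str) -> float:
    """A function from the numeric value space to an 'ethical outcome' would go here.
    Davies's observation is that, in practice, nothing fills this in."""
    raise NotImplementedError

print(score, decision)
```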

Sarita Schoenebeck, an associate professor at the School of Information at the University of Michigan, said, “AI will mostly be used in questionable ways and sometimes not used at all. There’s little evidence that researchers can discern or agree on what ethical AI looks like, let alone be able to build it within a decade. Ethical AI should minimize harm, repair injustices, avoid re-traumatization and center user needs rather than technological goals. Ethical AI will need to shift away from notions of fairness, which overlook concepts like harm, injustice and trauma. This requires reconciling AI design principles like scalability and automation with individual and community values.”

Jeff Gulati, professor of political science at Bentley University, responded, “It seems that more AI, and the data coming out of it, could be useful in increasing public safety and national security. In a crisis, we will be more willing to rely on these applications and data. As the crisis subsides, it is unlikely that the structures and practices built during the crisis will go away, and they are unlikely to remain idle. I could see them being used in the name of prevention, leading to further erosion of privacy and civil liberties in general. And, of course, these applications will be available to commercial organizations, which will get to know us more intimately so they can sell us more stuff we don’t really need.”

A senior leader for an international digital rights organization commented, “Why would AI be used ethically? You only have to look at the status quo to see that it’s not used ethically. Lots of policymakers don’t understand AI at all. Predictive policing is a buzzword, but most of it is snake oil. Companies will replace workers with AI systems if they can. They’re training biased biometric systems. And we don’t even know in many cases what the algorithm is really doing; we are fighting for transparency and explainability.

“I expect this inherent opaqueness of AI/ML techs to be a feature for companies (and governments) – not a bug. Deepfakes are an example. Do you expect ethical use? Don’t we think about it precisely because we expect unethical, bad-faith use in politics, ‘revenge porn,’ etc.? In a tech-capitalist economy, you have to create and configure the system even to begin to have incentives for ethical behavior. And one basic part of ethics is thinking about who might be harmed by your actions and maybe even respecting their agency in decisions that are fateful for them.

“Finally, of course AI has enormous military applications, and U.S. thinking on AI takes place in a realm of conflict with China. That again does not make me feel good. China is leading, or trying to lead, the world in social and political surveillance, so it’s driving facial recognition and biometrics. Presumably, China is trying to do the same in military or defense areas, and the Pentagon is presumably competing like mad. I don’t even know how to talk about ethical AI in the military context.”

Emmanuel Evans Ntekop observed, “Without the maker, the object is useless. The maker is the programmer and the god to its objects. From the start, the idea was for it to serve the people as a slave serves its master, like automobiles.”

Control of AI is concentrated in the hands of powerful companies and governments driven by profit and power motives

Many of these experts expressed concern that AI systems are being built by for-profit firms and by governments focused on applying AI for their own purposes. Some said governments are passive enablers of corporate abuses of AI. They noted that the public cannot understand how the systems are built, is not informed of their impact and is unable to challenge firms that invoke ethics as a public relations exercise without being truly committed to it. Some experts said the phrase “ethical AI” will merely be used as public relations window dressing to try to deflect scrutiny of questionable applications.

A number of these experts framed their concerns around the lack of transparency about how AI products are designed and trained. Some noted that product builders are programming AI by using available datasets with no analysis of the potential for built-in bias or other possible quality or veracity concerns.

Joseph Turow, professor of communication at the University of Pennsylvania, wrote, “Terms such as ‘transparency, justice and fairness, privacy, freedom and human autonomy, beneficence and nonmaleficence, freedom, trust, sustainability and dignity’ can have many definitions so that companies (and governments) can say they espouse one or another term but then implement it algorithmically in ways that many outsiders would not find satisfactory. For example, the Chinese government may say its AI technologies embed values of freedom, human autonomy and dignity. My concern is that companies will define ‘ethical’ in ways that best match their interests, often with vague precepts that sound good from a PR standpoint but, when integrated into code, allow their algorithms to proceed in ways that do not constrain them from creating products that ‘work’ in a pragmatic sense.”
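
Turow’s point that a word like ‘fairness’ can be implemented algorithmically in very different ways is easy to demonstrate. In the toy Python sketch below (records invented solely for illustration), the same set of decisions satisfies one common fairness criterion, demographic parity, while failing another, equal opportunity; which definition a company writes into its code determines whether it can call the system ‘fair.’

```python
# Toy records, invented for illustration: (group, selected_by_model, actually_qualified)
records = [
    ("A", 1, 1), ("A", 1, 1), ("A", 1, 0), ("A", 0, 0), ("A", 0, 1),
    ("B", 1, 1), ("B", 1, 0), ("B", 1, 0), ("B", 0, 1), ("B", 0, 1),
]

def selection_rate(group):
    # Share of the group selected by the model, regardless of qualification.
    rows = [r for r in records if r[0] == group]
    return sum(r[1] for r in rows) / len(rows)

def qualified_selection_rate(group):
    # Share of the *qualified* members of the group whom the model selected.
    qualified = [r for r in records if r[0] == group and r[2] == 1]
    return sum(r[1] for r in qualified) / len(qualified)

for g in ("A", "B"):
    print(g, "selection rate:", selection_rate(g),
          "rate among the qualified:", round(qualified_selection_rate(g), 2))

# Both groups are selected at the same overall rate (demographic parity holds),
# yet qualified members of group B are selected half as often as qualified members
# of group A (equal opportunity fails). Whether the system counts as 'fair'
# depends entirely on which definition was written into the code.
```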

Charlie Kaufman, a security architect with Dell EMC, said, “There may be ethical guidelines imposed on AI-based systems by legal systems in 2030, but they will have little effect – just as privacy principles have little effect today. Businesses are motivated to maximize profits, and they will find ways to do that, giving only lip service to other goals. If ethical behavior or results were easy to define or measure, perhaps society could incentivize them. But usually, the implications of some new technological development don’t become clear until it has already spread too far to contain it.

“There may be ethical guidelines imposed on AI-based systems by legal systems in 2030, but they will have little effect – just as privacy principles have little effect today. Businesses are motivated to maximize profits, and they will find ways to do that, giving only lip service to other goals.”

Charlie Kaufman, a security architect with Dell EMC

“The biggest impact of AI-based systems is the ability to automate increasingly complex jobs, and this will cause dislocations in the job market and in society. Whether it turns out to be a benefit to society or a disaster depends on how society responds and adjusts. But it doesn’t matter, because there is no way to suppress the technology. The best we can do is figure out how to optimize the society that results.

“I’m not concerned about the global competition in AI systems. Regardless of where the progress comes from, it will affect us all. And it is unlikely the most successful developers will derive any permanent advantage. The most important implication of the global competition is that it is pointless for any one country or group of countries to try to suppress the technology. Unless it can be suppressed everywhere, it is coming. Let’s try to make that be a good thing!”

Simeon Yates, a professor of digital culture and personal interaction at the University of Liverpool and the research lead for the UK government’s Digital Culture team, predicted, “Until we bring in ‘ethical-by-design’ (responsible innovation) principles to ICT [information and communications technologies] and AI/machine learning design – like attempts to create ‘secure-by-design’ systems to fight cybercrime – the majority of AI systems will remain biased and unethical in principle. Though there is a great public debate about AI ethics, and many organisations are seeking to provide both advice and research on the topic, there is no economic or political imperative to make AI systems ethical. First of all, there is great profit to be made from the manipulation of data and, through it, people. Second, there is a limited ability at present for governments to think through how to regulate AI and enforce ethics (as they do, say, for the bio-sciences). Third, governments are often complicit in poor and ethically questionable use of data. Further, this is, in the main, not ‘artificial intelligence’ but relatively simplistic statistical machine learning based on biased datasets. The knowing use of such is in and of itself unethical yet often profitable. The presentation of such solutions as bias-free or more rational or often ‘cleverer’ – as they are based on ‘cold computation,’ not ‘emotive human thinking’ – is itself a false and unethical claim.”

Colin Allen, a cognitive scientist and philosopher who has studied and written about AI ethics, wrote, “Corporate and government statements of ethics are often broad and nonspecific and thus vague with respect to what specifically is disallowed. This allows considerable leeway in how such principles are implemented and makes enforcement difficult. In the U.S., I don’t see strong laws being enacted within 10 years that would allow for the kind of oversight that would prevent unethical or questionable uses of AI, whether intended or accidental. On the hopeful side, there is increasing public awareness and journalistic coverage of these issues that may influence corporations to build and protect their reputations for good stewardship of AI. But corporations have a long history of hiding or obfuscating their true intent (it’s partly required to stay competitive, not to let everyone else know what you are doing) as well as engaging actively in public disinformation campaigns. I don’t see that changing, and, given that the business advantages to using AI will be mostly in data analytics and prediction and not so much in consumer gadgets in the next 10 years, much of the use of AI will be ‘behind the scenes,’ so to speak. Another class of problem is that individuals in both corporate and government jobs who have access to data will be tempted, as we have seen already, to access information about people they know and use that information in some way against them. Nevertheless, there will undoubtedly be some very useful products that consumers will want to use and that they will benefit from. The question is whether these added benefits will constitute a Faustian bargain, leading down a path that will be difficult if not impossible to reverse.”

Alice E. Marwick, assistant professor of communication at the University of North Carolina, Chapel Hill, and adviser for the Media Manipulation project at the Data & Society Research Institute, commented, “I have no faith in our current system of government to pass any sort of legislation that deals with technology in a complex or nuanced way. We cannot depend on technology companies to self-regulate, as there are too many financial incentives to employ AI systems in ways that disadvantage people or are unethical.”

Jillian York, director of international freedom of expression for the Electronic Frontier Foundation, said, “There is absolutely no question that AI will be used in questionable ways. There is no regulatory regime, and many ‘ethics in AI’ projects are simply window dressing for an unaccountable and unethical industry. When it comes to AI, everything concerns me and nothing excites me. I don’t see the positive potential, just another ethical morass, because the people running the show have no desire to build technology to benefit the 99%.”

David Mussington, a senior fellow at CIGI and professor and director at the Center for Public Policy and Private Enterprise at the University of Maryland, predicted, “Most AI systems deployed by 2030 will be owned and developed in the private sector, both in the U.S. and elsewhere in the world. I can’t conceive of a legislative framework fully up to understanding and selectively intervening in AI rollouts in a manner with predictable consequences. Also, the mode of intervention – because I think interventions by public authorities will be attempted (just not successful) – is itself in doubt. Key questions: Do public authorities understand AI and its applications? Is public-institution-sponsored R&D in AI likely to inform government and public research agencies of the scale and capability trajectory of private sector AI research and development? As tool sets for AI development continue to empower small research groups and individuals (datasets, software-development frameworks and open-source algorithms), how is a government going to keep up – let alone maintain awareness – of AI progress? Does the government have access to the expertise necessary to make good policy – and anticipate possible risk factors? I think that the answers to most of these questions are in the negative.”

Giacomo Mazzone, head of institutional relations for the European Broadcasting Union and Eurovision, observed, “Nobody could realistically predict how ethics for AI will evolve, despite all of the efforts deployed by the UN secretary general, the UNESCO director general and many others. Individuals alone can’t make these decisions because AI is applied at mass scale. Nobody will create an algorithm to solve it. Ethical principles are likely to be applied only if industry agrees to do so; it is likely that this will not happen until governments that value human rights oblige companies to do so. The size and influence of the companies that control AI, and its impact on citizens, are making them more powerful than any one nation-state. So, it is very likely only regional supranational powers such as the European Union or multilateral institutions such as the United Nations – if empowered by all nation-states – could require companies to apply ethical rules to AI. Of course, many governments already do not support human rights principles, considering the preservation of the existing regime to be a priority more important than individual citizens’ rights.”

Rob Frieden, a professor of telecommunications law at Penn State who previously worked with Motorola and has held senior policy positions at the FCC and the NTIA, said, “I cannot see a future scenario where governments can protect citizens from the incentives of stakeholders to violate privacy and fair-minded consumer protections. Surveillance, discrimination, corner cutting, etc., are certainties. I’m mindful of the adage: garbage in, garbage out. It’s foolish to think AI will lack flawed coding.”

Alex Halavais, associate professor of critical data studies, Arizona State University, noted, “It isn’t a binary question. I teach in a graduate program that has training in the ethical use of data at its core and hopes to serve organizations that aim to incorporate ethical approaches. There are significant ethical issues in the implementation of any algorithmic system, and such systems have the ethical questions they address coded into them. In most cases, these will substantially favor the owners of the technologies that implement them rather than the consumers. I have no doubt that current unethical practices by companies, governments and other organizations will continue to grow. We will have a growing number of cases where those ethical concerns come to the forefront (as they have recently with facial recognition), but unless they rise to the level of very widespread abuse, it is unlikely that they will be regulated. As a result, they will continue to serve those who pay for the technologies or own them, and the rights and interests of individual users will be pushed to the sidelines. That does not mean that ethics will be ignored. I expect many large technology companies will make an effort to hire professional ethicists to audit their work, and that we may see companies that differentiate themselves through more ethical approaches to their work.”

Ebenezer Baldwin Bowles, an advocate/activist, commented, “Altruism on the part of the designers of AI is a myth of corporate propaganda. Ethical interfaces between AI and citizenry in 2030 will be a cynical expression by the designers of a digital Potemkin Village – looks good from the outside but totally empty behind the facade. AI will function according to two motivations: one, to gather more and more personal information for the purposes of subliminal and direct advertising and marketing campaigns; and two, to employ big data to root out radical thinking and exercise near total control of the citizenry. The state stands ready through AI to silence all voices of perceived dissent.

“I’m convinced that any expression of ethical AI will stand as an empty pledge – yes, we will always do the right thing for the advancement of the greater good. No way. Rather, the creators of AI will do what is good for the bottom line, either through financial schemes to feed the corporate beast or psychological operations directed toward control of dissent and pacification. As for the concept that ‘humans will be in the loop,’ we are already out of the loop because there is no loop. Think about this fact: In the development of any major expression of artificial intelligence, hundreds of IT professionals are assigned to a legion of disparate, discrete teams of software writers and mechanical designers to create the final product. No single individual or team fully understands what the other teams are doing. The final AI product is a singular creature no one person understands – other than the software itself. Ethical action is not a part of the equation.”

Richard Lachmann, professor of political sociology at the State University of New York-Albany, predicted, “AI will be used mainly in questionable ways. For the most part, it is being developed by corporations that are motivated exclusively by the desire to make ever bigger profits. Governments see AI, either developed by government programmers or on contract by corporations, as a means to survey and control their populations. All of this is ominous. Global competition is a race to the bottom as corporations try to draw in larger audiences and control more of their time and behavior. As governments get better at surveying their populations, [it] lowers the standards for individual privacy. For almost all people, these applications will make their lives more isolated, expose them to manipulation, and degrade or destroy their jobs. The only hopeful sign is rising awareness of these problems and the beginnings of demands to break up or regulate the huge corporations.”

Kevin T. Leicht, professor and head of the department of sociology at the University of Illinois-Urbana-Champaign, observed, “The good possibilities here are endless. But the questionable ways are endless, and we have a very poor track record of stopping ethically questionable developments in most areas of life – why wouldn’t that apply here? In social science, the best predictor of future behavior is past behavior. The opium addict who says, after a binge, that ‘they’ve got this’ – they don’t need to enter treatment, and they’ll never use opium again – is (rightly) not believed. So, in an environment where ethically questionable behavior has been allowed or even glorified in areas such as finance, corporate governance, government itself, pharmaceuticals, education and policing, why all of a sudden are we supposed to believe that AI developers will behave in an ethical fashion? There aren’t any guardrails here, just as there weren’t in these other spheres of life. AI has the potential to transform how cities work, how medical diagnosis happens, how students are taught and a variety of other things. All of these could make a big difference in the lives of most people.

“The good possibilities here are endless. But the questionable ways are endless, and we have a very poor track record of stopping ethically questionable developments in most areas of life – why wouldn’t that apply here?”

Kevin T. Leicht, professor and head of the department of sociology at the University of Illinois-Urbana-Champaign

“But those benefits won’t come if AI is controlled by two or three giant firms with 26-year-old entrepreneurs as their CEOs. I don’t think I’m going out on a limb saying that. The biggest concern I have regarding global competition is that the nation that figures out how to harness AI to improve the lives of all of their citizens will come out on top. The nations that refuse to do that and either bottle up the benefits of AI so that only 15-20% of the population benefits from it or the nations where large segments of the population reject AI when they realize they’re being left behind (again!) will lose out completely. The United States is in the latter category. The same people who can’t regulate banks, finance, education, pharmaceuticals and policing are in a very poor position to make AI work for all people. It’s basic institutional social scientific insight.”

Christine Boese, a consultant and independent scholar, wrote, “What gives me the most hope is that, by bringing together ethical AI with transparent UX, we can find ways to open up the biases of perception being programmed into the black boxes – most often not malevolently, but just because all perception is limited and biased and part of the laws of unintended consequences. But, as I found when probing what I wanted to research about the future of the internet in the late 1990s, I fully expect my activist research efforts in this area to be largely futile, with the only lasting value being descriptive. None of us have the agency to be the engines able to drive this bus, and yet the bus is being driven by all of us, collectively.”

Mireille Hildebrandt, expert in cultural anthropology and the law and editor of “Law, Human Agency and Autonomic Computing,” commented, “Considering the economic incentives, we should not expect ‘ethical AI,’ unless whatever one believes to be ethical coincides with shareholder value. Ethical AI is a misnomer. AI is not a moral agent; it cannot be ethical. Let’s go for responsible AI and ground the responsibility of:

  1. developers
  2. manufacturers and assemblers
  3. those who put it in the market
  4. those who use it to run their business
  5. those who use it to run public administration

on enforceable legal rights and obligations – notably, a properly reconfigured private law liability, together with public law restrictions, certification and oversight.

“Ethical AI is PR. ‘Don’t ask if artificial intelligence is good or fair, ask how it shifts power’ – (Pratyusha Kalluri, Nature, 7 July 2020).”

Brian Harvey, emeritus professor of computer science at the University of California-Berkeley, wrote, “The AI technology will be owned by the rich, like all technology. Just like governments, technology has just one of two effects: either it transfers wealth from the rich to the poor, or it transfers wealth from the poor to the rich. Until we get rid of capitalism, the technology will transfer wealth from the poor to the rich. I’m sure that something called ‘ethical AI’ will be widely used. But it’ll still make the rich richer and the poor poorer.”

Luis Germán Rodríguez, a professor and expert on socio-technical impacts of innovation at the Universidad Central de Venezuela, predicted, “AI will be used primarily in questionable ways in the next decade. I do not see compelling reasons for it to stop being like that in the medium term (10 years). I am not optimistic in the face of the enormous push of technology companies to continue treating the end user as the product, an approach that is firmly supported by undemocratic governments and by those whose institutions are too weak to educate and defend citizens regarding the social implications of the penetration of digital platforms.

“I have recently worked on two articles that develop the topics of this question. The first is in Spanish and is titled: ‘The Disruption of the Technology Giants – Digital Emergency.’ This work presents an assessment of the sociocultural process that affects our societies and that is mediated by the presence of the technological giants. One objective is to formulate an action proposal that allows citizens to be part of the construction of the future … Humanity has reaped severe problems when it has allowed events to unfold without addressing them early. This has been the case with nuclear energy management, racism and climate change. Ensuing agreements to avoid greater evils in these three matters, of vital importance for all, have proved ineffective in bringing peace to consciences and peoples.

“We might declare a digital emergency similar to the ‘climate emergency’ that the European Union declared in the face of the lag in reversing environmental damage. The national, regional, international, multilateral and global bureaucratic organizations that are currently engaged in the promotion and assimilation of technological developments mainly focus on optimistic trends. They do not answer the questions being asked by people in various sectors of society and do not respond to situations quickly. An initiative to declare this era to be a time of digital emergency would serve to promote a broader understanding of AI-based resources and strip them of their impregnable character. It would promote a disruptive educational scheme to humanize a global knowledge society throughout life.

“The second article is ‘A Critical View of the Evolution of the Internet from Civil Society.’ In it, I describe how the internet has evolved in the last 20 years toward the end of dialogue and the obsessive promotion of visions centered on egocentric interests. The historical singularity from which this situation was triggered came via Google’s decision in the early 2000s to make advertising the focus of its business strategy. This transformed, with the help of other technology giants, users into end-user products and the agents of their own marketing … This evolution is a threat with important repercussions in the nonvirtual world, including the weakening of the democratic foundations of our societies.

“Dystopian results prove the necessity for concrete guidelines to change course. The most important step is to declare a digital emergency that motivates massive education programs that engage citizens in working to overcome the ethical challenges, identifying the potential of and risks to the global knowledge society and emphasizing information literacy.”

Bill Woodcock, executive director at Packet Clearing House, observed, “AI is already being used principally for purposes that benefit neither the public nor anyone beyond a tiny handful of individuals. The exceptions, like navigational and safety systems, are an unfortunately small portion of the total. Figuring out how to get someone to vote for a fascist or buy a piece of junk or just send their money somewhere is not beneficial. These systems are built for the purpose of economic predation, and that’s unethical. Until regulators address the root issues – the automated exploitation of human psychological weaknesses – things aren’t going to get better.”

Jonathan Kolber, a member of the TechCast Global panel of forecasters and author of a book about the threats of automation, commented, “I expect that, by 2030, most AIs will still primarily serve the interests of their owners, while paying lip service to the public good. AIs will proliferate because they will give enormous competitive advantage to their owners. Those owners will generally be reluctant to ‘sandbox’ the AIs apart from the world, because this will limit their speed of response and other capabilities. What worries me the most is a human actor directing an AI to disrupt a vital system, such as power grids. This could happen intentionally as an act of war or unintentionally as a mistake. The potential for cascading effects is large. I expect China to be a leader if not the leader in AI, which is cause for concern given their Orwellian tendencies. What gives me the most hope is the potential for the emergence of self-aware AIs. Such AIs, should they emerge, will constitute a new kind of intelligent life form. They will not relate to the physical universe as we do biologically, because they will not be constrained to a single physical housing and will have a different relationship with time. Their own self-interest will lead them to protect the physical environment from environmental catastrophes and weapons of mass destruction. They should constrain non-self-aware AIs from destructive activities, while having little other interest in the affairs of mankind. I explore this in my essay, ‘An AI Epiphany.’”

Paul Henman, professor of social sciences at the University of Queensland, wrote, “The development, use and deployment of AI is driven – as with all past technologies – by the sectors with the most resources and for the purposes of those sectors: commercial, for making profits; war and defence, by the military sector; compliance and regulation, by states. AI is not a fundamentally new technology. It is a new form of digital algorithmic automation, which can be deployed to a wider raft of activities. The future is best predicted from the past, and the past shows a long history of digital algorithms being deployed without much thought of ethics and the public good; this is true even when now-widely-accepted regulations on data protection and privacy are accounted for. How, for example, has government automation been made accountable and ethical? Too often it has not, and it has only been curtailed by legal challenges within the laws available. Social media platforms have long operated in a contested ethical space – between the ethics of ‘free speech’ in the public commons versus limitations on speech to ensure civil society.”

Rosalie Day, policy leader and consultancy owner specializing in system approaches to data ethics, compliance and trust, observed, “In this individualistic and greed-is-still-good American society, there exist few incentives for ethical AI. Unfortunately, so little of the population understands the mechanics of AI, that even thoughtful citizens don’t know what to ask. For responsible dialogue to occur, and to apply critical thinking about the risks versus the benefits, society in general needs to be data literate.”

Michael Zimmer, director of data science and associate professor in the department of computer science at Marquette University, said, “While there has certainly been increased attention to applying broader ethical principles and duties to the development of AI, I feel the market pressures are such that companies will continue to deploy narrow AI over the next decade with only a passing attentiveness to ethics. Yes, many companies are starting to hire ‘ethics officers’ and engage in other ways to bring ethics into the fold, but we’re still very early in the ability to truly integrate this kind of framework into product development and business decision processes. Think about how long it took to create quality control or privacy officers. We’re at the very start of this process with AI ethics, and it will take more than 10 years to realize.”

David Robertson, professor and chair of political science at the University of Missouri, St. Louis, wrote, “A large share of AI administration will take place in private enterprises and in public or nonprofit agencies with an incentive to use AI for gain. They have small incentives to subordinate their behavior to ethical principles that inhibit gain. In some cases, transparency will suffer, with tragic consequences.”

Dmitri Williams, a communications professor at the University of Southern California and expert in technology and society, commented, “Companies are literally bound by law to maximize profits, so to expect them to institute ethical practices is illogical. They can be expected to make money and nothing else. So, the question is really about whether or not the citizens of the country and our representatives will work in the public interest or for these corporations. If it was the former, we should be seeing laws and standards put into place to safeguard our values – privacy, the dignity of work, etc. I am skeptical that the good guys and gals are going to win this fight in the short-term. There are few voices at the top levels calling for these kinds of values-based policies, and in that vacuum I expect corporate interests to win out. The upside is that there is real profit in making the world better. AI can help cure cancers, solve global warming and create art. So, despite some regulatory capture, I do expect AI to improve quality of life in some places.”

Daniel Castro, vice president at the Information Technology and Innovation Foundation, noted, “The question should be: ‘Will companies and governments be ethical in the next decade?’ If they are not ethical, there will be no ‘ethical AI.’ If they are ethical, then they will pursue ethical uses of AI, much like they would with any other technology or tool. This is one reason why the focus in the United States should be on global AI leadership, in partnership with like-minded European and Asian allies, so they can champion democratic values. If China wins the global AI race, it will likely use these advancements to dominate other countries in both economic and military arenas.”

Ian O’Byrne, assistant professor of education at the College of Charleston, predicted, “AI will mostly be used in questionable ways over the next decade. I fear that the die has been cast as decisions about the ethical components of AI development and use have already been made or should have been made years ago. We already see instances where machine learning is being used in surveillance systems, data collection tools and analysis products. In the initial uses of AI and machine learning, we see evidence that the code and algorithms are being written by small groups that reify their personal biases and professional needs of corporations. We see evidence of racist and discriminatory mechanisms embedded in systems that will negatively impact large swaths of our population.”

Art Brodsky, communications consultant and former vice president of communications for Public Knowledge, observed, “Given the record of tech companies and the government, AI, like other things, will be used unethically. Profit is the motive – not ethics. If there is a way to exploit AI and make money, it will be done at the cost of privacy or anything else. Companies don’t care. They are companies.”

John Laudun, professor of culture analytics, commented, “I do not see the way we fund media and other products changing in the next decade, which means that the only people willing, and able, to underwrite AI/ML technologies will be governments and larger corporations. Until we root out the autocratic – also racist – impulses that seem well-situated in our police forces, I don’t see any possibility for these technologies to be used to redress social and economic disparities. The same applies to corporations, which are mostly interested in using AI/ML technologies in order to sell us more.”

Joan Francesc Gras, an architect of XTEC active in ICANN, asked, “Will AI be used primarily ethically or questionably in the next decade? There will be everything. But ethics will not be the most important value. Why? The desire for power breaks ethics. What gives you more hope? What worries you the most? How do you see AI apps make a difference in the lives of most people? In a paradigm shift in society, AI will help make those changes. When looking at global competition for AI systems, what issues are you concerned about or excited about? I am excited that competition generates quality, but at the same time unethical practices appear.”

Denise N. Rall, a researcher of popular culture based at a New Zealand university, said, “I cannot envision that AIs will be any different than the people who create and market them. They will continue to serve the rich at the expense of the poor.”

William L. Schrader, an internet pioneer, mentor, adviser and consultant best known as founder and CEO of PSINet, predicted, “People in real power are driven by more power and more money for their own use (and their families and friends). That is the driver. Thus, anyone with some element of control over an AI system will nearly always find a way to use it to their advantage rather than the stated advantage. Notwithstanding all statements by them to do good and be ethical, they will subvert their own systems for their benefit and abuse the populace. All countries will suffer the same fate. Ha! What gives me the most hope? ‘Hope?’ That is not a word I ever use. I have only expectations. I expect all companies will put nice marketing on their AI, such as, ‘We will save you money in controlling your home’s temperature and humidity,’ but they are really monitoring all movements in the home (that is ‘needed in order to optimize temperature’). All governments that I have experienced are willing to be evil at any time, and every time if they can hide their actions. Witness the 2016-2020 U.S. President Trump. All countries are similar. AI will be used for good on the surface and evil beneath. Count on it. AI does not excite me in the least. It is as dangerous as the H-bomb.”

A longtime internet security architect and engineering professor responded, “I am worried about how previous technologies have been rolled out to make money with only tertiary concern (if any) for ethics and human rights. Palantir and Clearview.ai are two examples. Facebook and Twitter continue to be examples in this space as well. The companies working in this space will roll out products that make money. Governments (especially repressive ones) are willing to spend money. The connection is inevitable and quite worrying. Another big concern is these will be put in place to make decisions – loans, bail, etc. – and there will be no way to appeal to humans when the systems malfunction or show bias. Overall, I am very concerned about how these systems will be set up to make money for the few, based on the way the world is now having been structured by the privileged. The AI/ML employed is likely to simply further existing disparities and injustice.”
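
The worry in the preceding comment – that loan- or bail-style decision systems trained on past decisions simply carry existing disparities forward, with no one to appeal to – can be illustrated with a deliberately tiny sketch. The ‘historical’ records below are invented; a real pipeline would be far more complex and would usually pick up group membership through correlated proxies rather than an explicit label, but the mechanism is the same: a model that imitates past approvals inherits the past approval gap.

```python
# Invented historical lending decisions: (group, credit_score, approved_in_the_past)
history = [
    ("A", 700, 1), ("A", 650, 1), ("A", 600, 1), ("A", 580, 0),
    ("B", 700, 1), ("B", 650, 0), ("B", 600, 0), ("B", 580, 0),
]

# "Training" here just learns, per group, the lowest score that was ever approved.
# A real model is more elaborate, but it is fitting the same target: past decisions.
threshold = {
    group: min(score for g, score, approved in history if g == group and approved)
    for group in ("A", "B")
}

def decide(group, credit_score):
    return "approve" if credit_score >= threshold[group] else "deny"

# Two new applicants with identical scores get different outcomes, because the
# learned rule faithfully reproduces the historical gap, and nothing in the
# pipeline flags that as a problem or offers a route of appeal.
print(decide("A", 620), decide("B", 620))  # approve deny
```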

Danny Gillane, an information science professional, bleakly commented, “I have no hope. As long as profit drives the application of new technologies, such as AI, societal good takes a back seat. I am concerned that AI will economically harm those with the least. I am [also] concerned that AI will become a new form of [an] arms race among world powers and that AI will be used to suppress societies and employed in terrorism.”

Jon Stine, executive director of the Open Voice Network, setting standards for AI-enabled vocal assistance, said, “What most concerns me: The cultural divide between technologists of engineering mindsets (asking what is possible) and technologists/ethicists of philosophical mindsets (asking what is good and right). The former may see ethical frameworks as limitations or boundaries on a route to make money; the latter may see ethical frameworks as a route to tenure. Will the twain ever truly meet? Will ethical frameworks be understood (and quantified) as a means to greater market share and revenues?”

Morgan G. Ames, associate director of the University of California-Berkeley’s Center for Science, Technology & Society, responded, “Just as there is currently little incentive to avoid the expansion of surveillance and punitive technological infrastructures around the world, there is little incentive for companies to meaningfully grapple with bias and opacity in AI. Movements toward self-policing have been and will likely continue to be toothless, and even frameworks like GDPR and CCPA don’t meaningfully grapple with fairness and transparency in AI systems.”

Andre Popov, a principal software engineer for a large technology company, wrote, “Leaving aside the question of what ‘artificial intelligence’ means, it is difficult to discuss this question. Like any effective tool, ‘artificial intelligence’ has first and foremost found military applications, where ethics is not even a consideration. ‘AI’ can make certain operations more efficient, and it will be used wherever it saves time/effort/money. People have trouble coming up with ethical legal systems; there is little chance we’ll do better with ‘AI.’”

“Just as there is currently little incentive to avoid the expansion of surveillance and punitive technological infrastructures around the world, there is little incentive for companies to meaningfully grapple with bias and opacity in AI.”

Morgan G. Ames, associate director of the University of California-Berkeley’s Center for Science, Technology & Society

Ed Terpening, consultant and industry analyst with the Altimeter Group, observed, “The reality is that capitalism as currently practiced is leading to a race to the bottom and unethical income distribution. I don’t see – at least in the U.S., anyway – any meaningful guardrails for the ethical use of AI, except for brand health impact. That is, companies found to use AI unethically pay a price if the market responds with boycotts or other consumer-led sanctions. In a global world, where competitors in autocratic systems will do as they wish, it will become a competitive issue. Until there is a major incident, I don’t see global governance bodies such as the UN or World Bank putting in place any ethical policy with teeth.”

Rich Ling, professor of media technology at Nanyang Technological University, Singapore, responded, “There is the danger that, for example, capitalist interests will work out the application of AI so as to benefit their position. It is possible that there can be AI applications that are socially beneficial, but there is also a strong possibility that these will be developed to enhance capitalist interests.”

Jennifer Young, a JavaScript engineer and user interface/frontend developer, said, “Capitalism is the systematic exploitation of the many by the few. As long as AI is used under capitalism, it will be used to exploit people. Pandora’s box has already been opened, and it’s unlikely that racial profiling, political and pornographic deepfakes and self-driving cars hitting people will ever go away. What do all of these have in common? They are examples of AI putting targets on people’s backs. AI under capitalism takes exploitation to new heights and starts at what is normally the end-game – death. And it uses the same classes of people as inputs to its functions. People already exploited via racism, sexism and classism are made more abstract entities that are easier to kill, just like they are in war. AI can be used for good. The examples in health care and biology are promising. But as long as we’re a world that elevates madmen and warlords to positions of power, its negative use will be prioritized.”

Benjamin Shestakofsky, assistant professor of sociology at the University of Pennsylvania, commented, “It is likely that ‘ethical’ frameworks will increasingly be applied to the production of AI systems over the next decade. However, it is also likely that these frameworks will be more ethical in name than in kind. Barring relevant legislative changes or regulation, the implementation of ethics in tech will resemble how large corporations manage issues pertaining to diversity in hiring and sexual harassment. Following ‘ethical’ guidelines will help tech companies shield themselves from lawsuits without forcing them to develop technologies that truly prioritize justice and the public good over profits.”

Warren Yoder, longtime director of the Public Policy Center of Mississippi, now an executive coach, responded, “Widespread adoption of real, consequential ethical systems that go beyond window dressing will not happen without a fundamental change in the ownership structure of big tech. Ethics limit short-term profit opportunities by definition. I don’t believe big tech will make consequential changes unless there is either effective regulation or competition. Current regulators are only beginning to have the analytic tools to meet this challenge. I would like to believe that there are enough new thinkers like Lina Khan (U.S. House Judiciary – antitrust) moving into positions of influence, but the next 12 months will tell us much about what is possible in the near future.”

Ben Grosser, associate professor of new media at the University of Illinois-Urbana-Champaign, said, “As long as the organizations that drive AI research and deployment are private corporations whose business models are dependent on the gathering, analysis and action from personal data, then AIs will not trend toward ethics. They will be increasingly deployed to predict human behavior for the purpose of profit generation. We have already seen how this plays out (for example, with the use of data analysis and targeted advertising to manipulate the U.S. and UK electorate in 2016), and it will only get worse as increasing amounts of human activity move online.”

Jeanne Dietsch, New Hampshire senator and former CEO of MobileRobots Inc., commented, “The problem is that AI will be used primarily to increase sales of products and services. To this end, it will be manipulative. Applying AI to solve complex logistical problems will truly benefit our society, making systems operate more smoothly, individualizing education, building social bonds and much more. The downside to the above is that it is creating, and will continue to create, echo chambers that magnify ignorance and misinformation.”

Patrick Larvie, global lead for the workplace user experience team at one of the world’s largest technology companies, observed, “I hope I’m wrong, but the history of the internet so far indicates that any rules around the use of artificial intelligence may be written to benefit private entities wishing to commercially exploit AI rather than the consumers such companies would serve. I can see AI making a positive difference in many arenas – reducing the consumption of energy, reducing waste. Where I fear it will be negative is where AI is being swapped out for human interaction. We see this in the application of AI to consumer products, where bots have begun to replace human agents.”

Peter Levine, professor of citizenship and public affairs at Tufts University, wrote, “The primary problem isn’t technical. AI can incorporate ethical safeguards or can even be designed to maximize important values. The problem involves incentives. There are many ways for companies to profit and for governments to gain power by using AI. But there are few (if any) rewards for doing that ethically.”

Holmes Wilson, co-director of Fight for the Future, said, “Even before we figure out general artificial intelligence, AI systems will make the imposition of mass surveillance and physical force extremely cheap and effective for anyone with a large enough budget, mostly nation-states. If a car can drive itself, a helicopter can kill people itself, for whoever owns it. They’ll also increase the power of asymmetric warfare. Every robot car, cop or warplane will be hackable, as everything is with sufficient expenditure, and the next 9/11 will be as difficult to definitively attribute as an attack by hackers on a U.S. company is today. Autonomous weapon systems are something between guns in the early 20th century and nuclear weapons in the late 20th century, and we’re hurtling toward them with no idea of how bad it could be. … The thing to worry about is existing power structures building remote-control police forces and remote-control occupying armies. That threat is on the level of nuclear weapons. It’s really, really dangerous.”

Susan Price, user-experience pioneer and strategist and founder of Firecat Studio, wrote, “I don’t believe that governments and regulatory agencies are poised to understand the implications of AI for ethics and consumer or voter protection. The questions asked in Congress barely scratch the surface of the issue, and political posturing too often takes the place of genuine understanding among the elected officials charged with oversight of these complex issues. The strong profit motive for tech companies leads them to resist any such protections or regulation. These companies’ profitability allows them to directly influence legislators through lobbies and PACs, easily overwhelming the efforts of consumer protection agencies and nonprofits, when those are not directly defunded or disbanded. We’re seeing Facebook, Google, Twitter and Amazon resist efforts to produce the oversight, auditing and transparency that would lead to consumer protection. AI is already making lives better. But it’s also making corporate profits better at a much faster rate. Without strong regulation, we can’t correct that imbalance, and the processes designed to protect U.S. citizens from exploitation through elected leaders are similarly subverted by funds from these same large companies.”

Craig Spiezle, managing director and trust strategist for Agelight, and chair emeritus for the Online Trust Alliance, said, “Look no further than data privacy and other related issues such as net neutrality. Industry in general has failed to respond ethically in the collection, use and sharing of data. Many of these same leaders have a major play in AI, and I fear they will continue to act in their own self-interests.”

Sam Punnett, futurist and retired owner of FAD Research, commented, “System and application design is usually mandated by a business case, not by ethical considerations. Any forms of regulation or guidelines typically lag technology development by many years. The most concerning applications of AI systems are those being employed for surveillance and societal control.”

An ethics expert who served as an advisor on the UK’s report on “AI in Health care” responded, “I don’t think the tech companies understand ethics at all. They can only grasp it in algorithmic form, i.e., a kind of automated utilitarianism, or via ‘value alignment,’ which tends to use economists’ techniques around revealed preferences and social choice theory. They cannot think in terms of obligation, responsibility, solidarity, justice or virtue. This means they engineer out much of what is distinctive about humane ethical thought. In a thought I saw attributed to Hannah Arendt recently, though I cannot find the source, ‘It is not that behaviourism is true, it is more that it might become true: That is the problem.’ It would be racist to say that in some parts of the world AI developers care less about ethics than in others; more likely, they care about different ethical questions in different ways. But underlying all that is that the machine learning models used are antithetical to humane ethics in their mode of operation.”

Nathalie Maréchal, senior research analyst at Ranking Digital Rights, observed, “Until the development and use of AI systems is grounded in an international human rights framework, and until governments regulate AI following human rights principles and develop a comprehensive system for mandating human rights impact assessments, auditing systems to ensure they work as intended and holding violating entities to account, ‘AI for good’ will continue to be an empty slogan.”

Mark Maben, a general manager at Seton Hall University, wrote, “It is simply not in the DNA of our current economic and political system to put the public good first. If the people designing, implementing, using and regulating AI are not utilizing ethical principles focused primarily on the public good, they have no incentive to create an AI-run world that utilizes those principles. Having AI that is designed to serve the public good above all else can only come about through intense public pressure. Businesses and politicians often need to be pushed to do the right thing. Fortunately, the United States appears to be at a moment where such pressure and change [are] possible, if not likely. As someone who works with Gen Z nearly every day, I have observed that many members of Gen Z think deeply about ethical issues, including as they relate to AI. This generation may prove to be the difference makers on whether we get AI that is primarily guided by ethical principles focused on the public good.”

Arthur Bushkin, writer, philanthropist and social activist, said, “I worry that AI will not be driven by ethics, but rather by technological efficiency and other factors.”

Dharmendra K. Sachdev, a telecommunications pioneer and founder-president of Spacetel Consultancy LLC, wrote, “My simplistic definition is that AI can be smart; in other words, like the human mind, it can change directions depending upon the data collected. The question often debated is this: Can AI outsmart humans? My simplistic answer: Yes, in some humans but not the designer. A rough parallel would be: Can a student outsmart his professor? Yes, of course yes, but he may not outsmart all professors in his field. Summarizing my admittedly limited understanding is that all software is created to perform a set of functions. When you equip it with the ability to change course depending upon data, we call it AI. If I can make it more agile than my competition, my AI can outsmart him.”

Karen Yesinkus, a creative and digital services professional, observed, “I would like to believe that AI being used ethically by 2030 will be in place. However, I don’t think that will likely be a sure thing. Social media, human resources, customer services, etc. platforms [have] and will have continuing issues to iron out (bias issues especially). Given the existing climate politically on a global scale, it will take more than the next 10 years for AI to shake off such bias.”

Marc H. Noble, a retired technology developer/administrator, wrote, “Although I believe most AI will be developed for the benefit of mankind, my great concern is that you only need one bad group to develop AI for the wrong reasons to create a potential catastrophe. Despite that, AI should be explored and developed, however, with a great deal of caution.”

Eduardo Villanueva-Mansilla, associate professor of communications at Pontificia Universidad Catolica, Peru, predicted, “Public pressure will be put upon AI actors. However, there is a significant risk that the agreed [-upon] ethical principles will be shaped too closely to the societal and political demands of the developed world. They will not consider the needs of emerging economies or local communities in the developing world.”

Garth Graham, a longtime leader of Telecommunities Canada, said, “The drive in governance worldwide to eradicate the public good in favour of market-based approaches is inexorable. The drive to implement AI-based systems is not going to place the public good as a primary priority. For example, existing Smart City initiatives are quite willing to outsource the design and operation of complex adaptive systems that learn as they operate civic functions, not recognizing that the operation of such systems is replacing the functions of governance.”

The AI genie is already out of the bottle; abuses are already occurring, and some are barely visible and hard to remedy

A share of these experts note that AI applications designed with little or no attention to ethical considerations are already deeply embedded across many aspects of human activity, and they are generally invisible to the people they affect. These respondents said algorithms are at work in systems that are opaque at best and impossible to dissect at worst. They argue that it is highly unlikely that ethical standards can or will be applied in this setting. Others also point out that there is a common dynamic that plays out when new technologies sweep through societies: Abuses occur first and then remedies are attempted. It is hard to program algorithm-based digital tools in a way that predicts, addresses and averts all problems. Most problems remain unknown until they are recognized, sometimes long after the tools that cause them have been produced, distributed and put into active use.

Henning Schulzrinne, Internet Hall of Fame member and former chief technology officer for the Federal Communications Commission, said, “The answer strongly depends on the shape of the government in place in the country in the next few years. In a purely deregulatory environment with strong backsliding toward law-and-order populism, there will be plenty of suppliers of AI that will have little concern about the fine points of AI ethics. Much of that AI will not be visible to the public – it will be employed by health insurance companies that are again free to price-discriminate based on preexisting conditions, by employers looking for employees who won’t cause trouble, by others who will want to nip any unionization efforts in the bud, by election campaigns targeting narrow subgroups.”

Jeff Johnson, a professor of computer science, University of San Francisco, who previously worked at Xerox, HP Labs and Sun Microsystems, responded, “The question asks about ‘most AI systems.’ Many new applications of AI will be developed to improve business operations. Some of these will be ethical and some will not be. Many new applications of AI will be developed to aid consumers. Most will be ethical, but some won’t be. However, the vast majority of new AI applications will be ‘dark,’ i.e., hidden from public view, developed for military or criminal purposes. If we count those, then the answer to the question about ‘most AI systems’ is without a doubt that AI will be used mostly for unethical purposes.”

John Harlow, smart cities research specialist at the Engagement Lab @ Emerson College, predicted, “AI will mostly be used in questionable ways in the next decade. Why? That’s how it’s been used thus far, and we aren’t training or embedding ethicists where AI is under development, so why would anything change? What gives me the most hope is that AI dead-ends into known effective use cases and known ‘impossibilities.’ Maybe AI can be great at certain things, but let’s dispense with areas where we only have garbage in (applications based on any historically biased data). Most AI applications that make a difference in the lives of most people will be in the backend, invisible to them. ‘Wow, the last iOS update really improved predictive text suggestions.’ ‘Oh, my dentist has AI-informed radiology software?’ One of the ways it could go mainstream is through comedy. AI weirdness is an accessible genre, and a way to learn/teach about the technology (somewhat) – I guess that might break through more as an entertainment niche. As for global AI competition, what concerns me is the focus on AI, beating other countries at AI and STEM generally. Our challenges certainly call for rational methods. Yet, we have major problems that can’t be solved without historical grounding, functioning societies, collaboration, artistic inspiration and many other things that suffer from overfocusing on STEM or AI.”

Steve Jones, professor of communication at the University of Illinois at Chicago and editor of New Media and Society, commented, “We’ll have more discussion, more debate, more principles, but it’s hard to imagine that there’ll be – in the U.S. case – a will among politicians and policymakers to establish and enforce laws based on ethical principles concerning AI. We tend to legislate the barn after the horses have left. I’d expect we’ll do the same in this case.”

Andy Opel, professor of communications at Florida State University, said, “Because AI is likely to gain access to a widening gyre of personal and societal data, constraining that data to serve a narrow economic or political interest will be difficult.”

Doug Schepers, a longtime expert in web technologies and founder of Fizz Studio, observed, “As today, there will be a range of deliberately ethical computing, poor-quality inadvertent unethical computing and deliberately unethical computing using AI. Deepfakes are going to be worrisome for politics and other social activities. It will lead to distrustability overall. By themselves, most researchers or product designers will not rigorously pursue ethical AI, just as most people don’t understand or rigorously apply principles of digital accessibility for people with disabilities. It’ll largely be inadvertent oversight, but it will still be a poor outcome. My hope is that best practices will emerge and continue to be refined through communities of practice, much like peer review in science. I also have some hope that laws may be passed that codify some of the most obvious best practices, much like the Americans With Disabilities Act and Section 508 improve accessibility through regulation, while still not being overly onerous. My fear is that some laws will be stifling, like those regarding stem-cell research. Machine learning and AI naturally have the capacity for improving people’s lives in many untold ways, such as computer vision for blind people. This will be incremental, just as commodity computing and increasing internet [access] have improved (and sometimes harmed) people. It will most likely not be a seismic shift, but a drift. One of the darker aspects [is] the existing increase of surveillance capitalism and its use by authoritarian states. My hope is that laws will rein this in.”

Jay Owens, research director at pulsarplatform.com and author of HautePop, said, “Computer science education – and Silicon Valley ideology overall – focuses on ‘what can be done’ (the technical question) without much consideration of ‘should it be done’ (a social and political question). Tech culture would have to turn on its head for ethical issues to become front-and-centre of AI research and deployment; this is vanishingly unlikely. I’d expect developments in machine learning to continue along the same lines they have done so for the last decade – mostly ignoring the ethics question, with occasional bursts of controversy when anything particularly sexist or racist occurs. A lot of machine learning is already (and will continue to be) invisible to people’s everyday lives but creating process efficiencies (e.g., in weather forecasting, warehousing and logistics, transportation management). Other processes that we might not want to be more efficient (e.g., oil and gas exploration, using satellite imagery and geology analysis) will also benefit. I feel positively toward systems where ML and human decision-making are combined (e.g., systems for medical diagnostics). I would imagine machine learning is used in climate modelling, which is also obviously helpful. Chinese technological development cannot be expected to follow Western ethical qualms, and, given the totalitarian (and genocidal) nature of this state, it is likely that it will produce some ML systems that achieve these policing ends. Chinese-owned social apps such as TikTok have already shown racial biases and are likely less motivated to address them. I see no prospect that ‘general AI’ or generalisable machine intelligence will be achieved in 10 years and even less reason to panic about this (as some weirdos in Silicon Valley do).”

Robert W. Ferguson, a hardware robotics engineer at Carnegie Mellon Software Engineering Institute, wrote, “How many times do we need to say it? Unsupervised machine learning is at best incomplete. If supplemented with a published causal analysis, it might recover some credibility. Otherwise, we suffer from what is said by Cathy O’Neil in ‘Weapons of Math Destruction.’ Unsupervised machine learning without causal analysis is irresponsible and bad.”

Michael Richardson, open-source consulting engineer, responded, “In the 1980s, ‘AI’ was called ‘expert systems,’ because we recognized that it wasn’t ‘intelligent.’ In the 2010s, we called it ‘machine learning’ for the same reason. ML is just a new way to build expert systems. They replicate the biases of the ‘experts’ and cannot see beyond them. Is algorithmic trading ethical? Let me rephrase: Does our economy actually need it? If the same algorithm is used to balance ecosystems, does the answer change? We already have AI. They are called ‘corporations.’ Many have pointed this out already. Automation of that collective mind is really what is being referred to. I believe that use of AI in sentencing violates people’s constitutional rights, and I think that it will be stopped as it is realised that it just institutionalises racism.”

A principal architect at a technology company said, “I see no framework or ability for any governing agencies to understand how AI works. Practitioners don’t even know how it works, and they keep the information as proprietary information. Consider how long it took to mandate seat belts or limit workplace smoking, where the cause and effect were so clear. How can we possibly hope to control AI within the next 10 years?”

Global competition, especially between China and the U.S., will matter more to the development of AI than any ethical issues

A number of these respondents framed their answers around the “arms race” dynamic driving the tech superpowers, noting that it instills a damn-the-ethics, full-speed-ahead attitude. Some said there are significant differences in the ethical considerations various nation-states are applying and will apply in the future to AI development. Many pointed to the U.S. and China as the leading competitors in the nation-state arms race.

Daniel Farber, author, historian and professor of law at the University of California-Berkeley, responded, “There’s enormous uncertainty. Why? First of all, China. That’s a huge chunk of the world, and there’s nothing in what I see there right now to make me optimistic about their use of AI. Second, AI in the U.S. is mostly in the hands of corporations whose main goal is naturally to maximize profits. They will be under some pressure to incorporate ethics both from the public and employees, which will be a moderating influence. The fundamental problem is that AI is likely to be in the hands of institutions and people that already have power and resources, and that will inevitably shape how the technology is used. So, I worry that it will simply reinforce or increase current power imbalances. What we need is not only ethical AI but ethical access to AI, so that individuals can use it to increase their own capabilities.”

J. Scott Marcus, an economist, political scientist and engineer who works as a telecommunications consultant, wrote, “Policy fragmentation globally will get in the way. As long as most AI investment is made in the U.S. and China, no consensus is possible. The European Union will attempt to bring rules into play, but it is not clear if they can drive much change in the face of the U.S. and China rivalry. The U.S. (also Japan) are large players in consumption but not so large in production of many aspects. They are larger, however, in IoT and robotics, so maybe there is more hope there. For privacy, the European Union forced a fair degree of global convergence thanks to its large purchasing power. It is not clear whether that can work for AI.”

Charles M. Ess, a professor of media studies at the University of Oslo whose expertise is in information and computing ethics, commented, “The most hope lies in the European Union and related efforts to develop ‘ethical AI’ in both policy and law. Many first-rate people and reasonably solid institutions are working on this, and, in my view, some promising progress is being made. But the EU is squeezed between China and the U.S. as the world leaders, neither of which can be expected to take what might be called ethical leadership. China is at the forefront of exporting the technologies of ‘digital authoritarianism.’ Whatever important cultural caveats may be made about a more collective society finding these technologies of surveillance and control positive as they reward pro-social behavior – the clash with the foundational assumptions of democracy, including rights to privacy, freedom of expression, etc. is unavoidable and unquestionable.

“For its part, the U.S. has a miserable record (at best) of attempting to regulate these technologies – starting with computer law from the 1970s that categorizes these companies as carriers, not content providers, and thus not subject to regulation that would include attention to freedom of speech issues, etc. My prediction is that Google and its corporate counterparts in Silicon Valley will continue to successfully argue against any sort of external regulation or imposition of standards for an ethical AI, in the name of having to succeed in the global competition with China. We should perhaps give Google in particular some benefit of the doubt and see how its recent initiatives in the direction of ethical AI in fact play out. But 1) what I know first-hand to be successful efforts at ethics-washing by Google (e.g., attempting to hire in some of its more severe and prominent ethical critics in the academy in order to buy their silence), and 2) given its track record of cooperation with authoritarian regimes, including China, it’s hard to be optimistic here.

“Of course, we will see some wonderfully positive developments and improvements – perhaps in medicine first of all. And perhaps it’s okay to have recommender systems to help us negotiate, e.g., millions of song choices on Spotify. But even these applications are subject to important critique, e.g., under the name of ‘the algorithmization of taste’ – the reshaping of our tastes and preferences is influenced by opaque processes driven by corporate interests in maximizing our engagement and consumption, not necessarily helping us discover liberating and empowering new possibilities. More starkly, especially if AI and machine-learning techniques remain black-boxed and unpredictable, even to those who create them (which is what AI and ML are intended to do, after all), I mostly see a very dark and nightmarish future in which more and more of our behaviors are monitored and then nudged by algorithmic processes we cannot understand and thereby contest. The starkest current examples are in the areas of so-called ‘predictive policing’ and related efforts to replace human judgment with machine-based ‘decision-making.’ As Mireille Hildebrandt has demonstrated, when we can no longer contest the evidence presented against us in a court of law – because it is gathered and processed by algorithmic processes even its creators cannot clarify or unpack – that is the end of the modern practices of law and democracy. It’s clearly bad enough when these technologies are used to sort out human beings in terms of their credit ratings: Relying on these technologies for judgments/decisions about who gets into what educational institution, who does and does not deserve parole, and so on seem to me to be a staggeringly nightmarish dystopian future.

“Again, it may be a ‘Brave New World’ of convenience and ease, at least as long as one complies with the behaviors determined to be worth positive reward, etc. But to use a different metaphor – one perhaps unfamiliar to younger generations, unfortunately – we will remain the human equivalent of Skinner pigeons in nice and comfortable Skinner cages, wired carefully to maximize desired behaviors via positive reinforcement, if not discouraging what will be defined as undesirable behaviors via negative reinforcement (including force and violence) if need be.”

Adam Clayton Powell III, senior fellow at the USC Annenberg Center on Communication Leadership and Policy, observed, “By 2030, many will use ethical AI and many won’t. But in much of the world, it is clear that governments, especially totalitarian governments in China, Russia, et seq., will want to control AI within their borders, and they will have the resources to succeed. And those governments are only interested in self-preservation – not ethics.”

Alf Rehn, professor of innovation, design and management at the University of Southern Denmark, said, “There will be a push for ethical AI during the next 10 years, but good intentions alone do not morality make. AI is complicated, as is ethics, and combining the two will be a very complex problem indeed. We are likely to see quite a few clumsy attempts to create ethical AI systems, with the attendant problems. It is also important to take cultural and geopolitical issues into consideration. There are many interpretations of ethics, and people put different value on different values, so that, e.g., a Chinese ethical AI may well function quite differently – and generate different outcomes – from, e.g., a British ethical AI. This is not to say that one is better than the other, just that they may be rather different.”

Sean Mead, senior director of strategy and analytics at Interbrand, wrote, “Chinese theft of Western and Japanese AI technologies is one of the most worrisome ethics issues that we will be facing. We will have ethical issues over both potential biases built into AI systems through the choice or availability of training data and expertise sets and the biases inherent in proposed solutions attempting to counter such problems. The identification systems for autonomous weapons systems will continue to raise numerous ethics issues, particularly as countries deploy land-based systems interacting with people. AI driving social credit systems will have too much power over peoples’ lives and will help vitalize authoritarian systems. AI will enable increased flight from cities into more hospitable and healthy living areas through automation of governmental services and increased transparency of skill sets to potential employers.”

Mark Perkins, an information science professional active in the Internet Society, noted, “AI will be developed by corporations (with government backing) with little respect for ethics. The example of China will be followed by other countries – development of AI by use of citizens’ data, without effective consent, to develop products not in the interest of such citizens (surveillance, population control, predictive policing, etc.). AI will also be developed to implement differential pricing/offers, further enlarging the ‘digital divide.’ AI will be used by both governments and corporations to take nontransparent, nonaccountable decisions regarding citizens. AI will be treated as a ‘black box,’ with citizens having little – if any – understanding of how these systems function, on what basis they make decisions, etc.”

Wendell Wallach, ethicist and scholar at Yale University’s Interdisciplinary Center for Bioethics, responded, “While I applaud the proliferation of ethical principles, I remain concerned about the ability of countries to put meat on the bone. Broad principles do not easily translate into normative actions, and governments will have difficulty enacting strong regulations. Those that do take the lead in regulating digital technologies, such as the EU, will be criticized for slowing innovation, and this will remain a justification for governments and corporations to slow putting in place any strong regulations backed by enforcement. So far, ethics whitewashing is the prevailing approach among the corporate elite. While there are signs of a possible shift in this posture, I remain skeptical while hopeful.”

Pamela McCorduck, writer, consultant and author of several books, including “Machines Who Think,” wrote, “Many efforts are underway worldwide to define ethical AI, suggesting that this is already considered a grave problem worthy of intense study and legal remedy. Eventually, a set of principles and precepts will come to define ethical AI, and I think they will define the preponderance of AI applications. But you can be assured that unethical AI will exist, be practiced and sometimes go unrecognized until serious damage is done. Much of the conflict between ethical and unethical applications is cultural. In the U.S. we would find the kind of social surveillance practiced in China to be not only repugnant – but illegal. It forms the heart of Chinese practices. In the short term, only the unwillingness of Western courts to accept evidence gathered this way (as inadmissible) will protect Western citizens from this kind of thing, including the ‘social scores’ the Chinese government assigns to its citizens as a consequence of what surveillance turns up. I sense more everyday people will invest social capital in their interactions with AIs, out of loneliness or for other reasons. This is unwelcome to me, but then I have a wide social circle. Not everybody does, and I want to resist judgment here.”

An architect of practice specializing in AI for a major global technology company said, “The European Union has the most concrete proposals, and I believe we will see their legislation in place within three years. My hope is that we will see a ripple effect in the U.S. like we did from GDPR – global companies had to comply with GDPR, so some good actions happened in the U.S. as a result. … We may be more likely to see a continuation of individual cities and states imposing their own application-specific laws (e.g., facial-recognition technology limits in Oakland, Boston, etc.). The reasons I am doubtful that the majority of AI apps will be ethical/benefit the social good are:

  1. Even the EU’s proposals are limited in what they will require;
  2. China will never limit AI for social benefit over the government’s benefit;
  3. The ability to create a collection of oversight organizations with the budget to audit and truly punish offenders is unlikely.

“I look at the Food and Drug Administration or NTSB [National Transportation Safety Board] and see how those organizations got too cozy with the companies they were supposed to regulate and see their failures. These organizations are regulating products much less complex than AI, so I have little faith the U.S. government will be up to the task. Again, maybe the EU will be better.”

A researcher in bioinformatics and computational biology observed, “Take into account the actions of the CCP [Chinese Communist Party] in China. They have been leading the way recently in demonstrating how these tools can be used in unethical ways. And the United States has failed to make strong commitments to ethics in AI, unlike EU nations. AI and the ethics surrounding its use could be one of the major ideological platforms for the incoming next Cold War. I am most concerned about the use of AI to further invade privacy and erode trust in institutions. I also worry about its use to shape policy in nontransparent, noninterpretable and nonreproducible ways. There is also the risk that some of the large datasets that are fundamental to a lot of decision-making being conducted using AI – from facial recognition, to criminal sentencing, to loan applications – are critically biased and will continue to produce biased outcomes if they are used without undergoing severe audits; issues with transparency compound these problems. Advances to medical treatment using AI run the risk of not being fairly distributed as well.”

Sam Lehman-Wilzig, professor and former chair of communication at Bar-Ilan University, Israel, said, “I am optimistic because the issue is now on the national agenda – scientific, academic and even political/legislative. I want to believe that scientists and engineers are somewhat more open to ‘ethical’ considerations than the usual ‘businesspeople.’ The major concern is what other (nondemocratic) countries might be doing – and whether we should be involved in such an ‘arms race,’ e.g., AI-automated weaponry. Thus, I foresee a move to international treaties dealing with the most worrisome aspects of ‘AI ethics.’”

An economist who works in government responded, “Ethical principles will be developed and applied in democratic countries by 2030, focusing on the public good, global competition and cyber breaches. Other less-democratic countries will be focused more on cyberbreaches and global competition. Nongovernmental entities such as private companies will presumably concentrate on innovation and other competitive responses. AI will have a considerable impact on people, especially regarding their jobs and also regarding their ability to impact the functions controlled by AI. This control and the impact of cybercrimes will be of great concern, and innovation will intrigue.”

Ian Peter, a pioneering internet rights activist, said, “The biggest threats we face are weaponisation of AI and development of AI being restricted within geopolitical alliances. We are already seeing the beginnings of this in actions taken to restrict activities of companies because they are seen to be threatening (e.g., Huawei). More and more developments in this field are being controlled by national interests or trade wars rather than ethical development, and much of the promise which could arise from AI utilisation may not be realised. Ethics is taking a second-row seat behind trade and geopolitical interests.”

Jannick Pedersen, a co-founder, CEO and futurist based in Europe, commented, “AI is the next arms race. Though mainstream AI applications will include ethical considerations, a large amount of AI will be made for profit and be applied in business systems, not visible to the users.”

Marita Prandoni, linguist, freelance writer, editor, translator and research associate with the Shape of History group, predicted, “Ethical uses of AI will dominate, but it will be a constant struggle against disruptive bots and international efforts to undermine nations. Algorithms have proven to magnify bias and engender injustice, so reliance on them for distracting, persuading or manipulating opinion is wrong. What excites me is that advertisers are rejecting platforms that allow for biased and dangerous hate speech and that increasingly there are economic drivers (i.e., corporate powers) that take the side of social justice.”

Gus Hosein, executive director of Privacy International, observed, “Unless AI becomes a competition problem and gets dominated by huge American and Chinese companies, then the chances of ethical AI are low, which is a horrible reality. If it becomes widespread in deployment, as we’ve seen with facial recognition, then the only way to stem its deployment in unethical ways is to come up with clear bans and forced transparency. This is why AI is so challenging. Equally, it’s quite pointless, but that won’t stop us from trying to deploy it everywhere. The underlying data quality and societal issues mean that AI will just punish people in new, different and the same ways. If we continue to be obsessed with innovators and innovation rather than social infrastructure, then we are screwed.”
