Experts Doubt Ethical AI Design Will Be Broadly Adopted as the Norm Within the Next Decade

2. Hopes about developments in ethical AI

Early developments in AI have been of great importance and value to society. Most of the experts responding to this canvassing – both the pessimists and the optimists – expect that it will continue to provide clear benefits to humanity. Those who are hopeful argue that its advance will naturally include mitigating activities, noting that these problems are too big to ignore. They said society will begin to better anticipate potential harms and act to mute them. Among the commonly expressed points:

  • Historically, ethics have evolved as new technologies mature and become embedded in cultures; as problems arise so do adjustments.
  • Fixes are likely to roll out in different ways along different timelines in different domains.
  • Expert panels concerned about ethical AI are being convened in many settings across the globe.
  • Social upheavals arising from AI problems are a force that may drive ethical AI closer to the top of human agendas.
  • Political and judicial systems will be asked to keep abuses in check, and evolving case law will emerge (some experts are concerned this could be a net negative).
  • AI itself can be used to assess AI impacts and hunt down unethical applications.
  • A new generation of technologists whose training has been steeped in ethical thinking will lead the movement toward design that values people and positive progress above profit and power motives, and the public will become wiser about the downsides of being code-dependent.

This section includes hopeful comments about the potential development of ethical AI.

AI advances are inevitable; we will work on fostering ethical AI design

A number of these expert respondents noted breakthroughs that have already occurred in AI and said they imagine a future in which even more applications emerge to help solve problems and make people’s lives easier and safer. They expect that AI design will evolve positively as these tools continue to influence the majority of human lives in mostly positive ways.

They especially focused on the likelihood that there will be more medical and scientific breakthroughs that help people live healthier and more productive lives, and they noted that AI will become increasingly efficient at quickly mastering most tasks. They said AI tools are simply better than humans at pattern recognition and at crunching massive amounts of data. Some said they expect AI will expand positively to augment humans, working in sync as their ally.

Benjamin Grosof, chief scientist at Kyndi, a Silicon Valley start-up aimed at the reasoning and knowledge representation side of AI, wrote, “Some things that give me hope are the following: Most AI technical researchers (as distinguished from business or government deployers of AI) care quite a lot about ethicality of AI. It has tremendous potential to improve productivity economically and to save people effort even when money is not flowing directly by better automating decisions and information analysis/supply in a broad range of work processes. Conversational assistants and question-answering, smarter-workflow and manufacturing robots are some examples where I foresee AI applications making a positive difference in the lives of most people, either indirectly or directly. I am excited by the fact that many national governments are increasing funding for scientific research in AI. I am concerned that so much of that is directed toward military purposes or controlled by military branches of governments.”

Perry Hewitt, chief marketing officer at data.org, responded, “I am hopeful that ‘ethical AI’ will extend beyond the lexicon to the code by 2030. The awareness of the risks gives me the most hope. For example, for centuries we have put white men in judicial robes and trusted them to make the right decisions and pretended that biases, proximity to lunchtime and the case immediately preceding had no effect on the outcome. Scale those decisions with AI and the flaws emerge. And when these flaws are visible, effective regulation can begin. This is the decade of narrow AI – specific applications that will affect everything from the jobs you are shown on LinkedIn to the new sneakers advertised to you on Instagram. Clearly, the former makes more of a difference than the latter for your economic well-being, but in all cases, lives are changed by AI under the hood. Transparency around the use of AI will make a difference as will effective regulation.”

Donald A. Hicks, a professor of public policy and political economy at the University of Dallas whose research specialty is technological innovation, observed, “AI/automation technologies do not assert themselves. They are always invited in by investors, adopters and implementers. They require investments to be made by someone, and those investments are organized around the benefits of costs cut or reduced and/or possibilities for new revenue flows. New technologies that cannot offer those prospects remain on the shelf. So, inevitably, new technologies like AI only proliferate if they are ‘pulled’ into use. This gives great power to users and leads to applications that look to be beneficial to a widening user base. This whole process ‘tilts’ toward long-term ethical usage. I know of no technology that endured while delivering unwanted/unfair outcomes broadly. Consider our nation’s history with slavery. Gradually and inevitably, as the agricultural economies of Southern states became industrialized, it no longer made sense to use slaves. It was not necessary to change men’s hearts before slavery could be abandoned. Economic outcomes mattered more, although eventually hearts did follow.

“But again, the transitions between old and new do engender displacements via turnover and replacement, and certain people and places can feel – and are – harmed by such technology-driven changes. But their children are likely thankful that their lives are not simply linear extensions of the lives of their parents. To date, AI and automation have had their greatest impacts in augmenting the capabilities of humans, not supplanting them. The more sophisticated AI applications and automation become, the more we appreciate the special capabilities in human beings that are of ultimate value and that are not likely to be captured by even the most sophisticated software programs. I’m bullish on where all of this is leading us because I’m old enough to compare today with yesterday.”

Jim Spohrer, director of cognitive open technologies and the AI developer ecosystem at IBM, noted, “The Linux Foundation Artificial Intelligence Trusted AI Committee is working on this. The members of that community are taking steps to put principles in place and collect examples of industry use cases. The contribution into Linux Foundation AI (by major technology companies) of the open-source project code for Trusted AI for AI-Fairness, AI-Robustness and AI-Explainability on which their products are based is a very positive sign.”
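
For readers unfamiliar with these toolkits: AI Fairness 360 (aif360) is one of the open-source Trusted AI projects contributed to the Linux Foundation that Spohrer describes. Below is a minimal sketch of the kind of check it supports; the six-row hiring table and its group labels are invented purely for illustration.

```python
# Minimal sketch of a fairness check using the open-source aif360 library
# (one of the Linux Foundation "Trusted AI" projects Spohrer mentions).
# The tiny hiring dataset and its group labels are invented for illustration.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

df = pd.DataFrame({
    "group": [1, 1, 1, 0, 0, 0],   # 1 = privileged group, 0 = unprivileged
    "hired": [1, 1, 0, 1, 0, 0],   # 1 = favorable outcome
})

data = BinaryLabelDataset(
    df=df,
    label_names=["hired"],
    protected_attribute_names=["group"],
    favorable_label=1,
    unfavorable_label=0,
)

metric = BinaryLabelDatasetMetric(
    data,
    privileged_groups=[{"group": 1}],
    unprivileged_groups=[{"group": 0}],
)

# Disparate impact: ratio of favorable-outcome rates across groups
# (1.0 means parity). Statistical parity difference: the rate gap
# (0.0 means parity).
print("disparate impact:", metric.disparate_impact())
print("parity difference:", metric.statistical_parity_difference())
```

A disparate-impact ratio near 1.0 and a parity difference near 0.0 would indicate the favorable outcome is distributed evenly across groups; values far from those benchmarks flag a dataset or model for closer review.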

Michael Wollowski, a professor of computer science at Rose-Hulman Institute of Technology and expert in artificial intelligence, said, “It would be unethical to develop systems that do not abide by ethical codes, if we can develop those systems to be ethical. Europe will insist that systems will abide by ethical codes. Since Europe is a big market, since developing systems that abide by ethical code is not a trivial endeavor and since the big tech companies (except for Facebook) by and large want to do good (well, their employees by and large want to work for companies that do good), they will develop their systems in a way that they abide by ethical codes. I very much doubt that the big tech companies are interested (or are able to find young guns) in maintaining an unethical version of their systems.

“AI systems, in concert with continued automation, including the Internet of Things, will bring many conveniences. Think along the lines of personal assistants [who] manage various aspects of people’s lives. Up until COVID-19, I would have been concerned about bad actors using AI to do harm. I am sure that right now bad actors are probably hiring virologists to develop viruses with which they can hold the world hostage. I am very serious that rogue leaders are thinking about this possibility. The AI community in the U.S. is working very hard to establish a few large research labs. This is exciting, as it enables the AI community to develop and test systems at scale. Many good things will come out of those initiatives. Finally, let us not forget that AI systems are engineered systems. They can do many interesting things, but they cannot think or understand. While they can be used to automate many things and while people by and large are creatures of habit, it is my fond hope that we will rediscover what it means to be human.”

Paul Epping, chairman and co-founder of XponentialEQ and well-known keynote speaker on exponential change, wrote, “The power of AI and machine learning (and deep learning) is underestimated. The speed of advancements is incredible and will lead to automating of virtually all processes (blue- and white-collar jobs). In health care: Early detection of diseases, fully AI-driven triage, including info from sensors (on or inside your body), leading to personalised health (note: not personalised medicine). AI will help to compose the right medication for you – and not the generic stuff that we get today, surpassing what the pharmaceutical industry is doing. AI is helping to solve the world’s biggest problems, finding new materials, running simulations, digital twins (including personal digital twins that can be used to run experiments in case of treatments). My biggest concern: How are we going to solve the control problem? (Read Stuart Russell’s ‘Human Compatible’) and follow the Future of Life Institute and the problem of biased data and algorithms.)”

Adel Elmaghraby, a leader in IEEE and professor and former chairman of the Computer Engineering and Computer Science Department at the University of Louisville, responded, “Societal pressure will be a positive influence for adoption of ethical and transparent approaches to AI. However, the uncomfortable greed for political and financial benefit will need to be reined in.”

Gregory Shannon, chief scientist at the CERT software engineering institute at Carnegie Mellon University, said, “There will be lots of unethical applications as AI matures as an engineering discipline. I expect that to improve. Just like there are unethical uses of technology today, there will be for AI. AI provides transformative levels of efficiency for digesting information and making pretty good decisions. And some will certainly exploit that in unethical ways. However, the ‘demand’ from the market (most of the world’s population) will be for ethical AI products and services. It will be bumpy, and in 2030 we might be halfway there. The use of AI by totalitarian and authoritarian governments is a clear concern. But I don’t expect the use of such tech to overcome the yearning of populations for agency in their lives, at least after a few decades of such repression. Unethical systems/solutions are not trustworthy. So, they can only have narrow application. Ethical systems/solutions will be more widely adopted, eventually.”

Robert D. Atkinson, president of the Information Technology and Innovation Foundation, wrote, “The real question is not whether all AI developers sign up to some code of principles, but rather whether most AI applications work in ways that society expects them to, and the answer to that question is almost 100% ‘yes.’”

Ben Shneiderman, distinguished professor of computer science and founder of the Human-Computer Interaction Lab at the University of Maryland, commented, “While technology raises many serious problems, efforts to limit malicious actors should eventually succeed and make these technologies safer. The huge interest in ethical principles for AI and other technologies is beginning to shift attention toward practical steps that will produce positive change. Already the language of responsible and human-centered AI is changing the technology, guiding students in new ways and reframing the work of researchers. … I foresee improved appliance-like and tele-operated devices with highly automated systems that are reliable, safe and trustworthy. Shoshana Zuboff’s analysis in her book ‘Surveillance Capitalism’ describes the dangers and also raises awareness enough to promote some changes. I believe the arrival of independent oversight methods will help in many cases. Facebook’s current semi-independent oversight board is a small step forward, but changing Facebook’s culture and Zuckerberg’s attitudes is a high priority for ensuring better outcomes. True change will come when corporate business choices are designed to limit the activity of malicious actors – criminals, political operatives, hate groups and terrorists – while increasing user privacy.”

Carol Smith, a senior research scientist in human–machine interaction at Carnegie Mellon University’s Software Engineering Institute, said, “There are still many lessons to be learned with regard to AI and very little in the way of regulation to support human rights and safety. I’m hopeful that the current conversations about AI ethics are being heard, and that, as we see tremendous misuse and abuse of these systems, the next generation will be much more concerned about ethical implications. I’m concerned that many people, organizations and governments see only monetary gain from unethical applications of these technologies and will continue to misuse and abuse data and AI systems for as long as they can. AI systems short-term will continue to replace humans in dull, dirty, dangerous and dear work. This is good for overall safety and quality of life but is bad for family livelihoods. We need to invest in making sure that people can continue to contribute to society when their jobs are replaced. Longer-term, these systems will begin to make many people more efficient and effective at their jobs. I see AI systems improving nearly every industry and area of our lives when used properly. Humans must be kept in the loop with regard to decisions involving people’s lives, quality of life, health and reputation, and humans must be ultimately responsible for all AI decisions and recommendations (not the AI system).”

Marvin Borisch, chief technology officer at RED Eagle Digital based in Berlin, wrote, “When used for the greater good, AI can and will help us fight a lot of human problems in the next decade. Prediagnostics, fair ratings for insurance or similar, supporting humans in space and other exploration and giving us theoretical solutions for economic and ecological problems – these are just a few examples of how AI is already helping us and can and will help us in the future. If we focus on solving specific human problems or using AI as a support for human work instead of replacing human work, I am sure that we can and will tackle any problem. What worries me the most is that AI developers are trying to trump each other – not for the better use but for the most media-worthy outcome in order to impress stakeholders and potential investors.”

Tim Bray, well-known technology leader who has worked for Amazon, Google and Sun Microsystems, predicted, “Unethical AI-driven behavior will produce sufficiently painful effects that legal and regulatory frameworks will be imposed that make its production and deployment unacceptable.”

Gary M. Grossman, associate director in the School for the Future of Innovation in Society at Arizona State University, responded, “AI will be used in both ethical and questionable ways in the next decade. Such is the nature of the beast, and we are the beasts that will make the ethical choices. I do not think policy alone will be sufficient to ensure that ethical choices are made every time. Like everything else, it will stabilize in some type of compromised structure within the decade time frame the question anticipates.”

Erhardt Graeff, a researcher expert in the design and use of digital technologies for civic and political engagement, noted, “Ethical AI is boring directly into the heart of the machine-learning community and, most importantly, influencing how it is taught in the academy. By 2030, we will have a generation of AI professionals that will see ethics as inseparable from their technical work. Companies wishing to hire these professionals will need to have clear ethical practices built into their engineering divisions and strong accountability to the public good at the top of their org charts. This will certainly describe the situation at the major software companies like Alphabet, Apple, Facebook, Microsoft and Salesforce, whose products are used on a massive scale. Hopefully, smaller companies and those that don’t draw the same level of scrutiny from regulators and private citizens will adopt similar practices and join ethical AI consortia and find themselves staffed with upstanding technologists. One application of AI that will touch nearly all sectors and working people is in human resources and payroll technology. I expect we will see new regulation and scrutiny of those tools and the major vendors that provide them.

“I caveat my hopes for ethical AI with three ways unethical AI will persist.

  1. There will continue to be a market for unethical AI, especially the growing desire for surveillance tools from governments, corporations and powerful individuals.
  2. The democratization of machine learning as APIs, simple libraries and embedded products will allow many people who have not learned to apply this technology in careful ways to build problematic tools and perform bad data analysis for limited, but meaningful distributions that will be hard to hold to account.
  3. A patchwork of regulations across national and international jurisdictions and fights over ethical AI standards will undermine attempts to independently regulate technology companies and their code through auditing and clear mechanisms for accountability.”

Katie McAuliffe, executive director for Digital Liberty, wrote, “There are going to be mistakes in AI, even when companies and coders try their best. We need to be patient with the mistakes, find them and adjust. We need to accept that some mistakes don’t equal failure in the entire system. No, AI will not be used in mostly questionable ways. We are using forms of AI every day already. The thing about AI is that, once it works, we call it something else. With a new name, it’s not as amorphous and threatening. AI and machine learning will benefit us the most in the health context – being able to examine thousands of possibilities and variables in a few seconds, but human professionals will always have to examine the data and context to apply any results. We need to be sure that something like insurance doesn’t affect a doctor or researcher’s readout in these contexts.”

Su Sonia Herring, a Turkish-American internet policy researcher with Global Internet Policy Digital Watch, said, “AI will be used in questionable ways due to companies and governments putting profit and control in front of ethical principles and the public good. Civil society, researchers and institutions who are concerned about human rights give me hope. Algorithmic black boxes, the digital divide, the need to control, surveil and profit off the masses worry me the most. I see AI applications making a difference in people’s lives by taking care of mundane, time-consuming work (while making certain jobs obsolete), helping identify trends and informing public policy. Issues related to privacy, security, accountability and transparency in AI tech concern me, while the potential of processing big data to solve global issues excites me.”

Edson Prestes, a professor of computer science at Federal University of Rio Grande do Sul, Brazil, commented, “By 2030, technology in general will be developed taking into account ethical considerations. We are witnessing a huge movement these days. Most people who have access to information are worried about the misuse of technology and its impact on their own lives. Campaigns to ban lethal weapons powered by AI are growing. Discussions on the role of technology and its impact on jobs are also growing. People are becoming more aware about fake news and proliferation of hate speech. All these efforts are creating a massive channel of information and awareness. Some communities will be left behind, either because some governments want to keep their citizens in poverty and consequently keep them under control, or because they do not have enough infrastructure and human and institutional capacities to reap the benefits of the technological domain. In these cases, efforts led by the United Nations are extremely valuable. The UN Secretary-General António Guterres is a visionary in establishing the High-Level Panel on Digital Cooperation. Guterres used the panel’s recommendations to create a roadmap with concrete actions that address the digital domain in a holistic way, engaging a wide group of organisations to deal with the consequences emerging from the digital domain.”

James Blodgett, futurist, author and consultant, said, “‘Focused primarily on the public good’ is not enough if the exception is a paperclip maximizer. ‘Paperclip maximizer’ is an improbable metaphor, but it makes the point that one big mistake can be enough. We can’t refuse to do anything – because not to decide is also a decision. The best we can do is to think carefully, pick what seems to be the best plan and execute, perhaps damning [the] torpedoes as part of that execution. But we had better think very carefully and be very careful.”

A futurist and managing principal for a consultancy commented, “AI offers extremely beneficial opportunities, but only if we actively address the ethical principles, regulate and work toward:

  1. Demographically balanced human genome databases,
  2. Gender-balanced human genome databases (especially in the area of clinical drug trials, where women are severely under-tested),
  3. Real rules around the impact of algorithms and machine learning developed from poor data collection or availability. We see this in the use of facial recognition, police data, education data, Western bias in humanities collections, etc.

“AI also has the potential to once again be a job killer, but also to assist the practice of medicine, law enforcement, etc. It can also be a job creator, but countries outside the U.S. are making greater progress on building AI hubs.”

A European business leader argued, “I do believe AI systems will be governed by ethics, but they will be equal parts new legislation and litigation avoidance. If your AI rejects my insurance claim, I can sue to find out why and ensure your learning model wasn’t built on, say, racially biased source data.”

Christina J. Colclough, an expert on the future of work and the politics of technology and ethics in AI, observed, “By 2030, governments will have woken up to the huge challenges AI (semi/autonomous systems, machine learning, predictive analytics, etc.) pose to democracy, legal compliance and our human and fundamental rights. What is necessary is that these ‘ethical principles’ are enforceable and governed. Otherwise, they risk being good intentions with little effect.”

Thomas Birkland, professor of public and international affairs at North Carolina State University, wrote, “AI will be informed by ethical considerations in the coming years because the stakes for companies and organizations making investments in AI are too high. However, I am not sure that these ethical considerations are going to be evenly applied, and I am not sure how carefully these ethical precepts will be adopted. What gives me the most hope is the widespread discussions that are already occurring about ethics in AI – no one will be caught by surprise by the need for an ethical approach to AI. What worries me the most is that the benefits of such systems are likely to flow to the wealthy and powerful. For example, we know that facial-recognition software, which is often grounded in AI, has severe accuracy problems in recognizing ‘nonwhite’ faces. This is a significant problem. AI systems may be able to increase productivity and accuracy in systems that require significant human intervention. I am somewhat familiar with AI systems that, for example, can read x-rays and other scans to look for signs of disease that may not be immediately spotted by a physician or radiologist. AI can also aid in pattern recognition in large sets of social data. For example, AI systems may aid researchers in coding data relating to the correlates of health. What worries me is the uncritical use of AI systems without human intervention. There has been some talk, for example, of AI applications to warfare – do we leave weapons targeting decisions to AI? This is a simplistic example, but it illustrates the problem of ensuring that AI systems do not replace humans as the ultimate decision-maker, particularly in areas where there are ethical considerations. All this being said, the deployment of AI is going to be more evolutionary than revolutionary, and the effects on our daily lives will be subtle and incremental over time.”

Jerome C. Glenn, co-founder and CEO of the futures-research organization The Millennium Project, wrote, “There were few discussions about the ethical issues in the early spread of the internet in the 1970s and 1980s. Now there are far, far more discussions about AI around the world. However, most do not make clear distinctions among narrow, general and super AI. If we don’t get our standards right in the transition from artificial narrow intelligence to artificial general intelligence, then the emergence of super AI from general AI could have the consequences science fiction has warned about.”

Benjamin Kuipers, a professor of computer science and engineering at the University of Michigan known for research in qualitative simulation, observed, “I choose to believe that things will work out well in the choice we face as a society. I can’t predict the likelihood of that outcome. Ethics is the set of norms that society provides, telling individuals how to be trustworthy, because trust is essential for cooperation, and cooperation is essential for a society to thrive, and even to survive (see Robert Wright’s book ‘Nonzero’). Yes, we need to understand ethics well enough to program AIs so they behave ethically. More importantly, we need to understand that corporations, including nonprofits, governments, churches, etc., are also artificially intelligent entities participating in society, and they need to behave ethically. We also need to understand that we as a society have been spreading an ideology that teaches individuals that they should behave selfishly rather than ethically. We need an ethical society, not just ethical AI. But AI gives us new tools to understand the mind, including ethics.”

John Verdon, a retired complexity and foresight consultant, said, “Ultimately what is most profitable in the long run is a well-endowed citizenry able to pursue their curiosities and expand their agency. To enable this condition will require the appropriate legislative protections and constraints. The problems of today and the future are increasing in complexity. Any systems that seek monopolistic malevolence essentially will act like a cancer killing its own host. Distributed-ledger technologies may well enable the necessary ‘accounting systems’ to both credit creators and users of what has been created while liberating creations to be used freely (like both free beer and liberty). This enables a capacity to unleash all human creativity to explore the problem and possibility space. Slaves don’t create a flourishing society – only a static and fearful elite increasingly unable to solve the problems they create. New institutions like an ‘auditor general of algorithms’ (to oversee that algorithms and other computations actually produce the results they intend, and to offer ways to respond and correct) will inevitably arise – just like our other institutions of oversight.”

James Morris, professor of computer science at Carnegie Mellon, wrote, “I had to say ‘yes.’ The hope is that engineers get control away from capitalists and rebuild technology to embody a new ‘constitution.’ I actually think that’s a longshot in the current atmosphere. Ask me after November. If the competition between the U.S. and China becomes zero-sum, we won’t be able to stop a rush toward dystopia.”

J. Francisco Álvarez, professor of logic and philosophy of science at UNED, the National University of Distance Education in Spain, commented, “Concerns about the use of AI and its ethical aspects will be very diverse and will produce very uneven effects between public good and a set of new, highly marketable services. We will have to expand the spheres of personal autonomy and the recognition of a new generation of rights in the digital society. It is not enough to apply ethical codes in AI devices. Instead, a new ‘constitution’ must be formulated for the digital age and its governance.”

Aaron Chia Yuan Hung, assistant professor of educational technology at Adelphi University, said, “The use of AI now for surveillance and criminal justice is very problematic. The AI can’t be fair if it is designed based on or drawing from the data collected from a criminal justice system that is inherently unjust. The fact that some people are having these conversations makes me think that there is positive potential. Humans are not the best at decision-making. We have implicit bias. We have cognitive biases. We are irrational (in the behavioral economics sense). AI can correct that or at least make it visible to us so that we can make better decisions. Most people are wary enough of AI systems not to blindly adopt another country’s AI system without a lot of scrutiny. Hopefully that allows us to remain vigilant.”

Moira de Roche, chair of the International Federation of Information Processing’s professional practice sector, wrote, “There is a trend toward ethics, especially in AI applications. AI will continue to improve people’s lives in ways we cannot even anticipate presently. Pretty much every technology we use on a day-to-day basis employs AI (email, mobile phones, etc.). In fact, it worries me that AI is seen as something new, whereas we have used it on a daily basis for a decade or more. Perhaps the conversation should be more about robotics and automation than AI, per se. I am concerned that there are so many codes of ethics. There should not be so many (at present there are several). I worry that individuals will choose the code they like the best – which is why a plethora of codes is dangerous.”

Nigel Cameron, president emeritus at the Center for Policy on Emerging Technologies, commented, “The social and economic shifts catalyzed by the COVID plague are going to bring increasing focus to our dependence on digital technologies, and with that focus will likely come pressure for algorithmic transparency and concerns over equity and so forth. Antitrust issues are highly relevant, as is the current pushback against China and, in particular, Huawei (generally, I think a helpful response).”

Peter B. Reiner, professor of neuroethics at the University of British Columbia, said, “As AI-driven applications become ever more entwined in our daily lives, there will be substantial demand from the public for what might be termed ‘ethical AI.’ Precisely how that will play out is unknown, but it seems unlikely that the present business model of surveillance capitalism will hold, at least not to the degree that it does today. I expect that clever entrepreneurs will recognize opportunities and develop new, disruptive business models that can be marketed both for the utility of the underlying AI and the ethics that everyone wishes to see put into place. An alternative is that a new regulatory regime emerges, constraining AI service providers and mandating ethical practice.”

Ronnie Lowenstein, a pioneer in interactive technologies, noted, “AI and the related integration of technologies hold the potential of altering lives in profound ways. I fear the worst but have hopes. Two things that bring me hope:

  1. Increased civic engagement of youth all over the world – not only do I see youth as hope for the future, but seeing more people listening to youth encourages me that the adults are re-examining their beliefs and assumptions so necessary for designing transformative policies and practices.
  2. The growth of futures/foresight strategies as fostered by The Millennium Project.”

Peter Dambier, a longtime Internet Engineering Task Force participant based in Germany, said, “Personal AI must be as personal as your underwear. No spying, no malware. AI will develop like humans and should have rights like humans. I do not continue visiting a doctor I do not trust. I do not allow anything or anybody I do not trust to touch my computer. Anything that is not open-source I do not trust. Most people should learn informatics and have one person in the family who understands computers.”

Ray Schroeder, associate vice chancellor of online learning, University of Illinois-Springfield, responded, “One of the aspects of this topic that gives me the most hope is that, while there is the possibility of unethical use of AI, the technology of AI can also be used to uncover those unethical applications. That is, we can use AI to help patrol unethical AI. I see that artificial intelligence will be able to bridge communications across languages and cultures. I see that AI will enable us to provide enhanced telemedicine and agricultural planning. I see that AI will enable us to more clearly predict vulnerabilities and natural disasters so that we can intervene before people are hurt. I am most excited about quantum computing supercharging AI to provide awesome performance in solving our world’s problems. I am further excited about the potential for AI networking to enable us to work across borders to benefit more of the world’s citizens.”

Melissa R. Michelson, a professor of political science at Menlo College, commented, “Because of the concurrent rise of support for the Black Lives Matter movement, I see people taking a second look at the role of AI in our daily lives, as exemplified by the decision to stop police use of facial recognition technology. I am optimistic that our newfound appreciation of racism and discrimination will continue to impact decisions about when and how to implement AI.”

Andrew K. Koch, president and chief operating officer at the John N. Gardner Institute for Excellence in Undergraduate Education, wrote, “If there was a ‘Yes, but’ option, I would have selected it. I am an optimist. But I am also a realist. AI is moving quickly. Self-interested (defined in individual and corporate ways) entities are exploiting AI in dubious and unethical ways now. They will do so in the future. But I also believe that national and global ethical standards will continue to develop and adapt. The main challenge is the pace of evolution for these standards. AI may have to be used to help keep up with adaptation needed for the ethical standards needed for AI systems.”

Anne Collier, editor of Net Family News and founder of The Net Safety Collaborative, responded, “Policymakers, newsmakers, users and consumers will exert and feel the pressure for ethics with regard to tech and policy because of three things:

  1. A blend of the internet and a pandemic has gotten us all thinking as a planet more than ever.
  2. The disruption COVID-19 introduced to business- and governance-as-usual.
  3. The growing activism and power of youth seeking environmental ethics and social justice.

“Populism and authoritarianism in a number of countries certainly threaten that trajectory, but – though seemingly on the rise now – I don’t see this as a long-term threat (a sense of optimism that comes from watching the work of so-called ‘Gen Z’). I wish, for example, that someone could survey a representative sample of Gen Z citizens of the Philippines, Turkey, Brazil, China, Venezuela, Iran and the U.S. and ask them this question, explaining how AI could affect their everyday lives, then publish that study. I believe it would give many other adults a sense of optimism similar to mine.”

Eric Knorr, pioneering technology journalist and editor in chief of International Data Group, the publisher of a number of leading technology journals, commented, “First, only a tiny slice of AI touches ethics – it’s primarily an automation tool to relieve humans of performing rote tasks. Current awareness of ethical issues offers hope that AI will either be adjusted to compensate for potential bias or sequestered from ethical judgment.”

Anthony Clayton, an expert in policy analysis, futures studies and scenario and strategic planning based at the University of the West Indies, said, “Technology firms will come under increasing regulatory pressure to introduce standards (with regard to, e.g., ethical use, error-checking and monitoring) for the use of algorithms when dealing with sensitive data. AI will also enable, e.g., autonomous lethal weapons systems, so it will be important to develop ethical and legal frameworks to define acceptable use.”

Fabrice Popineau, an expert on AI, computer intelligence and knowledge engineering based in France, responded, “I have hope that AI will follow the same path as other potentially harmful technologies before it (nuclear, bacteriological); safety mechanisms will be put in motion to guarantee that AI use stays beneficial.”

Concepcion Olavarrieta, foresight and economic consultant and president of the Mexico node of The Millennium Project, wrote, “Yes, there will be progress:

  1. Ethical issues are involved in most human activities.
  2. The pandemic experience plays into this development.
  3. Societal risk factors will not be attended to.
  4. AI will become core in most people’s lives by 2030.
  5. It is important to assure an income and/or offer a basic income to people.”

Sharon Sputz, executive director of strategic programs at The Data Science Institute at Columbia University, predicted, “In the distant future, ethical systems will prevail, but it will take time.”

A well-known cybernetician and emeritus professor of business management commented, “AI will be used to help people who can afford to build and use AI systems. Lawsuits will help to persuade companies what changes are needed. Companies will learn to become sensitive to AI-related issues.”

A consensus around ethical AI is emerging, and open-source solutions can help

A portion of these experts optimistically say they expect progress toward ethical AI systems. They say there has been intense and widespread activity across all aspects of science and technology development on this front for years, and it is bearing fruit. Some point out that the field of bioethics has already managed to broadly embrace the concepts of beneficence, nonmaleficence, autonomy and justice in its work to encourage and support positive biotech evolution that serves the common good.

Some of these experts expect to see an expansion of the type of ethical leadership already being demonstrated by open-source AI developers, a cohort of highly principled AI builders who take the view that AI should be thoughtfully created in a fairly transparent manner and be sustained and innovated in ways that serve the public well and avoid doing harm. They are hopeful that there is enough energy and brainpower in this cohort to set the good examples that can help steer positive AI evolution across all applications.

Also of note: Over the past few years, tech industry, government and citizen participants have been enlisted to gather in many different and diverse working groups on ethical AI; while some experts in this canvassing see these efforts mostly as public relations window-dressing, others believe they will be effective.

Micah Altman, a social and information scientist at MIT, said, “First, the good news: In the last several years, dozens of major reports and policy statements have been published by stakeholders from across all sectors arguing that the need for ethical design of AI is urgent and articulating general ethical principles that should guide such design. Moreover, despite significant differences in the recommendations of these reports, most share a focused common core of ethical principles. This is progress. And there are many challenges to meaningfully incorporating these principles into AI systems; into the processes and methods that would be needed to design, evaluate and audit ethical AI systems; and into the law, economics and culture of society that is needed to drive ethical design.

“We do not yet know (generally) how to build ethical decision-making into AI systems directly; but we could and should take steps toward evaluating and holding organizations accountable for AI-based decisions. And this is more difficult than the work of articulating these principles. It will be a long journey.”

Henry E. Brady, dean of the Goldman School of Public Policy at the University of California-Berkeley, responded, “There seems to be a growing movement to examine these issues, so I am hopeful that by 2030 most algorithms will be assessed in terms of ethical principles. The problem, of course, is that we know that, in the case of medical experiments, it is a long time from the infamous Tuskegee study to committees for the protection of human subjects. But I think the emergence of AI has actually helped to make clear the inequities and injustices in some of our practices. Consequently, they provide a useful focal point for democratic discussion and action.

“I think public agencies will take these issues very seriously, and mechanisms will be created to improve AI (although the issues pose difficult problems for legislators due to [their] highly technical nature). I am more worried about private companies and their use of algorithms. It is important, by the way, to recognize that a great deal of AI (perhaps all of it) is simply the application of ‘super-charged’ statistical methods that have been known for quite a long time.

“It is also worth remembering that AI is very good at predictions given a fixed and unchanging set of circumstances, but it is not good at causal inference, and its predictions are often based upon proxies for an outcome that may be questionable or unethical.

“Finally, AI uses training sets that often embed practices that should be questioned. A lot of issues in AI concern me. The possibility of ‘deepfakes’ means that reality may become protean and shape-shifting in ways that will be hard to cope with. Facial recognition provides for the possibility of tracking people, which has enormous privacy implications. Algorithms that use proxies and past practice can embed unethical and unfair results. One of the problems with some multilayer AI methods is that it is hard to understand what rules or principles they are using. Hence, it is hard to open up the ‘black box’ and see what is inside.”
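
Brady’s point about proxies can be made concrete. The sketch below – synthetic data, with hypothetical feature names – trains a model that is never shown the protected attribute, yet still produces a gap between groups because a correlated stand-in (here, a ZIP-code-style flag) remains among the features.

```python
# Sketch of the "proxy" problem: dropping a protected attribute does not
# remove bias when a correlated proxy stays in the training data.
# All data below is synthetic and the feature names are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)            # protected attribute; never shown to the model
proxy = np.where(rng.random(n) < 0.9,    # e.g., a ZIP-code flag, 90% aligned with group
                 group, 1 - group)
skill = rng.normal(0.0, 1.0, n)          # a legitimate predictor
# Historical labels embed past practice: group 1 was favored.
y = (skill + 0.8 * group + rng.normal(0.0, 1.0, n) > 0).astype(int)

X = np.column_stack([skill, proxy])      # the protected attribute is excluded
model = LogisticRegression().fit(X, y)

pred = model.predict(X)
print("positive rate, group 1:", pred[group == 1].mean())
print("positive rate, group 0:", pred[group == 0].mean())
# The gap persists: the model reconstructs group membership from the proxy.
```

Omitting a sensitive attribute is therefore no guarantee of fairness; a model can reconstruct it from whatever correlated features remain, which is why auditing outcomes matters more than auditing inputs.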

J. Nathan Matias, an assistant professor at Cornell University and expert in digital governance and behavior change in groups and networks, noted, “Unless there is a widespread effort to halt their deployment, artificial intelligence systems will become a basic part of how people and institutions make decisions. By 2030, a well-understood set of ethical guidelines and compliance checks will be adopted by the technology industry. These compliance checks will assuage critics but will not challenge the underlying questions of conflicting values that many societies will be unable to agree on. By 2030, computer scientists will have made great strides in attempts to engineer fairer, more equitable algorithmic decision-making.

“Attempts to deploy these systems in the field will face legal and policy attacks from multiple constituencies for constituting a form of discrimination. By 2030, scientists will have an early answer to the question of whether it is possible to make general predictions about the behavior of algorithms in society.

“If the behavior and social impacts of artificial intelligence can be predicted and modeled, then it may become possible to reliably govern the power of such systems. If the behavior of AI systems in society cannot be reliably predicted, then the challenge of governing AI will remain a large risk of unknown dimensions.”

Jean Paul Nkurunziza, secretary-general of the Burundi Youth Training Centre, wrote, “The use of AI is still in its infancy. The ethical aspects of that domain are not yet clear. I believe that around 2025 ethical issues about the use of AI may erupt (privacy, the use of AI in violence such as war and order-keeping by police, for instance). I foresee that issues caused by the primary use of AI will bring the community to debate about that, and we will come up with some ethical guidelines around AI by 2030.”

Doris Marie Provine, emeritus professor of justice and social inquiry at Arizona State University, noted, “I am encouraged by the attention that ethical responsibilities are getting. I expect that attention to translate into action. The critical discussion around facial-recognition technology gives me hope. AI can make some tasks easier, e.g., sending a warning signal about a medical condition. But it also makes people lazier, which may be even more dangerous. At a global level, I worry about AI being used as the next phase of cyber warfare, e.g., to mess up public utilities.”

Judith Schoßböck, research fellow at Danube University Krems, said, “I don’t believe that most AI systems will be used in ethical ways. Governments would have to make this a standard, but, due to the pandemic and economic crisis, they might have other priorities. Implementation and making guidelines mandatory will be important. The biggest difference will be felt in the area of bureaucracy. I am excited about AI’s prospects for assisted living.”

Gary L. Kreps, distinguished professor of communication and director of the Center for Health and Risk Communication at George Mason University, responded, “I am guardedly optimistic that ethical guidelines will be used to govern the use of AI in the future. Increased attention to issues of privacy, autonomy and justice in digital activities and services should lead to safeguards and regulations concerning ethical use of AI.”

Michael Marien, director of Global Foresight Books, futurist and compiler of the annual list of the best futures books of the year, said, “We have too many crises right now, and many more ahead, where technology can only play a secondary role at best. Technology should be aligned with the UN’s 17 Sustainable Development Goals and especially concerned about reducing the widening inequality gap (SDG #10), e.g., in producing and distributing nutritious food (SDG #2).”

Ibon Zugasti, futurist, strategist and director with Prospektiker, wrote, “The use of technologies, ethics and privacy must be guaranteed through transparency. What data will be used and what will be shared? There is a need to define a new governance system for the transition from current narrow AI to the future general AI.”

Gerry Ellis, an accessibility and usability consultant, said, “The concepts of fairness and bias are key to ensure that AI supports the needs of all of society, particularly those who are vulnerable, such as many (but not all) persons with disabilities. Overall, AI and associated technologies will be for the good, but individual organizations often do not look beyond their own circumstances and their own profits. One does not need to look beyond sweatshops, dangerous working conditions and poor wages in some industries to demonstrate this. Society and legislation must keep up with technological developments to ensure that the good of society is at the heart of its industrial practices.”

Ethics will evolve and progress will come as different fields show the way

A number of these experts insisted that no technology endures if it broadly delivers unfair or unwanted outcomes. They said technologies that cause harms are adjusted or replaced as, over time, people recognize and work to overcome difficulties to deliver better results. Others said ethics will come to rule at least some aspects of AI but it will perhaps not gain ground until regulatory constraints or other incentives for tech businesses emerge.

A number of respondents made the case that the application of ethics to AI will likely unfold in different ways, depending upon the industry or public arena in which it is deployed. Some say this rollout will depend upon the nature of the data involved. For instance, elaborate ethics regimes have already been developed around the use of health and medical data. Other areas, such as privacy and surveillance systems, have been more contested.

Jon Lebkowsky, CEO, founder and digital strategist at Polycot Associates, wrote, “I have learned from exposure to strategic foresight thinking and projects that we can’t predict the future, but we can determine scenarios that we want to see and work to make those happen. So, I am not predicting that we’ll have ethical AI so much as stating an aspiration – it’s what I would work toward. Certainly, there will be ways to abuse AI/ML/big data, especially in tracking and surveillance. Globally, we need to start thinking about what we think the ethical implications will be and how we can address those within technology development. Given the current state of global politics, it’s harder to see an opportunity for global cooperation, but hopefully the pendulum will swing back to a more reasonable global political framework. The ‘AI for Good’ gatherings might be helpful if they continue. AI can be great for big data analysis and data-driven action, especially where data discretion can be programmed into systems via machine-learning algorithms. Some of the more interesting applications will be in translation; transportation, including aviation; finance; government, including decision support; medicine; and journalism.

“I worry most about uses of AI for surveillance and political control, and I’m a little concerned about genetic applications that might have unintended consequences, maybe because I saw a lot of giant bug movies in the 1950s. I think AI can facilitate better management and understanding of complexity and greater use of knowledge- and decision-support systems. Evolving use of AI for transportation services has been getting a lot of attention and may be the key to overcoming transportation inefficiency and congestion.”

Amar Ashar, assistant director of research at the Berkman Klein Center for Internet & Society, said, “We are currently in a phase where companies, countries and other groups who have produced high-level AI principles are looking to implement them in practice. This application into specific real-world domains and challenges will play out in different ways. Some AI-based systems may adhere to certain principles in a general sense, since many of the terms used in principles documents are broad and defined differently by different actors. But whether these principles meet those definitions or the spirit of how these principles are currently being articulated is still an open question.

“Implementation of AI principles cannot be left to AI designers and developers alone. The principles often require technical, social, legal, communication and policy systems to work in coordination with one another. If implemented without accountability mechanisms, these principles statements are also bound to fail.”

A journalism professor emeritus predicted, “The initial applications, ones easily accepted in society, will be in areas where the public benefit is manifestly apparent. These would include health and medicine, energy management, complex manufacturing and quality control applications. All good and easy to adhere to ethical standards, because they’re either directly helpful to an individual or they make things less expensive and more reliable. But that won’t be the end of it. Unless there are both ethical and legal constraints with real teeth, we’ll find all manner of exploitations in finance, insurance, investing, employment, personal data harvesting, surveillance and dynamic pricing of almost everything from a head of lettuce to real estate. And those who control the AI will always have the advantage – always.

“What most excites me beyond applications in health and medicine are applications in materials science, engineering, energy and resource management systems and education. The ability to deploy AI as tutors and learning coaches could be transformative for equalizing opportunities for educational attainment. I am concerned about using AI to write news stories unless the ‘news’ is a sports score, weather report or some other description of data … My greatest fear, not likely in my lifetime, is that AI eventually is deployed as our minders – telling us when to get up, what to eat, when to sleep, how much and how to exercise, how to spend our time and money, where to vacation, who to socialize with, what to watch or read and then secretly rates us for employers or others wanting to size us up.”

Michael R. Nelson, research associate at CSC Leading Edge Forum, observed, “What gives me hope: Companies providing machine learning and big data services so all companies and governments can apply these tools. Misguided efforts to make technology ‘ethical by design’ worry me. Cybersecurity making infrastructure work better and more safely is an exciting machine-learning application, as are ‘citizen science’ and sousveillance knowledge tools that help me make sense of the flood of data we swim in.”

Edward A. Friedman, professor emeritus of technology management at Stevens Institute of Technology, responded, “AI will greatly improve medical diagnostics for all people. AI will provide individualized instruction for all people. I see these as ethically neutral applications.”

Lee McKnight, associate professor at the Syracuse University School of Information Studies, wrote, “When we say ‘AI,’ most people really mean a wider range of systems and applications, including machine learning, neural networks and natural language processing, to name a few. ‘Artificial general intelligence’ remains, through 2030, the province of science fiction and Elon Musk.

“A wide array of ‘industrial AI’ will in 2023, for example, help accelerate or slow down planes, trains and rocket ships. Most of those industrial applications of AI will be designed by firms, and the exact algorithms used and adapted will be considered proprietary trade secrets – not subject to public review or ethics audit. I am hopeful that smart cities and communities initially – and eventually all levels of public organizations and nonprofits – will write into their procurement contracts requirements that firms commit to an ethical review process for AI applications touching on people directly, such as facial recognition. Further, I expect communities will in their requests for proposals make clear that inability to explain how an algorithm is being used and where the data generated is going/who will control the information will be disqualifying.

“These steps will be needed to restore communities’ trust in smart systems, which was shaken by self-serving initiatives by some of the technology giants trying to turn real communities into company towns. I am excited to see both this clear need and the complexity of developing standards and curricula for ‘certified ethical AI developers,’ which will be a growth area worldwide. How exactly to determine if one is truly ‘certified’ in ethics is obviously an area where the public would laugh in the faces of corporate representatives claiming that their internal ethical training efforts – not publicly disclosed or audited – are sufficient. This will take years to sort out and will require wide public dialogue and new international organizations to emerge. I am excited to help in this effort where I can.”

A professor at university in the U.S. Midwest said, “I anticipate some ethical principles for AI will be adopted by 2030; however, they will not be strong or transparent. Bottom line: Capitalism incentivizes exploitation of resources, and the development of AI and its exploitation of information is no different than any other industry. AI has great potential, but we need to better differentiate its uses. It can help us understand disease and how to treat it but has already inflicted great harms on individuals. As we have seen, AI has also disproportionately impacted those already marginalized – the COMPAS recidivism algorithm and the use of facial-recognition technology by police agencies are examples. The concept of general and narrow AI that Meredith Broussard uses is appropriate. Applied in particular areas, AI is hugely important and will better the lives of most. Other applications are nefarious and should be carefully implemented.”

Mark Monchek, author, keynote speaker and self-described “culture of opportunity strategist,” commented, “In order for ethical principles to prevail, we need to embrace the idea of citizenship. By ‘citizenship,’ I mean a core value that each of us, our families and communities have a responsibility to actively participate in the world that affects us. This means carefully ‘voting’ every day when choosing who we buy from, consume technology from, work for and live with, etc. We would need to be much more proactive in our use of technology, including privacy issues, understanding and consuming media more like we consume food, etc.”

Monica Murero, director of the E-Life International Institute and associate professor in communication and new technologies at the University of Naples Federico II, asked, “Will there be ethical or questionable outcomes? In the next decade (2020-2030), I see both, but I expect AI to become more questionable. I think about AI as an ‘umbrella’ term covering different technologies, techniques and applications that may lead to pretty different scenarios. The real challenge to consider is how AI will be used in combination with other disruptive technologies such as the Internet of Things, 3D printing, cloud computing, blockchain, genomics engineering, implantable devices, new materials and environment-friendly technologies, and new ways to store energy – and how the environment and people will be affected by, and at the same time be part of, the change, physically and mentally for the human race. I am worried about the changes in ‘humans’ and the rise of new inequalities, in addition to the effects on objects and content that will be around us. The question is much broader than ‘ethical,’ and the answers, as a society, should start in a public debate at the international level. We should decide who or what should benefit the most. Many countries and companies are still very behind in this race, and others will take advantage of it. This worries me the most because I do not expect that things will evolve in a transparent and ‘ethical’ manner. I am very much in favor of creating systems of evaluation and regulation that seriously look at the outcomes over time.”
