Artificial Intelligence and the Future of Humans

2. Solutions to address AI’s anticipated negative impacts

A number of participants in this canvassing offered solutions to the worrisome potential future spawned by AI. Among them: 1) improving collaboration across borders and stakeholder groups; 2) developing policies to assure that development of AI will be directed at augmenting humans and the common good; and 3) shifting the priorities of economic, political and education systems to empower individuals to stay ahead in the “race with the robots.”

A number of respondents sketched out overall aspirations:

Andrew Wyckoff, the director of OECD’s directorate for science, technology and innovation, and Karine Perset, an economist in OECD’s digital economy policy division, commented, “Twelve years from now, we will benefit from radically improved accuracy and efficiency of decisions and predictions across all sectors. Machine learning systems will actively support humans throughout their work and play. This support will be unseen but pervasive – like electricity. As machines’ ability to sense, learn, interact naturally and act autonomously increases, they will blur the distinction between the physical and the digital world. AI systems will interconnect and work together to predict and adapt to our human needs and emotions. The growing consensus that AI should benefit society at large leads to calls to facilitate the adoption of AI systems to promote innovation and growth, help address global challenges, and boost jobs and skills development, while at the same time establishing appropriate safeguards to ensure these systems are transparent and explainable, and respect human rights, democracy, culture, nondiscrimination, privacy and control, safety, and security. Given the inherently global nature of our networks and the applications that run across them, we need to improve collaboration across countries and stakeholder groups to move toward common understanding and coherent approaches to key opportunities and issues presented by AI. This is not too different from the post-war discussion on nuclear power. We should also tread carefully toward artificial general intelligence and avoid current assumptions on the upper limits of future AI capabilities.”

Wendy Hall, professor of computer science at the University of Southampton and executive director of the Web Science Institute, said, “By 2030 I believe that human-machine/AI collaboration will be empowering for human beings overall. Many jobs will have gone, but many new jobs will have been created and machines/AI should be helping us do things more effectively and efficiently both at home and at work. It is a leap of faith to think that by 2030 we will have learnt to build AI in a responsible way and we will have learnt how to regulate the AI and robotics industries in a way that is good for humanity. We may not have all the answers by 2030 but we need to be on the right track by then.”

Ian O’Byrne, an assistant professor focusing on literacy and technology at the College of Charleston, said, “I believe in human-machine/AI collaboration, but the challenge is whether humans can adapt our practices to these new opportunities.”

Arthur Bushkin, an IT pioneer who worked with the precursors to the Advanced Research Projects Agency Network (ARPANET) and Verizon, wrote, “The principal issue will be society’s collective ability to understand, manage and respond to the implications and consequences of the technology.”

Daniel Obam, information and communications technology policy advisor, responded, “As we develop AI, the issue of ethical behaviour is paramount. AI will allow authorities to analyse and allocate resources where there is the greatest need. AI will also change the way we work and travel. … Digital assistants that mine and analyse data will help professionals in making concise decisions in health care, manufacturing and agriculture, among others. Smart devices and virtual reality will enable humans to interact with and learn from historical or scientific issues in a clearer manner. Using AI, authorities will be able to prevent crime before it happens. Cybersecurity needs to be at the forefront to prevent unscrupulous individuals from using AI to perpetrate harm or evil on the human race.”

Ryan Sweeney, director of analytics at Ignite Social Media, commented, “Our technology continues to evolve at a growing rate, but our society, culture and economy are not as quick to adapt. We’ll have to be careful that the benefits of AI for some do not further divide those who might not be able to afford the technology. What will that mean for our culture as more jobs are automated? We will need to consider the impact on the current class divide.”

Susan Mernit, executive director of The Crucible and co-founder and board member of Hack the Hood, responded, “If AI is in the hands of people who do not care about equity and inclusion, it will be yet another tool to maximize profit for a few.”

The next three sections of this report focus on solutions most often mentioned by respondents to this canvassing.

Improve human collaboration across borders and stakeholder groups

A number of these experts said ways must be found for people of the world to come to a common understanding of the evolving concerns over AI and digital life and to reach agreement in order to create cohesive approaches to tackling AI’s challenges.

Edson Prestes, a professor and director of robotics at the Federal University of Rio Grande do Sul, responded, “We must understand that all domains (technological or not) have two sides: a good one and a bad one. To avoid the bad one, we need to create and promote a culture of AI/robotics for good. We need to stimulate people to empathize with others. We need to think about potential issues, even if they have only a small probability of happening. We need to be futurists, foreseeing potential negative events and how to circumvent them before they happen. We need to create regulations/laws (at national and international levels) to handle situations that are globally harmful to humans, other living beings and the environment. Applying empathy, we should seriously think about ourselves and others – whether the technology will be useful for us and others and whether it will cause any harm. We cannot develop solutions without considering people and the ecosystem as the central component of development. If we do, the pervasiveness of AI/robotics in the future will diminish any negative impact and create a huge synergy between people and the environment, improving people’s daily lives in all domains while achieving environmental sustainability.”

Adam Nelson, a software developer for one of the “big five” global technology companies, said, “Human-machine/AI collaboration will be extremely powerful, but humans will still control intent. If human governance isn’t improved, AI will merely make the world more efficient. But the goals won’t be human welfare. They’ll be wealth aggregation for those in power.”

Wendy Seltzer, strategy lead and counsel at the World Wide Web Consortium, commented, “I’m mildly optimistic that we will have devised better techno-social governance mechanisms, such that if AI is not improving the lives of humans, we will restrict its uses.”

Jen Myronuk, a respondent who provided no identifying details, said, “The optimist’s view includes establishing and implementing a new type of ISO standard – ‘encoded human rights’ – as a functional data set alongside exponential and advancing technologies. Global human rights and human-machine/AI technology can and must scale together. If applied as an extension of the human experience, human-machine/AI collaboration will revolutionize our understanding of the world around us.”

Fiona Kerr, industry professor of neural and systems complexity at the University of Adelaide, commented, “The answer depends very much on what we decide to do regarding the large questions around ensuring equality of improved global health; by agreeing on what productivity and worth now look like, partly supported by the global wage; through fair redistribution of technology profits to invest in both international and national social capital; through robust discussion of the role of policy in rewarding technologists and businesses that build quality partnerships between humans and AI; and through the growth of understanding of the neurophysiological outcomes of human-human and human-technological interaction, which allows us to best decide what not to technologise, when a human is more effective, and how to ensure we maximise the wonders of technology as an enabler of a human-centric future.”

Benjamin Kuipers, a professor of computer science at the University of Michigan, wrote, “We face several critical choices between positive and negative futures. … Advancing technology will provide vastly more resources; the key decision is whether those resources will be applied for the good of humanity as a whole or if they will be increasingly held by a small elite. Advancing technology will vastly increase opportunities for communication and surveillance. The question is whether we will find ways to increase trust and the possibilities for productive cooperation among people or whether individuals striving for power will try to dominate by decreasing trust and cooperation. In the medium term, increasing technology will provide more powerful tools for human, corporate or even robot actors in society. The actual problems will be about how members of a society interact with each other. In a positive scenario, we will interact with conversational AIs for many different purposes and even when the AI belongs to a corporation we will be able to trust that it takes what in economics is called a ‘fiduciary’ stance toward each of us. That is, the information we provide must be used primarily for our individual benefit. Although we know, and are explicitly told, that our aggregated information is valuable to the corporation, we can trust that it will not be used for our manipulation or our disadvantage.”

Denise Garcia, an associate professor of political science and international affairs at Northeastern University, said, “Humanity will come together to cooperate.”

Charles Geiger, head of the executive secretariat for the UN’s World Summit on the Information Society, commented, “As long as we have a democratic system and a free press, we may counterbalance the possible threats of AI.”

Warren Yoder, longtime director of the Public Policy Center of Mississippi, now an instructor at Mississippi College, optimistically responded, “Human/AI collaborations will … augment our human abilities and increase the material well-being of humanity. At the same time the concomitant increase in the levels of education and health will allow us to develop new social philosophies and rework our polities to transform human well-being. AI increases the disruption of the old social order, making the new transformation both necessary and more likely, though not guaranteed.”

Wangari Kabiru, author of the MitandaoAfrika blog, based in Nairobi, Kenya, commented, “In 2030, advancing AI and tech will not leave most people better off than they are today, because our global digital mission is not strong enough and not principled enough to assure that ‘no, not one is left behind’ – perhaps intentionally. The immense positive-impact potential for enabling people to achieve more in nearly every area of life – the full benefits of human-machine/AI collaboration can only be experienced when academia, civil society and other institutions are vibrant, enterprise is human-values-based, and governments and national constitutions and global agreements place humanity first. … Engineering should serve humanity and never should humanity be made to serve the exploits of engineering. More people MUST be creators of the future of LIFE – the future of how they live, future of how they work, future of how their relationships interact and overall how they experience life. Beyond the coexistence of human-machine, this creates synergy.”

A professor with expertise in AI who is connected to a major global technology company’s AI development projects wrote, “Precision democracy will emerge from precision education, to incrementally support the best decisions we can make for our planet and our species. The future is about sustaining our planet. As with the current development of precision health as the path from data to wellness, so too will artificial intelligence improve the impact of human collaboration and decision-making in sustaining our planet.”

Some respondents argued that individuals must do better at taking a more active role in understanding and implementing the decision-making options available to them in these complex, code-dependent systems.

Kristin Jenkins, executive director of BioQUEST Curriculum Consortium, said, “Like all tools, the benefits and pitfalls of AI will depend on how we use it. A growing concern is the collection and potential uses of data about people’s day-to-day lives. ‘Something’ always knows where we are, the layout of the house, what’s in the fridge and how much we slept. The convenience provided by these tools will override caution about data collection, so strong privacy protection must be legislated and culturally nurtured. We need to learn to be responsible for our personal data and aware of when and how it is collected and used.”

Peng Hwa Ang, professor of communications at Nanyang Technological University and author of “Ordering Chaos: Regulating the Internet,” commented, “AI is still in its infancy. A lot of it is rule-based and not demanding of true intelligence or learning. But even so, I find it useful. My car has lane assistance. I find that it makes me a better driver. When AI is more full-fledged, it will make driving safer and faster. I am using AI for some work I am doing on sentiment analysis. I find that I am able to be more creative in asking questions to be investigated. I expect AI will compel greater creativity. Right now, the biggest fear about AI is that it is a black-box operation – yes, the factors chosen are good and accurate and useful, but no one knows why those criteria are chosen. We know the percentages of the factors, but we do not know the whys. Hopefully, by 2030, the box will be more transparent. That’s on the AI side. On the human side, I hope human beings understand that true AI will make mistakes. If not, it is not real AI. This means that people have got to be ready to catch the mistakes that AI will make. It will be very good. But it will (still) not be foolproof.”
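
Ang’s point about knowing “the percentages of the factors” but not “the whys” can be made concrete: many model families report how much each input mattered without offering any causal explanation. Below is a minimal illustrative sketch in Python; the synthetic data, the hidden rule and the feature names are all invented for illustration:

```python
# A model reports *how much* each factor matters, but not *why* -
# illustrating the black-box concern. Data and names are invented.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))                 # three made-up factors
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # hidden rule the model must learn

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

for name, weight in zip(["lane_position", "speed", "steering_angle"],
                        model.feature_importances_):
    print(f"{name}: {weight:.2f}")  # the "percentages" - with no "whys" attached
```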

An anonymous respondent said, “We should ensure that values (local or global) and basic philosophical theories on ethics inform the development and implementation of AI systems.”

Develop policies to assure that development of AI will be directed at augmenting humans and the common good

Many experts who shared their insights in this study suggested there has to be an overall change in the development, regulation and certification of autonomous systems. They generally said the goal should be values-based, inclusive, decentralized networks “imbued with empathy” that help individuals assure that technology meets social and ethical responsibilities for the common good.

Susan Etlinger, an industry analyst for Altimeter Group and expert in data, analytics and digital strategy, commented, “In order for AI technologies to be truly transformative in a positive way, we need a set of ethical norms, standards and practical methodologies to ensure that we use AI responsibly and to the benefit of humanity. AI technologies have the potential to do so much good in the world: identify disease in people and populations, discover new medications and treatments, make daily tasks like driving simpler and safer, monitor and distribute energy more efficiently, and so many other things we haven’t yet imagined or been able to realize. And – like any tectonic shift – AI creates its own type of disruption. We’ve seen this with every major invention from the Gutenberg press to the invention of the semiconductor. But AI is different. Replication of some human capabilities using data and algorithms has ethical consequences. Algorithms aren’t neutral; they replicate and reinforce bias and misinformation. They can be opaque. And the technology and the means to use it rest in the hands of a select few organizations, at least today.”

Bryan Johnson, founder and CEO of Kernel, a leading developer of advanced neural interfaces, and OS Fund, a venture capital firm, said, “We could start with owning our own digital data and the data from our bodies, minds and behavior, and then follow by correcting our major tech companies’ incentives away from innovation for everyday convenience and toward radical human improvement. As an example of what tech could look like when aligned with radical human improvement, cognitive prosthetics will one day warn individuals away from potential cognitive biases and help correct them – much as cars today have sensors that let you know when you drift off to sleep or change lanes without signaling. This could lead to better behaviors in school, home and work, and encourage people to make better health decisions.”

Marc Rotenberg, executive director of Electronic Privacy Information Center (EPIC), commented, “The challenge we face with the rise of AI is the growing opacity of processes and decision-making. The favorable outcomes we will ignore. The problematic outcomes we will not comprehend. That is why the greatest challenge ahead for AI accountability is AI transparency. We must ensure that we understand and can replicate the outcomes produced by machines. The alternative outcome is not sustainable.”

John C. Havens, executive director of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems and the Council on Extended Intelligence, wrote, “While today people provide ‘consent’ for their data usage, most people don’t understand the depth and breadth of how their information is utilized by businesses and governments at large. Until every individual is provided with a sovereign identity attached to a personal data cloud they control, information won’t truly be shared – just tracked. By utilizing blockchain or similar technologies and adopting progressive ideals toward citizens and their data, as demonstrated by countries like Estonia, we can usher in genuine digital democracy in the age of the algorithm. The other issue underlying the ‘human-AI augmentation’ narrative that is rarely discussed is the economic underpinnings driving all technology manufacturing. Where exponential-growth shareholder models are prioritized, human and environmental well-being diminishes. Multiple reports from people like Joseph Stiglitz point out that while AI will greatly increase GDP in the coming decades, the benefits of these increases will favor the few versus the many. It’s only by adopting ‘Beyond GDP’ or triple-bottom-line metrics that ‘people, planet and profit’ will shape a holistic future between humans and AI.”

Greg Lloyd, president and co-founder at Traction Software, presented a future scenario: “By 2030 AIs will augment access and use of all personal and networked resources as highly skilled and trusted agents for almost every person – human or corporate. These agents will be bound to act in accordance with new laws and regulations that are fundamental elements of their construction much like Isaac Asimov’s ‘Three Laws of Robotics’ but with finer-grain ‘certifications’ for classes of activities that bind their behavior and responsibility for practices much like codes for medical, legal, accounting and engineering practice. Certified agents will be granted access to personal or corporate resources, and within those bounds will be able to converse, take direction, give advice and act like trusted servants, advisers or attorneys. Although these agents will ‘feel’ like intelligent and helpful beings, they will not have any true independent will or consciousness, and must not pretend to be human beings or act contrary to the laws and regulations that bind their behavior. Think Ariel and Prospero.”
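
Lloyd’s “finer-grain certifications” can be read as a capability check: before an agent acts, its certified activity classes are compared against what it is asking to do. A hypothetical sketch follows; the agent names and activity classes are invented and do not reflect any real standard:

```python
# Hypothetical sketch of certification-scoped agent access,
# loosely modeled on Lloyd's scenario; nothing here is a real standard.
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    certifications: set = field(default_factory=set)  # e.g., {"calendar:manage"}

def authorize(agent: Agent, activity: str) -> bool:
    """Permit an action only if the agent holds a certification covering it."""
    return activity in agent.certifications

butler = Agent("household-agent", {"calendar:manage", "shopping:order"})
print(authorize(butler, "calendar:manage"))    # True - within certified bounds
print(authorize(butler, "medical:prescribe"))  # False - outside its certification
```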

Joël Colloc, professor at Université du Havre Normandy University and author of “Ethics of Autonomous Information Systems,” commented, “When AI supports human decisions as a decision-support system, it can help enhance life, health and well-being and supply improvements for humanity. See Marcus Flavius Quintilianus’s principles: Who is doing what, with what, why, how, when, where? Autonomous AI is a power that can be used by powerful persons to control people and put them into slavery. Applying the Quintilian principles to the role of AI … we should propose a code of ethics for AI to verify that each type of application is oriented toward the well-being of the user: 1) do not harm the user; 2) benefits go to the user; 3) do not misuse the user’s freedom, identity and personal data; and 4) decree as unfair any clauses alienating the user’s independence or weakening his or her rights of control over privacy in use of the application. The sovereignty of the user of the system must remain total.”

Joseph Turow, professor of communication at the University of Pennsylvania, wrote, “Whether or not AI will improve society or harm it by 2030 will depend on the structures governing societies of the era. Broadly democratic societies with an emphasis on human rights might encourage regulations that push AI in directions that help all sectors of the nation. Authoritarian societies will, by contrast, set agendas for AI that further divide the elite from the rest and use technology to cultivate and reinforce the divisions. We see both tendencies today; the dystopian one has the upper hand especially in places with the largest populations. It is critical that people who care about future generations speak out when authoritarian tendencies of AI appear.”

Henry E. Brady, dean of the Goldman School of Public Policy at the University of California, Berkeley, wrote, “I believe that policy responses can be developed that will reduce biases and find a way to accommodate AI and robotics with human lives.”

Jennifer King, director of privacy at Stanford Law School’s Center for Internet and Society, said, “Unless we see a real effort to capture the power of AI for the public good, I do not see an overarching public benefit by 2030. The shift of AI research to the private sector means that AI will be developed to further consumption, rather than extend knowledge and public benefit.”

Gary Kreps, distinguished professor of communication and director of the Center for Health and Risk Communication at George Mason University, wrote, “The tremendous potential for AI to be used to engage and adapt information content and computer services to individual users can make computing increasingly helpful, engaging and relevant. However, to achieve these outcomes, AI needs to be programmed with the user in mind. For example, AI services should be user-driven, adaptive to individual users, easy to use, easy to understand and easy for users to control. These AI systems need to be programmed to adapt to individual user requests, learning about user needs and preferences.”

Thomas Streeter, a professor of sociology at the University of Vermont, said, “The technology will not determine whether things are better or worse in 2030; social and political choices will.”

Paul Werbos, a former program director at the National Science Foundation who first described the process of training artificial neural networks through backpropagation of errors in 1974, said, “We are at a moment of choice. The outcome will depend a lot on the decisions of very powerful people who do not begin to know the consequences of the alternatives they face, or even what the substantive alternatives are.”

Divina Frau-Meigs, professor of media sociology at the University of Paris III: Sorbonne Nouvelle and UNESCO chair for sustainable digital development, responded, “The sooner the ethics of AI are aligned with human rights tenets the better.”

Juan Ortiz Freuler, a policy fellow at the World Wide Web Foundation, wrote, “We believe technology can and should empower people. If ‘the people’ are to continue to have a substantive say in how society is run, then the state needs to increase its technical capabilities to ensure proper oversight of these companies. Tech in general and AI in particular will promote the advancement of humanity in every area by allowing processes to scale efficiently, reducing costs and making more services available to more people (including quality health care, mobility, education, etc.). The open question is how these changes will affect power dynamics. To operate effectively, AI requires a broad set of infrastructure components, which are not equally distributed. These include data centers, computing power and big data. What is more concerning is that there are reasons to expect further concentration. On the one hand, data scales well: The upfront (fixed) costs of setting up a data center are large compared to the cost of keeping it running, so the cost of hosting each extra datum is marginally lower than the previous one. Data is the fuel of AI, and therefore whoever gets access to more data can develop more effective AI. On the other hand, AI creates efficiency gains by allowing companies to automate more processes, meaning whoever gets ahead can undercut competitors. This circle fuels concentration. As more of our lives are managed by technology, there is a risk that whoever controls these technologies gets too much power. The benefits in terms of quality of life and the risks to people’s autonomy and control over politics are qualitatively different, and therefore cannot (and should not) be up for trade-offs.”
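
Freuler’s cost argument can be illustrated with toy numbers: when a large fixed cost is spread over ever more data, the average cost per record keeps falling, so whoever is already biggest can undercut everyone else. The figures in the sketch below are invented purely for illustration:

```python
# Toy illustration of the scale economics behind data concentration.
# All cost figures are invented; only the shape of the curve matters.
FIXED_COST = 50_000_000   # hypothetical up-front cost of a data center
MARGINAL_COST = 0.0001    # hypothetical cost of hosting one extra record

for volume in (10**8, 10**10, 10**12):
    avg = (FIXED_COST + MARGINAL_COST * volume) / volume
    print(f"{volume:>16,} records -> average cost ${avg:.6f} per record")
# Average cost falls steeply with scale, which favors incumbents.
```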

Meryl Alper, an assistant professor of communication at Northeastern University and a faculty associate at Harvard University’s Berkman Klein Center for Internet and Society, wrote, “My fear is that AI tools will be used by a powerful few to further centralize resources and marginalize people. These tools, much like the internet itself, will allow people to do this ever more cheaply, quickly and in a far-reaching and easily replicable manner, with exponentially negative impacts on the environment. Preventing this in its worst manifestations will require global industry regulation by government officials with hands-on experience in working with AI tools on the federal, state and local level, and transparent audits of government AI tools by grassroots groups of diverse (in every sense of the term) stakeholders.”

David Wilkins, instructor in computer science at the University of Oregon, responded, “AI must be able to explain the basis for its decisions.”

A top research director and technical fellow at a major global technology company said, “There is a huge opportunity to enhance folks’ lives via AI technologies. The positive uses of AI will dominate as they will be selected for their value to people. I trust the work by industry, academia and civil society to continue to play an important role in moderating the technology, such as pursuing understandings of the potential costly personal, social and societal influences of AI. I particularly trust the guidance coming from the long-term, ongoing One Hundred Year Study on AI and the efforts of the Partnership on AI.”

Peter Stone, professor of computer science at the University of Texas at Austin and chair of the first study panel of the One Hundred Year Study on Artificial Intelligence (AI100), responded, “As chronicled in detail in the AI100 report, I believe that there are both significant opportunities and significant challenges/risks when it comes to incorporating AI technologies into various aspects of everyday life. With carefully crafted industry-specific policies and responsible use, I believe that the potential benefits outweigh the risks. But the risks are not to be taken lightly.”

Anita Salem, systems research and design principal at SalemSystems, warned of a possible dystopian outcome: “Human-machine interaction will result in increasing precision and decreasing human relevance unless specific efforts are made to design in ‘humanness.’ For instance, AI in the medical field will aid more precise diagnosis, will increase surgical precision and will increase evidence-based analytics. If designed correctly, these systems will allow humans to do what they do best – provide empathy, use experience-based intuition and utilize touch and connection as a source of healing. If human needs are left out of the design process, we’ll see a world where humans are increasingly irrelevant and more easily manipulated. We could see increasing under-employment leading to larger wage gaps, greater poverty and homelessness, and increasing political alienation. We’ll see fewer opportunities for meaningful work, which will result in increasing drug and mental health problems and the further erosion of the family support system. Without explicit efforts to humanize AI design, we’ll see a population that is needed for purchasing, but not creating. This population will need to be controlled, and AI will provide the means for this control: law enforcement by drones, opinion manipulation by bots, cultural homogeneity through synchronized messaging, election systems optimized from big data and a geopolitical system dominated by corporations that have benefited from increasing efficiency and lower operating costs.”

Chris Newman, principal engineer at Oracle, commented, “As it becomes more difficult for humans to understand how AI/tech works, it will become harder to resolve inevitable problems. A better outcome is possible with a hard push by engineers and consumers toward elegance and simplicity (e.g., Steve-Jobs-era Apple).”

A research scientist based in North America wrote, “The wheels of legislation, which is a primary mechanism to ensure benefits are distributed throughout society, move slowly. While the benefits of AI/automation will accrue very quickly for the 1%, it will take longer for the rest of the populace to feel any benefits, and that’s ONLY if our representative leaders DELIBERATELY enact STRONG social and fiscal policy. For example, AI will save billions in labor costs – and also cut the bargaining power of labor in negotiations with capital. Any company using AI technologies should be heavily taxed, with that money going into strong social welfare programs like job retraining and federal jobs programs. For another example, any publicly funded AI research should be prevented from being privatized. The public ought to see the reward from its own investments. Don’t let AI follow the pattern of Big Pharma’s exploitation of the public-permitted Bayh-Dole Act.”

Ken Birman, a professor in the department of computer science at Cornell University, responded, “By 2030, I believe that our homes and offices will have evolved to support app-like functionality, much like the iPhone in my pocket. People will customize their living and working spaces, and different app suites will support different lifestyles or special needs. For example, think of a young couple with children, a group of students sharing a home or an elderly person who is somewhat frail. Each would need different forms of support. This ‘applications’ perspective is broad and very flexible. But we also need to ensure that privacy and security are strongly protected by the future environment. I do want my devices and apps linked on my behalf, but I don’t ever want to be continuously spied-upon. I do think this is feasible, and as it occurs we will benefit in myriad ways.”

Martin Geddes, a consultant specializing in telecommunications strategies, said, “The unexpected impact of AI will be to automate many of our interactions with systems where we give consent and to enable a wider range of outcomes to be negotiated without our involvement. This requires a new presentation layer for the augmented reality metaverse, with a new ‘browser’ – the Guardian Avatar – that helps to protect our identity and our interests.”

Lindsey Andersen, an activist at the intersection of human rights and technology for Freedom House and Internews, now doing graduate research at Princeton University, commented, “Already, there is an overreliance on AI to make consequential decisions that affect people’s lives. We have rushed to use AI to decide everything, from what content we see on social media to assigning credit scores to determining how long a sentence a defendant should serve. While often well-intentioned, these uses of AI are rife with ethical and human rights issues, from perpetuating racial bias to violating our rights to privacy and free expression. If we have not dealt with these problems through smart regulation, consumer/buyer education and establishment of norms across the AI industry, we could be looking at a vastly more unfair, polarized and surveilled world in 2030.”
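
One routine safeguard against the harms Andersen describes is a disparate-impact audit: compare a system’s favorable-outcome rates across groups before it is deployed. A minimal sketch follows; the decisions and group labels are fabricated for illustration:

```python
# Minimal disparate-impact check on a decision system's outputs.
# Decisions and group labels are fabricated for illustration.
decisions = [1, 0, 1, 1, 1, 1, 0, 0, 1, 0]   # 1 = favorable (e.g., loan approved)
groups    = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

def approval_rate(group: str) -> float:
    outcomes = [d for d, g in zip(decisions, groups) if g == group]
    return sum(outcomes) / len(outcomes)

ratio = approval_rate("b") / approval_rate("a")
print(f"Group b is approved at {ratio:.0%} of group a's rate")
# A common (and contested) rule of thumb flags ratios below 80%.
```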

Yeseul Kim, a designer for a major South Korean search firm, wrote, “The prosperity generated by AI and its benefits will improve the quality of living for most people only when its ethical implications and social impacts are widely discussed and shared in human society, and only when pertinent regulations and legislation are set up to mitigate the misconduct that could result from AI advancement. If these conditions are met, computers and machines can process data at unprecedented speed and at an unrivaled precision level, and this will improve the quality of life, especially in the medical and health care sectors. It has already been proven and widely shared among medical expert groups that doctors perform better in detecting diseases when they work with AI. Surgical robotics is also progressing, and it will benefit patients by assisting human surgeons, who inevitably face physical limits when they conduct surgery.”

Mark Maben, a general manager at Seton Hall University, wrote, “The AI revolution is, sadly, likely to be dystopian. At present, governmental, educational, civic, religious and corporate institutions are ill-prepared to handle the massive economic and social disruption that will be caused by AI. I have no doubt that advances in AI will enhance human capacities and empower some individuals, but this will be more than offset by the fact that artificial intelligence and associated technological advances will mean far fewer jobs in the future. Sooner than most individuals and societies realize, AI and automation will eliminate the need for retail workers, truck drivers, lawyers, surgeons, factory workers and other professions. In order to ensure that the human spirit thrives in a world run and ruled by AI, we will need to change the current concept of work. That is an enormous task for a global economic system in which most social and economic benefits come from holding a traditional job. We are already seeing a decline in democratic institutions and a rise in authoritarianism due to economic inequality and the changing nature of work. If we do not start planning now for the day when AI results in complete disruption of employment, the strain is likely to result in political instability, violence and despair. This can be avoided by policies that provide for basic human needs and encourage a new definition of work, but the behavior to date by politicians, governments, corporations and economic elites gives me little confidence in their ability to lead us through this transition.”

Eduardo Vendrell, a computer science professor at the Polytechnic University of Valencia in Spain, responded, “These advances will have a noticeable impact on our privacy, since such applications are based on the information we generate through our use of different technologies. … It will be necessary to regulate access to information and its use in a decisive way.”

Yoram Kalman, an associate professor at the Open University of Israel and member of The Center for Internet Research at the University of Haifa, wrote, “The main risk is when communication and analysis technologies are used to control others, to manipulate them or to take advantage of them. These risks are ever-present and can be mitigated through societal awareness and education, and through regulation that identifies entities that become very powerful thanks to a specific technology or technologies, and which use that power to further strengthen themselves. Such entities – be they commercial, political, national, military, religious or any other – have in the past tried and succeeded in leveraging technologies against the general societal good, and that is an ever-present risk of any powerful innovation. This risk should make us vigilant, but it should not keep us from realizing one of the most basic human urges: the drive to constantly improve the human condition.”

Sam Gregory, director of WITNESS and digital human rights activist, responded, “We should assume all AI systems for surveillance and population control and manipulation will be disproportionately used and inadequately controlled by authoritarian and non-democratic governments. These governments and democratic governments will continue to pressure platforms to use AI to monitor for content, and this monitoring, in and of itself, will contribute to the data set for personalization and for surveillance and manipulation. To fight back against this dark future we need to get the right combination of attention to legislation and platform self-governance right now, and we need to think about media literacy to understand AI-generated synthetic media and targeting. We should also be cautious about how much we encourage the use of AI as a solution to managing content online and as a solution to, for example, managing hate speech.”

Jonathan Kolber, futurist, wrote, “My fear is that, by generating AIs that can learn new tasks faster and more reliably than people can, the future economy will have only evanescent opportunities for most people. My hope is that we will begin implementing a sustainable and viable universal basic income, and in particular Michael Haines’ MUBI proposal. (To my knowledge, the only such proposal that is sustainable and can be implemented in any country at any time.) I have offered a critique of alternatives. Given that people may no longer need to depend on their competitive earning power in 2030, AI will empower a far better world. If, however, we fail to implement a market-oriented universal basic income or something equally effective, vast multitudes will become unemployed and unemployable without means to support themselves. That is a recipe for societal disaster.”

Walid Al-Saqaf, senior lecturer at Södertörn University, member of the board of trustees of the Internet Society (ISOC) and vice president of the ISOC Blockchain Special Interest Group, commented, “The challenge is to ensure that the data used for AI procedures is reliable. This entails the need for strong cybersecurity and data integrity. The latter, I believe, can be tremendously enhanced by distributed ledger technologies such as blockchain. I foresee mostly positive results from AI so long as there are enough safeguards to protect against the automated execution of tasks in areas with ethical considerations, such as decisions with life-or-death implications. AI has a lot of potential. It should be used to add to, and not replace, human intellect and judgement.”
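
The data-integrity property Al-Saqaf attributes to distributed ledgers rests on a simple mechanism: each record carries a hash of its predecessor, so altering any past record breaks the chain. A minimal single-machine sketch follows (a real distributed ledger adds replication and consensus on top):

```python
# Minimal hash-chain sketch of the tamper evidence behind ledgers.
import hashlib
import json

def add_block(chain: list, payload: dict) -> None:
    prev = chain[-1]["hash"] if chain else "genesis"
    body = json.dumps({"prev": prev, "payload": payload}, sort_keys=True)
    chain.append({"prev": prev, "payload": payload,
                  "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify(chain: list) -> bool:
    for i, block in enumerate(chain):
        prev = chain[i - 1]["hash"] if i else "genesis"
        body = json.dumps({"prev": prev, "payload": block["payload"]},
                          sort_keys=True)
        if block["prev"] != prev:
            return False
        if block["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
    return True

chain: list = []
add_block(chain, {"reading": 42})
add_block(chain, {"reading": 17})
print(verify(chain))                  # True
chain[0]["payload"]["reading"] = 99   # tamper with history...
print(verify(chain))                  # False - the chain no longer verifies
```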

Danny O’Brien, international director for a nonprofit digital rights group, commented, “I’m generally optimistic about the ability of humans to direct technology for the benefit of themselves and others. I anticipate human-machine collaboration to take place at an individual level, with tools and abilities that enhance our own judgment and actions, rather than this being a power restricted to a few actors. So, for instance, if we use facial-recognition or predictive tools, it will be under the control of an end-user, transparent and limited to personal use. This may require regulation, internal coding restraints or a balance being struck between user capabilities. But I’m hopeful we can get there.”

Fernando Barrio, director of the law program at the Universidad Nacional de Río Negro in Argentina, commented, “The interaction between humans and networked AI could lead to a better future for a big percentage of the population. In order to do so efforts need to be directed not only at increasing AI development and capabilities but also at positive policies to increase the availability and inclusiveness of those technologies. The challenge is not technical; it is sociopolitical.”

Paul Jones, professor of information science at the University of North Carolina at Chapel Hill, responded, “AI as we know it in 2018 is just beginning to understand itself. Like HAL, it will have matured by 2030 into an understanding of its post-adolescent self and of its relationship to humans and to the world. But, also, humans will have matured in our relationship to AI. Like all adolescent relationships there will have been risk taking and regrets and hopefully reconciliation. Language was our first link to other intelligences, then books, then the internet – each a more intimate conversation than the one before. AI will become our link, adviser and to some extent our wise loving companion.”

Jean-Claude Heudin, a professor with expertise in AI and software engineering at the De Vinci Research Center at Pole Universitaire Leonard de Vinci in France, wrote, “Natural intelligence and artificial intelligence are complementary. We need all the intelligence possible for solving the problems yet to come. More intelligence is always better.”

Bryan Alexander, futurist and president of Bryan Alexander Consulting, responded, “I hope we will structure AI to enhance our creativity, to boost our learning, to expand our relationships worldwide, to make us physically safer and to remove some drudgery.”

But some have concerns that the setting of policy could do some damage.

Scott Burleigh, software engineer and intergalactic internet pioneer, wrote, “Advances in technology itself, including AI, always increase our ability to change the circumstances of reality in ways that improve our lives. It also always introduces possible side effects that can make us worse off than we were before. Those effects are realized when the policies we devise for using the new technologies are unwise. I don’t worry about technology; I worry about stupid policy. I worry about it a lot, but I am guardedly optimistic; in most cases I think we eventually end up with tolerable policies.”

Jeff Jarvis, director of the Tow-Knight Center at City University of New York’s Craig Newmark School of Journalism, commented, “What worries me most is worry itself: An emerging moral panic that will cut off the benefits of this technology for fear of what could be done with it. What I fear most is an effort to control not just technology and data but knowledge itself, prescribing what information can be used for before we know what those uses could be. I could substitute ‘book’ for ‘AI’ and the year 1485 (or maybe 1550) for 2030 in your question and it’d hold fairly true. Some thought it would be good, some bad; both end up right. We will figure this out. We always have. Sure, after the book there were wars and other profound disturbances. But in the end, humans figure out how to exploit technologies to their advantage and control them for their safety. I’d call that a law of society. The same will be true of AI. Some will misuse it, of course, and that is the time to identify limits to place on its use – not speculatively before. Many more will use it to find economic, societal, educational and cultural benefit and we need to give them the freedom to do so.”

Some respondents said no matter how society comes together to troubleshoot AI concerns there will still be problems.

Dave Guston, professor of political science and co-director of the Consortium for Science, Policy & Outcomes at Arizona State University, said, “The question asked about ‘most people.’ Most people in the world live a life that is not well regarded by technology, technology developers and AI. I don’t see that changing much in the next dozen years.”

A longtime Silicon Valley communications professional who has worked at several of the top tech companies over the past few decades responded, “AI will continue to improve *if* quality human input is behind it. If so, better AI will support service industries at the top of the funnel, leaving humans to handle interpretation, decisions and applied knowledge. Medical data-gathering for earlier diagnostics comes to mind. Smarter job-search processes, environmental data collection for climate-change actions – these applications all come to mind.”

Hari Shanker Sharma, an expert in nanotechnology and neurobiology at Uppsala University in Sweden, said, “AI has not yet peaked, hence growth will continue, but evil also makes use of such developments. That will bring bigger dangers to mankind. The need will be to balance growth with safety; e.g., social media is both good and bad. The ways to protect against evil mongers are not sufficient. What is needed is the ability to trace an attacker or evil monger in the global village in order to control and punish them. AI will give birth to an artificial human being who could be an angel or a devil. Plan for countering evil at every development stage.”

A changemaker working for digital accessibility wrote, “There is no reason to assume some undefined force will be able to correct for or ameliorate the damage of human nature amplified with power-centralizing technologies. There is no indication that governments will be able to counterbalance power-centralization trends, as governments, too, take advantage of such market failures. The outward dressing of such interactions is probably the least important aspect of it.”

An information-science futurist commented, “I fear that powerful business interests will continue to put profits above all else, closing their eyes to the second- and third-order effects of their decisions. I fear that we do not have the political will to protect and promote the common interests of citizens and democracy. I fear that our technological tools are advancing more quickly than our ability to manage them wisely. I have, however, recently spotted new job openings with titles like ‘Director of Research, Policy and Ethics in AI’ and ‘Architect, AI Ethical Practice’ at major software companies. There are reasons for hope.”

The following one-liners from anonymous respondents also tie into this theme:

  • An open-source technologist in the automotive industry wrote, “We’ll have to have independent AI systems with carefully controlled data access, clear governance and individuals’ right to be forgotten.”
  • A research professor of international affairs at a major university in Washington, D.C., responded, “We have to find a balance between regulations designed to encourage ethical, nondiscriminatory use and transparency, and innovation.”
  • A director for a major regional internet registry said, “The ability of government to properly regulate advanced technologies is not keeping up with the evolution of those technologies. This allows many developments to proceed without sufficient notice, analysis, vetting or regulation to protect the interests of citizens (Facebook being a prime example).”
  • A professor at a major Silicon-Valley-area university said, “If technological advances are not integrated into a vision of holistic, ecologically sustainable, politically equitable social visions, they will simply serve gated and locked communities.”
  • A member of the editorial board of an Association for Computing Machinery journal on autonomous and adaptive systems commented, “By developing ethical AI, we can provide smarter services in daily life, such as collaborating objects that provide on-demand, highly adaptable services supporting daily life activities in any environment.”

Other anonymous respondents commented:

  • “It is essential that policymakers focus on impending inequalities. The central question is for whom will life be better, and for whom will it be worse? Some people will benefit from AI, but many will not. For example, folks on the middle and lower end of the income scale will see their jobs disappear as human-machine/AI collaborations become lower-cost and more efficient. Though such changes could generate societal benefits, they should not be borne on the backs of middle- and low-income people.”
  • “Results will be determined by the capacity of political, criminal justice and military institutions to adapt to rapidly evolving technologies.”
  • “To assure the best future, we need to ramp up efforts in the areas of decentralizing data ownership, education and policy around transparency.”
  • “Most high-end AI know-how is and will be controlled by a few giant corporations unless government or a better version of the United Nations steps in to control and oversee them.”
  • “Political change will determine whether AI technologies will benefit most people or not. I am not optimistic due to the current growth of authoritarian regimes and the growing segment of the super-rich elite who derive disproportionate power over the direction of society from their economic dominance.”
  • “Mechanisms must be put in place to ensure that the benefits of AI do not accrue only to big companies and their shareholders. If current neo-liberal governance trends continue, the value-added of AI will be controlled by a few dominant players, so the benefits will not accrue to most people. There is a need to balance efficiency with equity, which we have not been doing lately.”

Shift the priorities of economic, political and education systems to empower individuals to stay ahead in the ‘race with the robots’

A share of these experts suggest that the creation of policies, regulations or ethical and operational standards should shift corporate and government priorities to focus on the global advancement of humanity, rather than profits or nationalism. They urge that major organizations revamp their practices and make sure AI advances are aimed at human augmentation for all, regardless of economic class.

Evan Selinger, professor of philosophy at the Rochester Institute of Technology, commented, “In order for people, in general, to be better off as AI advances through 2030, a progressive political agenda – one rooted in the protection of civil liberties and human rights and also conscious of the dangers of widening social and economic inequalities – would have to play a stronger role in governance. In light of current events, it’s hard to be optimistic that such an agenda will have the resources necessary to keep pace with transformative uses of AI throughout ever-increasing aspects of society. To course-correct in time, it’s necessary for the general public to develop a deep appreciation of why leading ideologies concerning the market, prosperity and security are not in line with human flourishing.”

Nicholas Beale, leader of the strategy practice at Sciteb, an international strategy and search firm, commented, “All depends on how responsibly AI is applied. AI ‘done right’ will empower. But unless Western CEOs improve their ethics it won’t. I’m hoping for the best.”

Benjamin Shestakofsky, an assistant professor of sociology at the University of Pennsylvania specializing in digital technology’s impacts on work, said, “Policymakers should act to ensure that citizens have access to knowledge about the effects of AI systems that affect their life chances and a voice in algorithmic governance. The answer to this question will depend on choices made by citizens, workers, organizational leaders and legislators across a broad range of social domains. For example, algorithmic hiring systems can be programmed to prioritize efficient outcomes for organizations or fair outcomes for workers. The profits produced by technological advancement can be broadly shared or can be captured by the shareholders of a small number of high-tech firms.”

Charles Zheng, a researcher into machine learning and AI with the National Institute of Mental Health, wrote, “To ensure the best future, politicians must be informed of the benefits and risks of AI and pass laws to regulate the industry and to encourage open AI research. My hope is that AI algorithms advance significantly in their ability to understand natural language, and also in their ability to model humans and understand human values. My fear is that the benefits of AI are restricted to the rich and powerful without being accessible to the general public.”

Mary Chayko, author of “Superconnected: The Internet, Digital Media, and Techno-Social Life,” said, “We will see regulatory oversight of AI geared toward the protection of those who use it. Having said that, people will need to remain educated as to AI’s impacts on them and to mobilize as needed to limit the power of companies and governments to intrude on their spaces, lives and civil rights. It will take vigilance and hard work to accomplish this, but I feel strongly that we are up to the task.”

R “Ray” Wang, founder and principal analyst at Constellation Research, based in Silicon Valley, said, “We have not put the controls of AI in the hands of many. In fact, the experience in China has shown how this technology can be used to take away the freedoms and rights of the individual for the purposes of security, efficiency, expediency and the whims of the state. On the commercial side, we also do not have any controls in play as to ethical AI. Five elements should be included in the design: transparency, explainability, reversibility, coachability and human-led processes.”

John Willinsky, professor and director of the Public Knowledge Project at Stanford Graduate School of Education, said, “Uses of AI that reduce human autonomy and freedom will need to be carefully weighed against the gains in other qualities of human life (e.g., driverless cars that improve traffic and increase safety). By 2030, deliberations over such matters will be critical to the functioning of ‘human-machine/AI collaboration.’ My hope, however, is that these deliberations are not framed as collaborations between what is human and what is AI but will be seen as the human use of yet another technology, with the wisdom of such use open to ongoing human consideration and intervention intent on advancing that sense of what is most humane about us.”

A professor of media studies at a U.S. university commented, “Technology will be a material expression of social policy. If that social policy is enacted through a justice-oriented democratic process, then it has a better chance of producing justice-oriented outcomes. If it is enacted solely by venture-funded corporations with no obligation to the public interest, most people in 2030 will likely be worse off.”

Gene Crick, director of the Metropolitan Austin Interactive Network and longtime community telecommunications expert, wrote, “To predict AI will benefit ‘most’ people is more hopeful than certain. … AI can benefit lives at work and home – if competing agendas can be balanced. Key support for this important goal could be technology professionals’ acceptance and commitment regarding social and ethical responsibilities of our work.”

Anthony Picciano, a professor of education in the City University of New York’s Interactive Technology and Pedagogy program, responded, “I am concerned that profit motives will lead some companies and individuals to develop AI applications that will threaten, not necessarily improve, our way of life. In the next 10 years we will see evolutionary progress in the development of artificial intelligence. After 2030, we will likely see revolutionary developments that will have significant ramifications on many aspects of human endeavor. We will need to develop checks on artificial intelligence.”

Bill Woodcock, executive director at Packet Clearing House, the research organization behind global network development, commented, “In short-term, pragmatic ways, learning algorithms will save people time by automating tasks like navigation, package delivery and shopping for staples. But that tactical win comes at a strategic loss as long as the primary application of AI is to extract more money from people, because that puts it in opposition to our interests as a species, helping to enrich a few people at the expense of everyone else. In AI that exploits human psychological weaknesses to sell us things, we have for the first time created something that effectively preys on our own species. That’s a fundamentally bad idea and requires regulation just as surely as would self-replicating biological weapons.”

Ethem Alpaydın, a professor of computer engineering at Bogazici University in Istanbul, responded, “AI will favor the developed countries that actually develop these technologies. AI will help find cures for various diseases and overall improve the living conditions in various ways. For the developing countries, however, whose labor force is mostly unskilled and whose exports are largely low-tech, AI implies higher unemployment, lower income and more social unrest. The aim of AI in such countries should be to add skill to the labor force rather than supplant it. For example, automatic real-time translation systems (e.g., Google’s Babel fish) would allow people who don’t speak a foreign language to find work in the tourism industry.”

Joe Whittaker, a former professor of sciences and associate director of the NASA GESTAR program, now associate provost at Jackson State University, said, “Actions should be taken to make the internet universally available and accessible, and to provide training and know-how for all users.”

John Paschoud, councilor for the London borough of Lewisham, said, “It is possible that advances in AI and networked information will benefit ‘most’ people, but this is highly dependent upon how those benefits are shared. … If traditional capitalist models of ‘ownership of the means of production’ prevail, then the benefits of automated production will be retained by the few who own, not the many who work. Similarly, models of housing, health care, etc., can be equitably distributed and can all be enhanced by technology.”

David Schlangen, a professor of applied computational linguistics at Bielefeld University in Germany, responded, “If the right regulations are put in place and ad-based revenue models can be controlled in such a way that they cannot be exploited by political interest groups, the potential for AI-based information search and decision support is enormous. That’s a big if, but I prefer to remain optimistic.”

Kate Carruthers, a chief data and analytics officer based in Australia, predicted, “Humans will increasingly interact with AI on a constant basis, and it will become hard to know where the boundaries are between the two. Just as kids now see their mobile phones as an extension of themselves, so too will human/AI integration be. I fear that the cause of democracy and freedom will be lost by 2030, so it might be a darker future. To avoid that, one thing we need to do is ensure the development of ethical standards for AI and ensure that we deal with algorithmic bias. We need to build ethics into our development processes. Further, I assume that tracking and monitoring of people will be an accepted part of life and that there will be stronger regulation on privacy and data security. Every facet of life will be circumscribed by AI, and it will be part of the fabric of our lives.”

David Zubrow, associate director of empirical research at Carnegie Mellon University’s Software Engineering Institute, said, “How the advances are used demands wisdom, leadership and social norms and values that respect and focus on making the world better for all; education and health care will reach remote and underserved areas, for instance. The fear is that control will be consolidated in the hands of a few who seek to exploit people, nature and technology for their own gain. I am hopeful that this will not happen.”

Francisco S. Melo, an associate professor of computer science at Instituto Superior Técnico in Lisbon, Portugal, responded, “I expect that AI technology will help render several services (in health, assisted living, etc.) more efficient and humane and, by making access to information more broadly available, help mitigate inequalities in society. However, for such positive visions to become a reality, both AI researchers and the general population should be aware of the implications that this technology can have, particularly in how information is used and the ways in which it can be manipulated. In particular, AI researchers should strive for transparency in their work, in order to demystify AI and minimize the possibility of misuse; the general public, on the other hand, should strive to be educated in the responsible and informed use of technology.”

Kyung Sin Park, internet law expert and co-founder of Open Net Korea, responded, “AI consists of software and training data. Software is already being made available on an open source basis. What will decide AI’s contribution to humanity is whether the data used to train AI is equitably distributed. Data-protection laws and the open data movement will hopefully do the job of making more data available equally to all people. I imagine a future where people can access AI-driven diagnosis of symptoms, which will drastically reduce health care costs for all.”

Doug Schepers, chief technologist at Fizz Studio, said, “AI/ML, in applications and in autonomous devices and vehicles, will make some jobs obsolete, and the resulting unemployment will cause some economic instability that affects society as a whole, but most individuals will be better off. The social impact of software and networked systems will grow increasingly complex, so ameliorating that software problem with software agents may be the only way to decrease harm to human lives. That will work only if we can focus the goal of software on benefiting individuals and groups rather than companies or industries.”

Erik Huesca, president of the Knowledge and Digital Culture Foundation, based in Mexico City, said, “Development of specialized AI is concentrated in a few places; it is a consequence of the capital investment that seeks to replace expensive professionals. Universities have to rethink what type of graduates to prepare, especially in the areas of health, law and engineering, where the greatest impact is expected, since the labor displacement of doctors, engineers and lawyers is already a reality with even the early systems developed so far.”

Stephen Abram, principal at Lighthouse Consulting Inc., wrote, “I am concerned that individual agency will be lost in AI; appropriate safeguards should be in place around data collection, as specified by the individual. I worry that context can be misconstrued by government agencies like ICE, the IRS, police, etc. A major conversation is needed throughout the period in which AI applications are developed, and it needs to be evergreen as innovation and creativity spark new developments. Indeed, this should not be part of a political process, but an academic, independent process guided by principles rather than by economics and commercial entities.”

David Klann, consultant and software developer at Broadcast Tool & Die, responded, “AI and related technologies will continue to enhance people’s lives. I tend toward optimism; I instinctively believe there are enough activists who care about the ethics of AI that the technology will be put to use solving problems that humans cannot solve on their own. Take mapping, for instance. I recently learned about congestion problems caused by directions being optimized for individuals. People are now tweaking the algorithms to account for multiple people taking the ‘most efficient route’ that had become congested and was causing neighborhood disturbance due to the increased traffic. I believe people will construct AI algorithms to learn of and to ‘think ahead’ about such unintended consequences and to avoid them before they become problems. Of course, my fear is that money interests will continue to wield an overwhelming influence over AI and machine learning (ML). That influence can be mitigated through fully disclosed techniques, transparency and third-party oversight. These third parties may be government institutions or non-government organizations with the strength to ‘enforce’ ethical use of the technologies. Open source code and open ML training data will contribute significantly to this mitigation.”
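Klann’s mapping example points to a concrete algorithmic pattern: when a router scores each trip independently on free-flow travel time, it sends every driver down the same ‘best’ street, which then congests. A minimal sketch of the ‘think ahead’ correction he describes, in which the router penalizes each road by the load it has itself already assigned, might look like the following (the toy network, capacities, penalty factor and helper names are hypothetical, chosen only to illustrate the idea):

```python
# Hypothetical sketch of congestion-aware routing: instead of scoring
# each road by its free-flow travel time alone, the router penalizes
# roads by the traffic it has already assigned to them, so later trips
# spread onto alternative routes.

import heapq
from collections import defaultdict

# Toy road network: node -> list of (neighbor, base_minutes, capacity).
GRAPH = {
    "A": [("B", 4, 10), ("C", 6, 30)],
    "B": [("D", 4, 10)],
    "C": [("D", 6, 30)],
    "D": [],
}

assigned = defaultdict(int)  # trips this router has already put on each road


def edge_cost(u, v, base, capacity):
    """Congestion-adjusted travel time: grows with the load already assigned."""
    load = assigned[(u, v)] / capacity
    return base * (1.0 + 2.0 * load)  # simple linear congestion penalty


def best_route(start, goal):
    """Dijkstra's algorithm over congestion-adjusted edge costs."""
    frontier = [(0.0, start, [start])]
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for nxt, base, cap in GRAPH[node]:
            heapq.heappush(
                frontier,
                (cost + edge_cost(node, nxt, base, cap), nxt, path + [nxt]),
            )
    return []


# Route 25 trips from A to D, recording each one so that later trips
# see the congestion earlier trips will cause.
for _ in range(25):
    path = best_route("A", "D")
    for u, v in zip(path, path[1:]):
        assigned[(u, v)] += 1

print(dict(assigned))  # trips spill from the short A-B-D route onto A-C-D
```

On this toy network, the first few trips all take the short A-B-D route; once its congestion-adjusted cost exceeds that of A-C-D, later trips spill onto the longer road. That load-spreading is the behavior Klann describes people retrofitting into real routing algorithms.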

Andrian Kreye, a journalist and documentary filmmaker based in Germany, said, “If humanity is willing to learn from its mistakes with low-level AIs like social media algorithms, there might be a chance for AI to become an engine for equality and progress. But since most digital development is driven by venture capital, experience suggests that automation and abuse will be the norm.”

We have to make data unbiased before putting it into AI, but it’s not very easy.Mai Sugimoto

Mai Sugimoto, an associate professor of sociology at Kansai University in Japan, responded, “AI could amplify one’s bias and prejudice. We have to make data unbiased before putting it into AI, but it’s not very easy.”
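Sugimoto’s caution maps onto a well-studied preprocessing step. One standard technique, the “reweighing” scheme of Kamiran and Calders, assigns each training example a weight so that a sensitive attribute and the outcome label become statistically independent before any model is fit. The sketch below uses toy records and hypothetical field names; its closing comment notes why this alone does not make data unbiased:

```python
# A minimal sketch of pre-training reweighting, assuming a binary label
# and a single sensitive attribute. Records and field names are toy
# examples. Each example receives the weight
#     P(group) * P(label) / P(group, label)
# so that group and label are independent under the weighted data.

from collections import Counter

data = [
    {"group": "a", "label": 1}, {"group": "a", "label": 1},
    {"group": "a", "label": 1}, {"group": "a", "label": 0},
    {"group": "b", "label": 1}, {"group": "b", "label": 0},
    {"group": "b", "label": 0}, {"group": "b", "label": 0},
]

n = len(data)
group_counts = Counter(d["group"] for d in data)
label_counts = Counter(d["label"] for d in data)
joint_counts = Counter((d["group"], d["label"]) for d in data)

for d in data:
    g, y = d["group"], d["label"]
    # Ratio of the frequency expected under independence to the
    # frequency actually observed for this (group, label) pair.
    d["weight"] = (group_counts[g] * label_counts[y]) / (n * joint_counts[(g, y)])
    print(d)

# Under these weights, every (group, label) cell carries equal total
# weight, so a downstream learner no longer sees label rates that
# differ by group. What the weights cannot fix: proxy features that
# encode the group indirectly, and bias in how the labels were
# collected. That residue is why making data unbiased "is not very easy."
```

Resampling to the same target distribution is a common alternative when the downstream learner cannot accept per-example weights; either way, the correction covers only the one imbalance that was measured, which is the hard part Sugimoto points to.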

An anonymous respondent wrote, “There are clearly advances associated with AI, but the current global political climate gives no indication that technological advancement in any area will improve most lives in the future. We also need to think ecologically in terms of the interrelationship between technology and other social-change events. For example, medical technology has increased lifespans, but the current opioid crisis has taken many lives in the U.S. among certain demographics.”

A founder and president said, “The future of AI is more about the policies we choose and the projects we choose to fund. I think there will be large corporate interests in AI that serve nothing but profits and corporations’ interests. This is the force for the ‘bad.’ However, I also believe that most technologists want to do good, and that most people want to head in a direction for the common good. In the end, I think this force will win out.”

A senior strategist in regulatory systems and economics for a top global telecommunications firm wrote, “If we do not strive to improve society, making the weakest better off, the whole system may collapse. So, AI had better serve to make life easier for everyone.”
