Survey Responses Include Cautions for AI Designs

The Pew Research Center report asked nearly 1,000 scientists and AI-related professionals whether human lives will improve in the next decade. Their answers were mixed but mostly positive.

Researchers surveyed 979 technology pioneers, innovators, developers, business and policy leaders, researchers and activists through lengthy interviews in the summer of 2018. A majority (63%) said they are hopeful that most people will be better off by 2030; a sizable minority (37%) said people will not be better off.

Most of the experts predicted networked artificial intelligence will improve human “effectiveness,” but will also threaten autonomy. They believe:

  • Computers might match or even exceed human intelligence and capabilities on tasks such as complex decision-making, reasoning and learning, advanced analytics and pattern recognition, visual acuity, speech recognition and language translation.
  • “Smart” systems in communities, vehicles, buildings, utilities, farms and business processes will save time, money and lives and offer opportunities for people to enjoy a more customized future.

The respondents broke down their concerns this way:
Human agency
Individuals are already experiencing a loss of control over their lives that may accelerate.

Decision-making on key aspects of digital life is automatically ceded to code-driven, “black box” tools. People lack input and don’t learn the context of how tools work. They give up independence, privacy and power over choice; they have no control over processes. This effect will deepen as automated systems become more prevalent and complex.

Data abuse
Corporations and governments may abuse data and surveillance in complex systems designed for profit or for exercising power.

Most AI tools are and will be in the hands of for-profit companies or governments seeking power. Values and ethics are often not programmed into the digital systems that make people’s decisions for them. These systems are globally networked and not easy to regulate or rein in.

Job loss
The AI takeover of jobs will widen economic divides, leading to social upheaval.

The efficiencies and other economic advantages of code-based machine intelligence will continue to disrupt all aspects of human work. While some expect new jobs will emerge, others worry about massive job losses, widening economic divides and social upheavals, including populist uprisings.

Dependence lock-in
Reliance on AI systems will reduce cognitive, social and survival skills.

Many see AI as boosting human capacities, but some predict that deepening dependence on machine-driven networks will erode humans’ ability to think for themselves, take action independently of automated systems and interact effectively with others.

Mayhem
Autonomous weapons, cybercrime and weaponized information could go haywire.

Some predict further erosion of traditional sociopolitical structures and the possibility of great loss of lives due to accelerated growth of autonomous military applications and the use of weaponized information, lies and propaganda. Some also fear cybercriminals’ reach into economic systems.

The report’s suggested solutions are as follows:

Global good is No. 1
Improve human collaboration across borders and stakeholder groups.

Digital cooperation to serve humanity’s best interests is the top priority. People around the world need to create common understandings and agreements, and join forces to facilitate the innovation of widely accepted approaches for tackling wicked problems and maintaining control over complex human-digital networks.

Values-based system
Developers need policies to focus AI on human needs and the common good.

Build inclusive, decentralized intelligent digital networks “imbued with empathy” that help humans aggressively ensure that technology meets social and ethical responsibilities. Some new level of regulatory and certification process will be necessary.

Prioritize people
Alter economic and political systems to better help humans “race with the robots.”

Reorganize economic and political systems toward the goal of expanding humans’ capacities and capabilities, to heighten human/AI collaboration and stanch trends that would compromise human relevance in the face of programmed intelligence.

Despite these concerns about AI’s downsides, most respondents remained hopeful that people will be better off in 2030.

Some of the technology experts’ comments show their conflicted thinking. Erik Brynjolfsson of MIT said, “We need to work aggressively to make sure technology matches our values.”

Barry Chudakov, founder and principal of Sertain Research, said, “Our societal structures are failing to keep pace with the rate of change.” He cited Joi Ito’s phrase “extended intelligence” in regard to his belief that developers will need to value and revalue virtually every area of human behavior and interaction. “If we are fortunate, we will follow the 23 Asilomar AI Principles outlined by the Future of Life Institute and will work toward ‘not undirected intelligence but beneficial intelligence.’”

Marina Gorbis, executive director of the Institute for the Future, said, “Without significant changes in our political economy and data governance regimes, [AI] is likely to create greater economic inequalities, more surveillance and more programmed and non-human-centric interactions. Every time we program our environments, we end up programming ourselves and our interactions. Humans have to become more standardized, removing serendipity and ambiguity from our interactions. And this ambiguity and complexity is what is the essence of being human.”

Her book “The Nature of the Future,” however, shows the positive side: giving individuals the power to connect and share resources to solve problems by reinventing business, education, medicine, banking, government and scientific research.


For an in-depth discussion of how to mitigate the technology’s threats, see Five Standards for Responsible AI Use.