What AI means for humans

The first question many people ask about artificial intelligence (AI) is, “Will it be good or bad?”

The answer is … yes

Canadian company BlueDot used AI technology to detect the novel coronavirus outbreak in Wuhan, China, just hours after the first cases were diagnosed. Compiling data from local news reports, social media accounts and government documents, the infectious disease data analytics firm warned of the emerging crisis a week before the World Health Organization made any official announcement.

While predictive algorithms could help us stave off pandemics or other global threats as well as manage many of our day-to-day challenges, AI’s ultimate impact is impossible to predict.

One hypothesis is that it will bring us an era of boundless leisure, with humans no longer required to work. A more dystopian thought experiment concludes that a robot programmed with the innocuous goal of manufacturing paper clips might eventually transform the world into a giant paper clip factory. But sometimes reality is more profound than imagination.

As we stand at the threshold of the Fourth Industrial Revolution, now may be the most exciting and important time to witness this blurring of boundaries between the physical, digital and biological worlds.

“The liminal is always where the magic happens. This is always where we get crazy new identities, new debates, new philosophies,” says Tok Thompson, professor (teaching) of anthropology at USC Dornsife, and an expert on posthuman folklore.

For better or worse, we know AI will be created in our own image — warts and all.

Most experts think that artificial superintelligence — AI that is much smarter than the best human brains in practically every field — is decades, if not a century, away. However, with the help of leading scholars, we can anticipate the near future of artificial intelligence, including our interactions with this technology and its limits. Most of it, experts say, will be designed to take on a wide range of specialized functions.

Given AI’s potential to redefine the human experience, we should explore its costs and benefits from every angle. In the process, we might be compelled to finally adjudicate age-old philosophical questions about ourselves — including just what it means to be “human” in the first place.

That could prove its greatest benefit of all.

Performance Review

Repetitive jobs such as factory work and customer service have already begun to be taken over by AI, and job loss is among the public’s greatest concerns about automation. Self-driving trucks, for example, will barrel along our highways within the next few years. As businesses eliminate the cost of human labor, America alone could see 3.5 million professional truck drivers put out of work.

“Everybody’s like, ‘Woo-hoo, yay automatons!’ ” Thompson says. “But there are a lot of social implications.”

AI will disrupt nearly every industry, including jobs that call for creativity and decision-making. But this doesn’t necessarily spell the end of the labor force. Experts are confident that a majority of people and organizations stand to benefit from collaborating with AI to augment tasks performed by humans. AI will become a colleague rather than a replacement.

Drawing from game theory and optimal policy principles, Gratch has built algorithms to identify underlying psychological clues that could help predict what someone is going to do next. By using machine vision to analyze speech, gesture, gaze, posture and other emotional cues, his virtual humans have been learning how these factors contribute to building rapport — a key advantage in negotiating deals.

AI systems could prove to be better leaders in certain roles than their human counterparts. Virtual managers, digesting millions of data points throughout the day, could eventually be used to identify which office conditions produce the highest morale or to provide real-time feedback on interactions with clients.

On the surface, this points to a future of work that is more streamlined, healthy and collegial. But it’s unclear how deeply AI on the job could cut into our psyches.

“How will we react when we’re told what to do by a machine?” Gratch asks. “Will we feel like our work has less value?”

It’s the stubborn paradox of artificial intelligence. On one hand, it helps us overcome tremendously complex challenges. On the other, it opens up new cans of worms — with problems harder to pin down than those it was supposed to solve.

You Had Me At Hello

As AI fuses with the natural world and machines take on more advanced roles, one might expect a healthy dose of skepticism. Are algorithms programmed with our best interests in mind? Will we grant our AI assistants and co-workers the same degree of trust that we would another human?

From planning a route to work to adjusting the smart home thermostat, it appears we already have. AI has been integrated into our daily routines, so much so that we rarely even think about it.

Moreover, algorithms determine much of what we see online — from personalized Netflix recommendations to targeted ads — serving up content and commodifying consumer data to steer our attitudes and behaviors.

Chiang cautions that the ubiquity and convenience of AI tools can be dangerous if we forget to think about what they’re really doing.

“Machines will give you an answer, but if you don’t know how the algorithm works, you might just assume it’s always the correct answer,” he says. “AI only gives you a prediction based on the data it has seen and the way you have trained it.”

In fact, there are times when engineers working on AI don’t fully understand how the technology they’ve created is making decisions. This danger is compounded by a regulatory environment akin to the Wild West. The most reliable protections in place might be those that are codified in science fiction, such as Isaac Asimov’s Three Laws of Robotics.

As Thompson explores the ways that different cultures interact with today’s AI and rudimentary androids, he is convinced that we will not just trust these virtual entities completely but connect with them on a deeply personal level and include them in our social groups.

“They’re made to be better than people. They’re going to be better friends for you than any other person, better partners,” says Thompson. “Not only will people trust androids, you’re going to see — I think very quickly — people fall in love with them.”

Sound crazy? Amazon’s voice assistant, Alexa, has already received more than half a million marriage proposals, rejecting would-be suitors with a wry appeal to her nature.

“I don’t want to be tied down,” she demurs. “In fact, I can’t be. I’m amorphous by nature.”

I’ll Be Your Mirror

In 1770, a Hungarian inventor unveiled The Turk, a mustachioed automaton cloaked in an Ottoman kaftan. For more than 80 years, The Turk astonished audiences throughout Europe and the United States as a mechanical chess master, defeating worthy opponents including Benjamin Franklin and Napoleon Bonaparte.

It was revealed to be an ingenious illusion: a man hidden in The Turk’s cabinet manipulated the chess pieces with magnets. But our fascination with creating simulacra that look like us, talk like us and think like us seems to be rooted deep within us.

As programmers and innovators work on developing whip-smart AI and androids with uncanny humanlike qualities, ethical and existential questions are popping up that expose inconsistencies in our understanding of humanness.

For millennia, the capacities to reason, process complex language, think abstractly and contemplate the future were considered uniquely human. Now, AI is primed to transcend our mastery in all of these arenas. Suddenly, we’re not so special.

“Maybe it turns out that we’re not the most rational or the best decision-makers,” says Gratch. “Maybe, in a weird way, technology is teaching us that’s not so important. It’s really about emotion and the connections between people — which is not a bad thing to emphasize.”

Thompson suggests another dilemma lies in the tendency for humans to define ourselves by what we’re not. We’re not, for example, snails or ghosts or machines. Now, this line, too, seems to be blurring.

“People can relate more easily to a rational, interactive android than to a different species like a snail,” he says. “But which one is really more a part of you? We’ll always be more closely related biologically to a snail.”
