
Can we use AI to predict social unrest?

A team of researchers has created an artificial intelligence (AI) system that mimics human religiosity, laying the groundwork for psychologically accurate AI that can predict human behavior.

February 26, 2019
Researchers are trying to study human behavior through AI (Credit: Pixabay)

In 2016, a Microsoft chatbot went on racist, misogynistic Twitter rants. The following year, Facebook had to shut down two chatbots that had started communicating with each other in a new, unfamiliar language. And that same year, viewers nervously watched a stick figure from Google’s DeepMind teach itself how to walk, fumbling its way through a digital course onscreen and throwing its arms and legs up in the air to leap to the finish line.

As intriguing as they can be, some of artificial intelligence’s efforts at mimicking human behavior leave us feeling slightly sketched out.

Now, a group of researchers from the University of Oxford, Boston University and the University of Agder in Norway is working on AI that goes a little deeper than an animated paper clip. They claim to have developed a new AI model that predicts human behavior — specifically, a scenario of social unrest.

The team wanted to understand the social dynamics behind religiously motivated acts of violence. Their AI system was designed like the famous computer game The Sims, in which virtual characters with their own unique moods and desires participate in a simulation of human interactions. However, unlike The Sims, the virtual society has no visual element. Instead, its members interact with each other through a messaging system similar to our social media platforms.

Each of the members, or simulated agents, was developed to have a cognitive architecture as similar as possible to a human’s, says LeRon Shults, director of the Center for Modeling Social Systems in Kristiansand, Norway, and one of the co-authors of the study published in the Journal of Artificial Societies and Social Simulation last October.

As in The Sims, their method involved assigning individual traits to each character and then programming the characters to interact with one another. Each character was assigned to either a religious majority or a religious minority, as well as a corresponding set of beliefs and rituals.
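To make that setup concrete, here is a minimal, purely illustrative sketch of this kind of agent-based simulation in Python. It is not the researchers’ published model: the anxiety variable, the message effects and the numeric thresholds are all invented for illustration, and the team’s actual agents have a far richer cognitive architecture.

```python
import random

# Purely illustrative sketch of an agent-based social simulation.
# Agents belong to a religious majority or minority and exchange messages;
# contact with the outgroup raises an agent's anxiety, while ingroup contact
# (a stand-in for shared beliefs and rituals) lowers it. All numbers are
# invented for illustration, not taken from the study.

class Agent:
    def __init__(self, agent_id: int, group: str):
        self.agent_id = agent_id
        self.group = group                        # "majority" or "minority"
        self.anxiety = random.uniform(0.0, 0.3)   # low baseline anxiety

    def receive(self, sender: "Agent") -> None:
        # Hypothetical interaction rule: outgroup messages unsettle,
        # ingroup messages reassure.
        if sender.group != self.group:
            self.anxiety = min(1.0, self.anxiety + 0.05)
        else:
            self.anxiety = max(0.0, self.anxiety - 0.02)

def simulate(n_agents=100, minority_share=0.2, steps=400, threshold=0.65):
    """Return the first step at which mean anxiety crosses the 'unrest'
    threshold, or None if the virtual society stays calm."""
    agents = [Agent(i, "minority" if random.random() < minority_share else "majority")
              for i in range(n_agents)]
    for step in range(steps):
        for sender in agents:
            receiver = random.choice(agents)
            if receiver is not sender:            # no messages to oneself
                receiver.receive(sender)
        mean_anxiety = sum(a.anxiety for a in agents) / n_agents
        if mean_anxiety > threshold:              # crude proxy for unrest
            return step
    return None

if __name__ == "__main__":
    outbreak = simulate()
    print(f"Unrest breaks out at step {outbreak}" if outbreak is not None
          else "No unrest within the simulated period")
```

Even a toy model like this shows the basic logic: group composition and interaction rules go in, and an aggregate outcome — here, a crude unrest threshold — comes out.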

Typically, AI systems evolve using machine learning, in which a computer system is fed data that it then interacts with on its own, says Justin Lane, a researcher at Oxford’s Institute of Cognitive and Evolutionary Anthropology and one of Shults’ co-authors. But in this case, the team designed algorithms that process information the same way the human brain does. The point was to create agents that think like humans.

One example of the distinction is a response rooted in our evolved psychology rather than in learning, such as the automatic, negative reaction we have to seeing someone get sick in public.

“Most machine learning algorithms aren’t very realistic in the emotional, in-group human rationality, but all of those variables are what are relevant in social science,” says Shults.

These more cognitively realistic models should open up a wide range of real-world applications, he adds.

The team is also working on models of other relevant contemporary human scenarios, such as immigration in European cities, political polarization and climate change.

“What this new approach does is provide more realistic artificial societies that can more directly inform policy,” says Shults.

Eventually, the method could drill down to individual behavior. Researchers might single out one agent from a virtual society of their creation, an ethnic minority woman, for example, and ask her specific questions.

Because this type of artificial intelligence is still in its early phases of development, there are many possible ways it could be used.

“Social simulation is a relatively young field… but the field has grown recently,” says Flaminio Squazzoni, director of a behavior research lab at the University of Milan in Italy. It can be applied in a variety of fields, he says, the most intriguing one being the study of collective opinion.

However, many researchers debate how accurately these virtual models can predict social scenarios.

“Prediction is key for scientific progress but it’s quite problematic when you have humans involved because individuals [vary] from a rational sense,” says Squazzoni.

He still thinks social simulation has potential in policy making, since it essentially allows stakeholders to test the effects of certain policies without running ethically questionable social experiments on actual human beings.

Lane’s work has drawn interest from decision makers, like lawmakers, who want to test the impact of specific policies on people and their reactions. But some stigma still surrounds this type of technology. For instance, technology entrepreneur Elon Musk tweeted in 2014 that AI is “potentially more dangerous than nukes.”

People’s fears of AI systems are in large part caused by media miscommunication, says Cristobal Valenzuela, a researcher at New York University’s Interactive Telecommunications Program and creator of Runway, a company that builds tools that make AI systems more accessible to the general public.

Referring to the 2016 incident of the chatbot gone rogue, Valenzuela says the media could have framed it as a computer system that attempted to learn something and failed, instead of describing it as some sort of humanoid. “It’s just a math function that wasn’t optimized,” he adds.

In an effort to improve the public perception of AI, Shults and his colleagues have made their code public and open source, which means that other developers can see and modify it. “We’re really explicit about our assumptions and the purpose of our model,” he says, which is to reduce conflict in future social scenarios.

The team is also working on a user interface similar to The Sims, but their models would be “empirically validated, far more complex and oriented towards policy,” Shults says.

So, not quite like The Sims. But perhaps more useful.

About the Author

Passant Rabie

Passant Rabie is an award-winning journalist from Cairo, Egypt. She feels strongly about issues related to environmental justice, conservation and access to clean water. Her interests also include genetics and race, artificial intelligence and trees. She loves trees. Prior to moving to New York, she spent years writing for independent media outlets across the Middle East and aims to produce accurate coverage of science stories within a regional context.
