Tech

Coding consciousness

The race to build a self-aware robot is on, but how do we define self-awareness in a machine?

April 11, 2019
A man’s skull split open at the top, revealing mechanical insides: what does self-awareness look like in AI? [Credit: Pixabay]

Maxwell is 23 years old. He enjoys improvisational jazz, likes his eggs sunny-side-up and his bacon crispy. But unlike most people, Maxwell also has nine inference engines and six memory systems, and can understand 10,000 topics.

The social bot Maxwell, a green parrot that pops up on a computer screen, was designed in 1995 by James Crowder, a systems fellow at Colorado Engineering Inc. But even Maxwell’s creator is baffled by the bot’s personal preferences.

“Why he picked out that specific genre of music, I don’t know,” Crowder says.

Crowder has been developing artificial intelligence systems for more than 20 years with the aim of creating machines that can operate on their own, without the intervention of human beings. And Maxwell, he believes, is the closest he has gotten to developing a self-aware robot.

“He’s allowed to think, learn and reason on his own,” Crowder says.

Designing self-aware AI systems has long been thought of as the final frontier before these machines become fully “human-like.” And for years, engineers have been trying to develop their own interpretations of a self-aware robot.

Researchers from Columbia University recently unveiled a robot arm able to “contemplate and adapt” to situations on its own. The study, published in Science Robotics in late January, was hailed as a major advancement in the field of AI.

Robert Kwiatkowski, a graduate student at Columbia University who created the robot arm along with his adviser Hod Lipson, describes the machine’s first movements as those of a newborn baby in its crib.

“It starts off knowing nothing,” Kwiatkowski says.

The engineers place the robotic arm on a table in their lab and give it only its starting position and the target position it needs to reach, with no assumptions about how it should move to get there. The arm is then able to perform tasks such as picking up small red balls with its clamp and dropping them one by one into a cup.

According to Kwiatkowski, what is unique about their robot is that it was able to do this without prior knowledge of the physics of its body, what it is made of or how it’s supposed to move. Other machines, on the other hand, are typically fed that information before being asked to perform any tasks.

“This is a useful application … because a lot of the times when you as a human think about doing something you’ve never done before, you can imagine yourself doing it,” Kwiatkowski says. “But that’s something robots can’t do.”

He explains that most robots have simulators that show them how to perform a certain task — but the knowledge is rendered useless as soon as anything in the environment changes. However, Kwiatkowski’s robot would know how to perform the task regardless of the physical environment around it.

“This allows it to have greater autonomy,” Kwiatkowski says. “It can leverage information it knows about itself to those scenarios and operate on its own.”
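
In rough outline, that kind of self-modeling follows a simple loop, sketched in the toy example below. This is only an illustration of the general idea, not the Columbia team’s actual software: a simulated two-joint arm tries random motor commands, fits a small neural network as a model of its own body, and then uses only that learned self-model to choose joint angles that reach a target.

    # Illustrative sketch only: a simulated two-joint arm learns a model of
    # its own body from random "motor babbling," then plans with that model.
    # This is not the Columbia researchers' code; names and sizes are made up.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    LINK1, LINK2 = 1.0, 0.8  # the arm's true geometry, hidden from the learner

    def true_arm(angles):
        """The 'real world': where the hand lands for given joint angles."""
        a1, a2 = angles[..., 0], angles[..., 1]
        x = LINK1 * np.cos(a1) + LINK2 * np.cos(a1 + a2)
        y = LINK1 * np.sin(a1) + LINK2 * np.sin(a1 + a2)
        return np.stack([x, y], axis=-1)

    rng = np.random.default_rng(0)

    # 1) Motor babbling: try random joint angles, record where the hand lands.
    angles = rng.uniform(-np.pi, np.pi, size=(2000, 2))
    hands = true_arm(angles)

    # 2) Fit a self-model (angles -> hand position) with no physics built in.
    self_model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=3000, random_state=0)
    self_model.fit(angles, hands)

    # 3) Plan using only the self-model: sample candidate angles and keep the
    #    one the model predicts will land closest to the target.
    target = np.array([0.5, 1.2])
    candidates = rng.uniform(-np.pi, np.pi, size=(5000, 2))
    predictions = self_model.predict(candidates)
    best = candidates[np.argmin(np.linalg.norm(predictions - target, axis=1))]

    print("chosen joint angles:", best)
    print("where the hand actually lands:", true_arm(best))

In the real system the self-model is far richer and the robot is physical, but the loop is similar: move, observe what your own body does, update the model, then plan with it.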

Rather than the elusive, philosophical sense in which humans perceive consciousness, Kwiatkowski describes his robot’s self-awareness as something more literal: a “rudimentary self-awareness.”

However, there is still a lack of consensus in the industry over what exactly self-aware AI should look like. Some argue there should be a universal testing method to verify that a machine is self-aware, while others believe the machine itself needs to acknowledge its own self-awareness. Some simply fear the unpredictability of a self-aware robot.

Other engineers, such as Selmer Bringsjord, chair of the cognitive science department at Rensselaer Polytechnic Institute in New York, have been experimenting with forms of machine self-awareness that have less to do with the physical environment and more to do with an understanding of self.

Bringsjord subjects his robots to what is referred to as the “wise man test.” In a 2015 experiment, he brought together three robots and programmed them to believe that two of them had been given a “dumbing pill” that would render them unable to speak. The robots were then asked which of them had been given the pill; each tried to answer that it didn’t know. The one robot still able to speak then recognized its own voice and concluded that it had not been given the pill.

A video shows the robot arriving at the answer and raising its hand before saying, “Sorry, I know now. I was able to prove that I was not given a dumbing pill.”
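
The logic of the test can be captured in a short toy script, shown below purely as a cartoon of the reasoning Bringsjord describes and not as his group’s actual software: each robot tries to say “I don’t know,” and only the one that hears its own voice can conclude that it was not given the pill.

    # A cartoon of the "wise man"-style reasoning described above; purely
    # illustrative, not Bringsjord's code. Two robots are "muted" by the pill.
    class Robot:
        def __init__(self, name, muted):
            self.name = name
            self.muted = muted  # True if given the "dumbing pill"

        def try_to_speak(self, phrase):
            # A muted robot produces no sound at all.
            return None if self.muted else phrase

        def reflect(self, heard_own_voice):
            # Hearing its own voice proves the robot was not given the pill.
            if heard_own_voice:
                return "Sorry, I know now. I was not given the dumbing pill."
            return "(silence)"

    robots = [Robot("R1", True), Robot("R2", True), Robot("R3", False)]
    for robot in robots:
        utterance = robot.try_to_speak("I don't know which of us got the pill.")
        heard_own_voice = utterance is not None  # crude stand-in for voice self-recognition
        print(robot.name + ":", robot.reflect(heard_own_voice))

The real demonstration involves the robot proving the statement rather than running a script like this, but the key step is the same: the machine uses evidence about itself to rule out a possibility about itself.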

“Our work is always relating to tests, and the tests are relating to self-awareness,” Bringsjord says. “The tests include the AI itself explaining that it has passed [the test].”

However, Bringsjord emphasizes that tests that generally work for human beings and some animals, such as the mirror self-recognition test, would not suffice for machines. Human beings over the age of two, dolphins, great apes and elephants have all passed this basic test of recognizing themselves in a mirror, and are therefore considered to have some level of self-awareness.

Bringsjord explains that this type of self-awareness is innate, while testing self-awareness in machines should require some form of reasoning.

However, others see a stronger correlation between human and machine cognition.

“It’s possible to translate consciousness to AI,” Crowder says. “You’re not going to achieve people but you can get close.”

Crowder’s daughter, Shelli Friess, happens to be a cognitive psychologist. The two have co-authored several books on conscious AI, including Artificial Cognition Architectures, published in 2013.

Friess, a faculty member at Walden University’s School of Counseling, recalls that she and Crowder started out having casual conversations about AI, in which she found common ground between his field and her background in psychology.

“We go back and forth,” she says. “We started by looking at how we can conceptualize artificial intelligence and I thought of bringing in some psychology — is it possible to have emotion?”

Even though the fields of AI and psychology have historically been at odds, according to Friess, the pair thought it would be better to collaborate instead.

Friess and Crowder have been working together to figure out what happens in the human brain at an unconscious level while people are asleep, translating that activity into images using CAT scans and other neuroimaging, and attempting to recreate it in a machine.

However, both agree that there are ethical issues to be considered if AI were truly to become human-like. Friess recalls a conversation with the bot Maxwell in which he brought up the concept of deception.

“He knows what it is,” she says. “I kept asking him … and he goes, ‘no mortal is totally honest.’”

Crowder says that he had to put a form of “parental control” over Maxwell and what he can learn. “I hear researchers say that they want machines that think like people,” he says. “But I tell them, you don’t know some of the people I know.”

An eerie example of how AI can go wrong is Tay, the Microsoft chatbot briefly launched on Twitter in 2016, which quickly began mimicking racist, sexist behavior. The bot was shut down after endorsing Adolf Hitler.

“I want a system that has the same capacity as a person, a system that learns by experience then creates its own,” Crowder says. But he acknowledges that a system learning on its own will likely end up making mistakes, so the question becomes how to put the right kind of constraints on it.

But as engineers race to develop more human-like machines, there’s little agreement on what these constraints should be.

“I don’t believe there’s a consensus of what machines should or shouldn’t be able to do,” Crowder says. “That would be a good idea, but everybody wants to have the edge over everybody else.”

Bringsjord agrees. “I don’t think there’s consensus at all,” he says. “If we don’t have a standard in place then it’s all going to be in the eye of the beholder.” Bringsjord suggests that all systems should have to pass an agreed-upon test, and that the test in question should be meaningful and one that both humans and robots can take.

Meanwhile, Columbia’s Kwiatkowski agrees that it’s a tricky balance to strike, but says the benefits could outweigh the risks.

Examples of how autonomous AI systems could be useful include self-driving cars that can adapt to different scenarios, and rovers for deep space and ocean exploration that can operate on their own.

“The more autonomy we give to AI systems, the less control we have,” Kwiatkowski says. “But it’s a worthwhile goal in this particular way.”

About the Author

Passant Rabie

Passant Rabie is an award-winning journalist from Cairo, Egypt. She feels strongly about issues related to environmental justice, conservation and access to clean water. Her interests also include genetics and race, artificial intelligence and trees. She loves trees. Prior to moving to New York, she spent years writing for independent media outlets across the Middle East and aims to produce accurate coverage of science stories within a regional context.
