
From AI to AGI

What is artificial general intelligence, and how will we know when we’ve achieved it? Understanding the debate over engineered superintelligence

April 16, 2026
Current generative AI systems are powerful, but they lack the flexible, human-like reasoning that defines AGI. [Credit: Pixel-Shot | AdobeStock]

It’s only a matter of time, some of the smartest people in AI tell us. In as little as a year or two, artificial intelligence programs may outperform humans in a wide array of important tasks, Anthropic chief executive Dario Amodei recently declared. His prediction, made in his essay The Adolescence of Technology, came as a warning about AI as capable as humans across many domains, a milestone some experts would call artificial general intelligence, or AGI.

Artificial intelligence has advanced by leaps and bounds since generative AI tools like ChatGPT entered the market in 2022. Now, Amodei and others predict we will soon reach the point where machines can outthink humans: the dawn of AGI. 

Here’s what you need to know. 

What is AGI?

OpenAI defines it as AI systems that are generally smarter than humans. But Evitable CEO David Scott Krueger puts it this way: AGI is going to be an AI “able to do anything a human can.” 

What they do agree on is that AGI would represent a fundamentally new kind of AI system.

Large language models (LLMs) are AI systems trained on vast amounts of text to generate human-like responses. They are a type of generative AI, which refers to systems that create new content, such as text, images or code, based on patterns in data; ChatGPT, Claude and Gemini are all examples. AGI, by contrast, would be a form of AI able to understand and learn across many domains, adapting to new tasks much like a human can.
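For readers who want to see the principle in action, here is a deliberately tiny sketch in Python. It is not how ChatGPT, Claude or Gemini actually work; real LLMs are neural networks trained on trillions of words, while this toy merely records which word tends to follow which and samples from those counts. But the core idea of generative AI, learning statistical patterns from data and then generating new content from them, is the same.

```python
import random
from collections import defaultdict

def train(text):
    """Learn a crude "pattern": which words tend to follow which."""
    patterns = defaultdict(list)
    words = text.split()
    for current_word, next_word in zip(words, words[1:]):
        patterns[current_word].append(next_word)
    return patterns

def generate(patterns, start, length=8):
    """Create new text by repeatedly sampling a plausible next word."""
    word, output = start, [start]
    for _ in range(length):
        followers = patterns.get(word)
        if not followers:
            break  # no pattern learned for this word, so stop
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

corpus = "the cat sat on the mat and the dog slept on the rug"
print(generate(train(corpus), "the"))  # e.g. "the cat sat on the rug"
```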

How does it differ from today’s AI systems? 

The latest generative AI systems have “kind of revived” talk that we are closing in on AGI, says Melanie Mitchell, a computer scientist at the Santa Fe Institute. But however smart these models may seem, they are not yet considered AGI.

Despite their sophistication, these systems lack the flexible, autonomous and human-like reasoning that would define AGI. “They’re not able to remain as coherent as humans are over long time horizons,” says Krueger. “They’re not able to navigate the real world.”

Krueger says hallucinations — instances in which models generate information that isn’t true — remain a major limitation and are one reason these systems do not qualify as AGI.

Other current AI systems, particularly narrow, task-oriented ones, illustrate the gap.

Mitchell points to cases involving Tesla’s self-driving software, where drivers reported unexpected braking at the same location. “They noticed there was a billboard on the highway that had an advertisement with a police officer holding up a stop sign,” she says. Because the AI system had not encountered similar situations during training, it struggled to interpret the image correctly. “As humans, we know that a billboard is not a real stop,” says Mitchell, but the AI didn’t know. 

The question is, “how much do they generalize, beyond memorization?” says Raphaël Millière, a philosopher and cognitive scientist at the University of Oxford.
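A toy example, not drawn from Millière’s work but useful for intuition, makes the distinction concrete: imagine two programs trained on the same handful of addition problems, one of which memorizes the answers while the other captures the underlying rule.

```python
# Two toy "models" trained on the same three addition examples.
train_examples = {(1, 2): 3, (2, 2): 4, (3, 5): 8}

def memorizer(a, b):
    """Pure memorization: looks up answers it has seen, fails on anything new."""
    return train_examples.get((a, b))  # returns None for unseen pairs

def generalizer(a, b):
    """Has extracted the underlying rule, so it handles novel inputs."""
    return a + b

print(memorizer(2, 2), generalizer(2, 2))    # 4 4     -- both handle seen data
print(memorizer(40, 2), generalizer(40, 2))  # None 42 -- only the rule generalizes
```

Today’s LLMs clearly do more than look up verbatim answers, but whether they extract rules as robustly as humans do is exactly what researchers like Millière are probing.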

What will AI systems need in order to achieve AGI? 

“That’s the trillion-dollar question,” says Millière. Some people are betting on a concept called continual learning: an AI system’s ability to keep learning from new data and experiences over time without forgetting what it previously learned, “like what humans and animals do,” he says. ChatGPT, by contrast, does not update its core model weights based on what users type.
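In code, the contrast looks something like the hypothetical sketch below. The “model” here is just a word-association table with a trivially simple update rule, not anything a real lab ships, but it shows the difference between frozen weights and continual learning.

```python
class FrozenModel:
    """Like today's deployed chatbots: knowledge is fixed after training."""
    def __init__(self, associations):
        self.associations = dict(associations)  # learned once, never changed

    def respond(self, word):
        return self.associations.get(word, "I don't know")

class ContinualModel(FrozenModel):
    """Hypothetical continual learner: every interaction can update it."""
    def respond(self, word, correction=None):
        reply = super().respond(word)
        if correction is not None:
            # New experience updates the model in place. The hard research
            # problem is doing this at scale without erasing older knowledge
            # (so-called catastrophic forgetting).
            self.associations[word] = correction
        return reply

frozen = FrozenModel({"dog": "bark"})
learner = ContinualModel({"dog": "bark"})
learner.respond("cat", correction="meow")    # teach the learner something new
print(frozen.respond("cat"))   # "I don't know" -- frozen weights never improve
print(learner.respond("cat"))  # "meow" -- the learner kept what it was taught
```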

“Another big one is data efficiency,” Millière says. Current models need an enormous amount of data to learn to do things that humans can learn from very few examples. 

“For example, we asked an image generation model to draw a pelican riding a bicycle, and it could do that perfectly well. When we asked the model to draw a bicycle riding a pelican, however, it wasn’t able to do it. It kept generating a pelican riding a bicycle. When we asked a 7-year-old girl, she drew a very pretty bicycle riding a pelican,” explains Millière. She had never been asked to draw that before, but she could do it immediately. So “the key to truly flexible general intelligence might also be better learning algorithms that can learn more efficiently from sparse data,” he says.

There is another problem, though. Not only is it unclear exactly how AI programs will need to improve in order to win the race to AGI, but no one is even sure where the finish line is. 

How would researchers determine that AGI has been achieved? 

Companies are certainly trying to build it. Krueger says that Meta, Anthropic, OpenAI and others have made it clear that they’re working to develop AGI.

Experts have proposed different tests and benchmarks, but none of them are satisfactory, says Mitchell. “Some experts say that when a machine can go on the internet and figure out how to make a million dollars, that might be AGI. This seems to me like a misguided definition of human-level intelligence,” she says. 

Part of the difficulty, both in recognizing AGI and in predicting its arrival, is that human intelligence itself has no single agreed-upon definition. Without a clear benchmark for what counts as “human-level” intelligence, it becomes difficult to determine when an AI system has truly reached it.

Amodei believes it’s a couple of years away, but not everyone thinks he’s right. Some skeptics argue it may take decades or even centuries to achieve AGI, if we ever get there at all.

According to Millière, predicting the trajectory of AI in general is difficult. “The historical record is littered with confident predictions that turned out to be mistaken,” he says. 

There is also no agreed-upon definition of AGI, making it a moving target. And in terms of engineering, “Researchers fundamentally disagree on what will be needed for AI systems to match the flexibility, generality and efficiency of human intelligence,” Millière says.

Why would we want AGI, anyway? 

AGI systems could, at least in principle, do useful things that current AI cannot. A true AGI system would not mistake a billboard for a real stop sign. Mitchell offers another example: a robot designed to load a dishwasher. “We have robots that already know how to do that,” she says.

But what if an unexpected situation arises, say, a dog wandering over and licking the dishes? A system able to recognize the change in context and decide to rinse the dishes again would illustrate the kind of flexible, adaptive intelligence associated with AGI, rather than the rigid task execution of conventional AI.

AGI could transform humans’ lives in significant ways. “I think AI systems are poised to automate significant aspects of the research process and to accelerate the pace of scientific discovery,” Millière says. In his opinion, this is one of the most promising candidates for a genuinely beneficial application of advanced AI.

Sounds good, so why are many experts so worried about AGI?

Concern about AGI is not new. In 2023, the Future of Life Institute published an open letter calling on AI labs to pause, for at least six months, the training of systems more powerful than GPT-4. Yoshua Bengio, Elon Musk and Steve Wozniak were among its more than 30,000 signatories.

The same year, the Center for AI Safety went further, publishing a statement, signed by Bengio, Sam Altman and Geoffrey Hinton, that mitigating the “risk of extinction from AI” should be a global priority. Hinton also left Google in 2023, in part to speak more freely about these risks, warning of a loss of control and the possibility that AI could surpass human intelligence.

What, exactly, are they worried about?

Central issues include alignment, the question of whether highly capable systems would reliably follow human intentions, and interpretability, since increasingly complex models may behave in ways that are difficult for humans to understand or predict.

One of the potential risks, Mitchell says, is that humans may overestimate the intelligence of these systems and let them make decisions they’re not really capable of making.

The more authority and autonomy we give to thinking machines, the less we will have. “It is clear that once we reach AGI, humans might no longer be in control, and we could face a disempowerment of humanity,” says Krueger.

Amodei has also warned that increasingly capable systems may be difficult to control. He writes in his essay that “humanity is about to be handed almost unimaginable power, and it is deeply unclear whether our social, political and technological systems possess the maturity to wield it.”

While views differ on AGI’s plausibility and timeline, uncertainty remains central to the debate.

About the Author

Alissa de Chassey

Alissa de Chassey is a French science journalist based in New York City. She previously worked for Le Figaro and covers biology, health, and AI.
