Tech companies want to build artificial general intelligence. But who decides when AGI is attained?
Dr. Pei Wang teaches an artificial general intelligence class at Temple University in Philadelphia, Thursday, Feb. 1, 2024. Mainstream AI research “turned away from the original vision of artificial intelligence, which at the beginning was pretty ambitious,” said Wang. Credit: AP Photo/Matt Rourke

There’s a race underway to build artificial general intelligence, a futuristic vision of machines that are as broadly smart as humans or at least can do many things as well as people can.

Achieving such a concept—commonly referred to as AGI—is the driving mission of ChatGPT-maker OpenAI and a priority for the elite research wings of tech giants Amazon, Google, Meta and Microsoft.

It’s also a cause for concern for world governments. Leading AI scientists published research Thursday in the journal Science warning that unchecked AI agents with “long-term planning” skills could pose an existential risk to humanity.

But what exactly is AGI and how will we know when it’s been attained? Once on the fringe of computer science, it’s now a buzzword that’s being constantly redefined by those trying to make it happen.

What is AGI?

Not to be confused with the similar-sounding generative AI—which describes the AI systems behind the crop of tools that “generate” new documents, images and sounds—artificial general intelligence is a more nebulous idea.

It’s not a technical term but “a serious, though ill-defined, concept,” said Geoffrey Hinton, a pioneering AI scientist who’s been dubbed a “Godfather of AI.”

“I don’t think there is agreement on what the term means,” Hinton said by email this week. “I use it to mean AI that is at least as good as humans at nearly all of the cognitive things that humans do.”

Hinton prefers a different term—superintelligence—“for AGIs that are better than humans.”

A small group of early proponents of the term AGI were looking to evoke how mid-20th century computer scientists envisioned an intelligent machine. That was before AI research branched into subfields that advanced specialized and commercially viable versions of the technology—from face recognition to speech-recognizing voice assistants like Siri and Alexa.

Mainstream AI research “turned away from the original vision of artificial intelligence, which at the beginning was pretty ambitious,” said Pei Wang, a professor who teaches an AGI course at Temple University and helped organize the first AGI conference in 2008.

Putting the ‘G’ in AGI was a signal to those who “still want to do the big thing. We don’t want to build tools. We want to build a thinking machine,” Wang said.


Are we at AGI yet?

Without a clear definition, it’s hard to know when a company or group of researchers will have achieved artificial general intelligence—or if they already have.

“Twenty years ago, I think people would have happily agreed that systems with the ability of GPT-4 or (Google’s) Gemini had achieved general intelligence comparable to that of humans,” Hinton said. “Being able to answer more or less any question in a sensible way would have passed the test. But now that AI can do that, people want to change the test.”

Improvements in “autoregressive” AI techniques that predict the most plausible next word in a sequence, combined with massive computing power to train those systems on troves of data, have led to impressive chatbots, but they’re still not quite the AGI that many people had in mind. Getting to AGI requires technology that can perform just as well as humans in a wide variety of tasks, including reasoning, planning and the ability to learn from experiences.

Some researchers would like to find consensus on how to measure it. It’s one of the topics of an upcoming AGI workshop next month in Vienna, Austria—the first at a major AI research conference.

“This really needs a community’s effort and attention so that mutually we can agree on some sort of classifications of AGI,” said workshop organizer Jiaxuan You, an assistant professor at the University of Illinois Urbana-Champaign. One idea is to segment it into levels in the same way that carmakers try to benchmark the path between cruise control and fully self-driving vehicles.

Others plan to figure it out on their own. San Francisco company OpenAI has given its nonprofit board of directors—whose members include a former U.S. Treasury secretary—the responsibility of deciding when its AI systems have reached the point at which they “outperform humans at most economically valuable work.”

“The board determines when we’ve attained AGI,” says OpenAI’s own explanation of its governance structure. Such an achievement would cut off the company’s biggest partner, Microsoft, from the rights to commercialize such a system, since the terms of their agreements “only apply to pre-AGI technology.”

Is AGI dangerous?

Hinton made global headlines last year when he quit Google and sounded a warning about AI’s existential dangers. A new Science study published Thursday could reinforce those concerns.

Its lead author is Michael Cohen, a University of California, Berkeley researcher who studies the “expected behavior of generally intelligent artificial agents,” particularly those competent enough to “present a real threat to us by out planning us.”

Cohen made clear in an interview Thursday that such long-term AI planning agents don’t yet exist. But “they have the potential to be” as tech companies seek to combine today’s chatbot technology with more deliberate planning skills using a technique known as reinforcement learning.

Pioneering AI scientist Geoffrey Hinton poses at Google’s Mountain View, Calif., headquarters on March 25, 2015. Credit: AP Photo/Noah Berger, File

“Giving an advanced AI system the objective to maximize its reward and, at some point, withholding reward from it, strongly incentivizes the AI system to take humans out of the loop, if it has the opportunity,” according to the paper, whose co-authors include prominent AI scientists Yoshua Bengio and Stuart Russell and law professor and former OpenAI adviser Gillian Hadfield.

“I hope we’ve made the case that people in government decide to start thinking seriously about exactly what regulations we need to address this problem,” Cohen said. For now, “governments only know what these companies decide to tell them.”

Too legit to quit AGI?

With so much money riding on the promise of AI advances, it’s no surprise that AGI is also becoming a corporate buzzword that sometimes attracts a quasi-religious fervor.

It’s divided some of the tech world between those who argue it should be developed slowly and carefully and others—including venture capitalists and rapper MC Hammer—who’ve declared themselves part of an “accelerationist” camp.

The London-based startup DeepMind, founded in 2010 and now part of Google, was one of the first companies to explicitly set out to develop AGI. OpenAI followed in 2015 with a safety-focused pledge.

But now it might seem that everyone else is jumping on the bandwagon. Google co-founder Sergey Brin was recently seen hanging out at a California venue called the AGI House. And less than three years after changing its name from Facebook to focus on virtual worlds, Meta Platforms in January revealed that AGI was also on the top of its agenda.

Meta CEO Mark Zuckerberg said his company’s long-term goal was “building full general intelligence” that would require advances in reasoning, planning, coding and other cognitive abilities. While Zuckerberg’s company has long had researchers focused on those subjects, his attention marked a change in tone.

At Amazon, one sign of the new messaging was when the head scientist for the voice assistant Alexa switched job titles to become head scientist for AGI.

While not as tangible to Wall Street as generative AI, broadcasting AGI ambitions may help recruit AI talent who have a choice in where they want to work.

In deciding between an “old-school AI institute” or one whose “goal is to build AGI” and has sufficient resources to do so, many would choose the latter, said You, the University of Illinois researcher.