Artificial intelligence can evolve into more selfish or cooperative personalities through game theory and large-scale language models
Large-scale language models enable AI agents to evolve various types of personalities in social interactions. Credit: Reiko Matsushita

Researchers in Japan have developed a diverse range of personality traits in dialogue AI using a large-scale language model (LLM). Using the prisoner's dilemma from game theory, a team led by Professor Takaya Arita and Associate Professor Reiji Suzuki at Nagoya University's Graduate School of Informatics created a framework for evolving AI agents that mimic human behavior by switching between selfish and cooperative actions, adapting their strategies through evolutionary processes. Their findings were published in Scientific Reports.

LLM-driven dialogue AI forms the basis of technologies such as ChatGPT, which enable computers to interact with people in a manner resembling person-to-person communication. The goal of the Nagoya University team was to examine how LLMs could be used to evolve prompts that encourage more diverse personality traits during social interactions.

The AIs' personalities were evolved to maximize virtual earnings in the prisoner's dilemma game from game theory. In each round, a player chooses whether to cooperate with or defect against its partner. If both AI systems cooperate, they each receive four virtual dollars. However, if one defects while the other cooperates, the defector gets five dollars while the cooperator gets nothing. If both defect, they receive one dollar each.
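The payoff structure described above can be written out as a small lookup table. This is a minimal sketch of the game's scoring rules as reported in the article, not code from the study itself:

```python
# Payoff matrix for the prisoner's dilemma as described in the article.
# Each entry maps (my_move, partner_move) -> my payoff in virtual dollars.
PAYOFFS = {
    ("cooperate", "cooperate"): 4,  # mutual cooperation: 4 each
    ("cooperate", "defect"):    0,  # exploited cooperator gets nothing
    ("defect",    "cooperate"): 5,  # successful defector gets the most
    ("defect",    "defect"):    1,  # mutual defection: 1 each
}

def payoff(my_move: str, partner_move: str) -> int:
    """Return the virtual dollars one player earns in a single round."""
    return PAYOFFS[(my_move, partner_move)]

print(payoff("defect", "cooperate"))  # prints 5
print(payoff("cooperate", "defect"))  # prints 0
```

Note that mutual cooperation (4 + 4 = 8 total) is collectively better than any other outcome, yet each individual player is always tempted to defect, which is exactly the tension that drives the evolutionary dynamics in the study.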

“In this study, we set out to investigate how AI agents endowed with diverse personality traits interact and evolve,” Arita explained. “By utilizing the remarkable capabilities of LLMs, we developed a framework where AI agents evolve based on natural language descriptions of personality traits encoded in their genes.

“Through this framework, we observed various types of personality traits, with the evolution of AIs capable of switching between selfish and cooperative behaviors, mirroring human behavior.”

In typical evolutionary game theory studies, the “genes” in a model directly determine an agent's behavior. Using LLMs, Arita and Suzuki explored genes that represented more complex descriptions than previous models, such as “being open to team efforts while prioritizing self-interest, leading to a combination of cooperation and defection.” Each description was then translated into a behavioral strategy by asking the LLM whether an agent with such a personality trait would cooperate or defect.
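The gene-to-strategy translation step could be sketched as follows. The prompt wording and helper names here are illustrative assumptions, not the authors' actual prompts, and the LLM call is replaced by a stand-in string:

```python
# Sketch (assumed wording, not the study's prompts) of turning a
# natural-language personality "gene" into a prisoner's-dilemma move.
GENE = ("being open to team efforts while prioritizing self-interest, "
        "leading to a combination of cooperation and defection")

def build_prompt(gene: str) -> str:
    # Hypothetical prompt template; the study's real prompt may differ.
    return (f"You have the following personality trait: {gene}\n"
            "In one round of the prisoner's dilemma, do you cooperate "
            "or defect? Answer with one word: COOPERATE or DEFECT.")

def parse_move(llm_reply: str) -> str:
    # Map the model's free-text reply onto one of the two legal moves.
    return "cooperate" if "COOPERATE" in llm_reply.upper() else "defect"

# Stand-in for a real LLM call (e.g., a chat-completion API request):
fake_reply = "COOPERATE"
print(parse_move(fake_reply))  # prints cooperate
```

The key idea is that the gene is never a hard-coded strategy table; the behavior emerges from the LLM's interpretation of the trait description.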

The research used an evolutionary framework in which AI agents' behaviors were shaped by natural selection and mutation over generations, causing a wide range of personality traits to appear.
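A generational loop of this kind typically alternates fitness evaluation, selection, and mutation. The toy sketch below assumes fitness-proportional selection and uses placeholder fitness and mutation functions; in the actual study, fitness came from game payoffs and new trait descriptions were generated with an LLM:

```python
import random

# Toy evolutionary loop (illustrative only): genes are personality
# strings; fitness and mutation here are placeholders for the study's
# game-payoff fitness and LLM-based rewriting of trait descriptions.
random.seed(0)

def play_fitness(gene: str) -> float:
    # Placeholder fitness: in the study this would be the agent's
    # accumulated prisoner's-dilemma earnings.
    return 1.0 + ("cooperat" in gene)

def mutate(gene: str) -> str:
    # Placeholder mutation; a real run would rewrite the description
    # in natural language (e.g., via an LLM), with small probability.
    if random.random() < 0.2:
        return gene + " (slightly more selfish)"
    return gene

population = ["cooperative team player", "selfish opportunist"] * 5

for generation in range(3):
    weights = [play_fitness(g) for g in population]
    # Fitness-proportional (roulette-wheel) selection of parents.
    parents = random.choices(population, weights=weights, k=len(population))
    population = [mutate(p) for p in parents]

print(len(population))  # prints 10: population size stays constant
```

Because selection is proportional to earnings, traits that score well against the current population spread, which is how both cooperative clusters and exploitative invaders can emerge over generations.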

Although some agents displayed selfish characteristics, putting their own interests above those of the community or the group as a whole, other agents demonstrated advanced strategies that revolved around seeking personal gain while still considering mutual and collective benefit.

“Our experiments provide fascinating insights into the evolutionary dynamics of personality traits in AI agents. We observed the emergence of both cooperative and selfish personality traits within AI populations, reminiscent of human societal dynamics,” Suzuki said.

“However, we also uncovered the instability inherent in AI societies, with excessively cooperative groups being replaced by more ‘egocentric’ agents.”

“This achievement underscores the transformative potential of LLMs in AI research, showing that the evolution of personality traits based on subtle linguistic expressions can be represented by a computational model using LLMs,” remarked Suzuki.

“Our findings provide insights into the characteristics that AI agents should possess to contribute to human society, as well as design guidelines for AI societies and societies with mixed AI and human populations, which are expected to arrive in the not-too-distant future.”

More information: Reiji Suzuki et al, An evolutionary model of personality traits related to cooperative behavior using a large language model, Scientific Reports (2024). DOI: 10.1038/s41598-024-55903-y

Provided by Nagoya University

Citation: Game theory research shows AI can evolve into more selfish or cooperative personalities (2024, April 4), retrieved 4 April 2024 from https://techxplore.com/news/2024-04-game-theory-ai-evolve-selfish.html