
With the AI summit well underway, researchers are keen to raise a very real problem associated with the technology: teaching it how to forget.

Society is now abuzz with modern AI and its exceptional capabilities; we are constantly reminded of its potential benefits, across so many areas, permeating practically all facets of our lives, but also of its dangers.

In an emerging field of research, scientists are highlighting an important weapon in our arsenal for mitigating the risks of AI: "machine unlearning." They are helping to figure out new ways of making AI models known as deep neural networks (DNNs) forget data that poses a risk to society.

The problem is that re-training AI programs to "forget" data is an expensive and arduous task. Modern DNNs, such as those based on "Large Language Models" (like ChatGPT, Bard, etc.), require massive resources to train and take weeks or months to do so. They also require tens of gigawatt-hours of energy for every training run, with some research estimating as much energy as it takes to power thousands of households for a year.
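To put those figures in perspective, a quick back-of-the-envelope check shows how tens of gigawatt-hours translate into household-years. The household consumption figure below is an illustrative assumption, not a number from the article:

    # Illustrative arithmetic only; the 10,000 kWh/year household figure is an assumption.
    training_energy_gwh = 10                     # lower end of "tens of gigawatt-hours"
    household_kwh_per_year = 10_000              # assumed annual consumption of a typical household

    training_energy_kwh = training_energy_gwh * 1_000_000   # 1 GWh = 1,000,000 kWh
    households_for_one_year = training_energy_kwh / household_kwh_per_year
    print(households_for_one_year)               # 1000.0; several tens of GWh would power a few thousand households for a year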

Machine unlearning is a burgeoning field of research that could remove troublesome data from DNNs quickly, cheaply and using fewer resources. The goal is to do so while continuing to ensure high accuracy. Computer science experts at the University of Warwick, in collaboration with Google DeepMind, are at the forefront of this research.

Professor Peter Triantafillou, Department of Computer Science, University of Warwick, recently co-authored a publication, "Towards Unbounded Machine Unlearning," which appears on the pre-print server arXiv. He said, "DNNs are extremely complex structures, comprised of up to trillions of parameters. Often, we lack a solid understanding of exactly how and why they achieve their goals. Given their complexity, and the complexity and size of the datasets they are trained on, DNNs may be harmful to society.

"DNNs may be harmful, for example, by being trained on data with biases, thus propagating negative stereotypes. The data might reflect existing prejudices, stereotypes and faulty societal assumptions, such as a bias that doctors are male and nurses are female, or even racial prejudices.

"DNNs might also contain data with 'erroneous annotations', that is, incorrectly labeled items, such as an image wrongly labeled as being a deepfake or not.

"Alarmingly, DNNs may be trained on data that violates the privacy of individuals. This poses a huge challenge to mega-tech companies, with significant legislation in place (for example, GDPR) that aims to safeguard the right to be forgotten: that is, the right of any individual to request that their data be deleted from any dataset and AI program.

"Our recent research has derived a new 'machine unlearning' algorithm that ensures DNNs can forget dodgy data without compromising overall AI performance. The algorithm can be introduced to the DNN, causing it to specifically forget the data we need it to, without having to re-train it entirely from scratch. It is the only work to differentiate the needs, requirements, and metrics for success among the three different types of data that need to be forgotten: biases, erroneous annotations and issues of privacy.

"Machine unlearning is an exciting field of research that can be an important tool towards mitigating the risks of AI."
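The paper itself is the place to go for the details of the algorithm, but the general idea behind fine-tuning-based unlearning can be sketched in a few lines. The following is a hypothetical PyTorch-style illustration, not the authors' method: the model is updated so that it keeps its accuracy on a "retain" set while its fit on a "forget" set is deliberately degraded. The function name, arguments and loss formulation are all assumptions for illustration.

    import torch.nn.functional as F

    def unlearning_step(model, optimizer, retain_batch, forget_batch, forget_weight=1.0):
        """One illustrative unlearning update: preserve performance on retained data
        while pushing the model away from the examples it should forget.
        `model` is assumed to be a torch.nn.Module classifier, `optimizer` a torch optimizer."""
        model.train()
        optimizer.zero_grad()

        # Ordinary loss on the data the model should keep handling well.
        retain_x, retain_y = retain_batch
        retain_loss = F.cross_entropy(model(retain_x), retain_y)

        # Negated (gradient-ascent) loss on the forget set, so the model
        # stops fitting those examples.
        forget_x, forget_y = forget_batch
        forget_loss = F.cross_entropy(model(forget_x), forget_y)

        total_loss = retain_loss - forget_weight * forget_loss
        total_loss.backward()
        optimizer.step()
        return retain_loss.item(), forget_loss.item()

Published methods are considerably more careful than this; the paper above, for instance, distinguishes how success is measured for biases, erroneous annotations and privacy violations, which a sketch like this does not capture.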
