Pixel perfect: Engineers' new approach brings images into focus
Credit: Whiting School of Engineering

Johns Hopkins researchers have developed an efficient new method to turn blurry images into clear, sharp ones. Called Progressively Deblurring Radiance Field (PDRF), this approach deblurs images 15 times faster than previous methods while also achieving better results on both synthetic and real scenes.

“Oftentimes, images are blurry because autofocus doesn’t work properly, or the camera or the subject moves. Our method allows you to transform those blurry images into something clear and three-dimensional,” said Cheng Peng, a postdoctoral fellow in Johns Hopkins’ Artificial Intelligence for Engineering and Medicine Lab.

“Applications could include everything from virtual and augmented reality to 3D scanning for e-commerce to movie production to robotic navigation systems—not to mention just being used to sharpen and deblur personal photos and videos.”

Peng worked with advisor Rama Chellappa, a Bloomberg Distinguished Professor in electrical and computer engineering and biomedical engineering, on the project. Their results appear in the Proceedings of the 37th Annual AAAI Conference on Artificial Intelligence.

 

Credit: Johns Hopkins University

Typically, the process of deblurring images involves two steps. First, the system estimates the positions of the cameras that took the blurry images, which allows it to place the 2D images within the 3D scene. Next, the system reconstructs a more detailed 3D model of the scene pictured in the images. While generally effective, these traditional methods have limitations, often resulting in artifacts—distortions and anomalies—and incomplete reconstructions. Neural Radiance Field (NeRF), a recent development in 3D image reconstruction, achieves photorealistic results, but only if the input images are of good quality.
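To make that pipeline concrete, the sketch below shows, in heavily simplified PyTorch-style code, how a NeRF-style method turns known camera rays into pixel colors: a small network predicts color and density at sampled 3D points, and volume rendering composites them along each ray. The network size, sampling bounds, and function names are illustrative assumptions, not the implementation used in the paper.

# Minimal NeRF-style sketch: camera poses are assumed to be known already
# (for example, estimated beforehand with a structure-from-motion tool).
import torch
import torch.nn as nn

class TinyRadianceField(nn.Module):
    """Maps a 3D point to an RGB color and a density value."""
    def __init__(self, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),            # 3 color channels + 1 density
        )

    def forward(self, points):               # points: (N, S, 3)
        raw = self.mlp(points)
        rgb = torch.sigmoid(raw[..., :3])    # colors constrained to [0, 1]
        sigma = torch.relu(raw[..., 3])      # non-negative density
        return rgb, sigma

def render_rays(field, origins, directions, near=2.0, far=6.0, n_samples=64):
    """Volume-render a batch of rays into pixel colors."""
    t = torch.linspace(near, far, n_samples)                             # sample depths
    points = origins[:, None, :] + t[None, :, None] * directions[:, None, :]
    rgb, sigma = field(points)                                           # (N, S, 3), (N, S)
    delta = t[1] - t[0]
    alpha = 1.0 - torch.exp(-sigma * delta)                              # per-sample opacity
    trans = torch.cumprod(torch.cat(
        [torch.ones_like(alpha[:, :1]), 1.0 - alpha + 1e-10], dim=-1), dim=-1)[:, :-1]
    weights = alpha * trans                                              # compositing weights
    return (weights[..., None] * rgb).sum(dim=1)                         # (N, 3) pixel colors

Training such a model amounts to comparing these rendered pixel colors against the captured photos, which is exactly why blurry inputs degrade the result: the model is asked to reproduce the blur itself.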

In contrast, PDRF can provide clear, clean images even from low-quality input images. The secret, Peng said, is that the new approach not only detects and reduces blur in input photos but also sharpens those images using what the team calls a “Progressive Blur Estimation module” before it creates 3D reconstructions of images or scenes.

“PDRF is based on neural networks and offers a fast self-supervised technique that learns from the inputted images themselves and does not require manually inputted training data. Remarkably, it addresses various types of degradation, including camera shake, object movement, and out-of-focus scenarios, showcasing its versatility,” he said. “In other words, we designed it to handle real-world situations and images.”
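A rough idea of how such a self-supervised, blur-aware objective can work is sketched below; it reuses the render_rays helper from the previous sketch. A small learned kernel mixes several slightly offset sharp renders to imitate the blur in a given photo, and the only training signal is how well that blurred re-render matches the blurry input. This is a conceptual illustration of the idea described above, not the authors’ exact Progressive Blur Estimation module; the class names, kernel size, and per-image parameters are assumptions.

# Conceptual blur-aware, self-supervised objective (simplified; not PDRF itself).
import torch
import torch.nn as nn

class LearnedBlurKernel(nn.Module):
    """Predicts per-image mixing weights and ray-origin offsets that imitate blur."""
    def __init__(self, n_taps=5, n_images=30):
        super().__init__()
        self.weights = nn.Parameter(torch.zeros(n_images, n_taps))     # mixing weights (softmaxed)
        self.offsets = nn.Parameter(torch.zeros(n_images, n_taps, 3))  # small ray-origin shifts

    def forward(self, image_idx):
        return torch.softmax(self.weights[image_idx], dim=-1), self.offsets[image_idx]

def blur_aware_loss(field, kernel, image_idx, origins, directions, blurry_pixels):
    """Simulate blur by mixing several sharp renders, then match the blurry photo."""
    mix, offsets = kernel(image_idx)                     # (T,), (T, 3)
    simulated = 0.0
    for tap in range(offsets.shape[0]):
        # render_rays is the helper from the sketch above
        sharp = render_rays(field, origins + offsets[tap], directions)  # (N, 3)
        simulated = simulated + mix[tap] * sharp
    # Self-supervised objective: the blurred re-render should match the blurry input.
    return torch.mean((simulated - blurry_pixels) ** 2)

Because the blur is simulated on top of the sharp radiance field rather than baked into it, the field itself converges toward a clean reconstruction even though no sharp ground-truth images are ever provided.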

For instance, Peng and his team are working with researchers in the Department of Dermatology at the Johns Hopkins School of Medicine to use the new 3D modeling technology to enhance the detection of skin tumors, particularly those caused by neurofibromatosis, a condition in which tumors involve the brain, spinal cord, and nerves.

“In cases of neurofibromatosis, traditional measurement methods often prove challenging due to the tumors’ soft and deformable nature,” said Peng. “Our ongoing project seeks to address this by creating precise 3D models, allowing for accurate analysis of tumor volume, position, and quantity. This innovative approach holds particular promise in telemedicine or telehealth scenarios, where patients can use their own cameras to capture affected areas, helping to improve diagnostic accuracy.”

PDRF has been recognized by the Intelligence Advanced Research Projects Activity’s (IARPA) Walk-Through Rendering of Images of Varying Altitude (WRIVA) program, which aims to develop software systems to perform site modeling in scenarios where a limited volume of ground-level imagery with reliable metadata is available.

“Contracts like this allow us to apply these methods on a larger, city-wide scale. That is where we see the future direction of this going, which is large-scale reconstruction, and gets more into the mixed reality direction,” he said. “In the future, people will be able to explore faraway lands and cities in 3D based on 2D images captured by even just amateur photographers.”

More information: Cheng Peng et al, PDRF: Progressively Deblurring Radiance Field for Fast Scene Reconstruction from Blurry Images, Proceedings of the AAAI Conference on Artificial Intelligence (2023). DOI: 10.1609/aaai.v37i2.25295

Provided by Johns Hopkins University