New software enables blind and low-vision users to create interactive, accessible charts
The Umwelt interface. A) The data, visual, and audio tabs of the editor. B) The editor's fields tab, where users specify field definitions and encodings. C) The viewer, where users analyze data with interactive multimodal data representations. Credit: arXiv (2024). DOI: 10.48550/arxiv.2403.00106

A growing number of tools enable users to make online data representations, like charts, that are accessible for people who are blind or have low vision. However, most tools require an existing visual chart that can then be converted into an accessible format.

This creates barriers that prevent blind and low-vision users from building their own custom data representations, and it can limit their ability to explore and analyze important information.

A team of researchers from MIT and University College London (UCL) wants to change the way people think about accessible data representations.

They created a software system called Umwelt (which means "environment" in German) that can enable blind and low-vision users to build customized, multimodal data representations without needing an initial visual chart.

Umwelt, an authoring environment designed for screen-reader users, incorporates an editor that allows someone to upload a dataset and create a customized representation, such as a scatterplot, that can include three modalities: visualization, textual description, and sonification. Sonification involves converting data into nonspeech audio.
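The core idea behind sonification can be illustrated with a minimal sketch: numeric data values are mapped onto an audible parameter such as pitch. This is purely illustrative, assuming a simple linear pitch scale; Umwelt's actual audio encodings are not described in this article.

```python
def sonify(values, f_min=220.0, f_max=880.0):
    """Map each numeric value to a tone frequency (Hz) on a linear scale.

    Hypothetical sketch only: a real sonification system would also
    choose tone durations, timing, and synthesis, not just pitch.
    """
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0  # avoid division by zero for constant data
    return [f_min + (v - lo) / span * (f_max - f_min) for v in values]

# Data points become an ordered sequence of tones, played one at a time.
print(sonify([0, 5, 10]))  # → [220.0, 550.0, 880.0]
```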

The system, which can represent a variety of data types, includes a viewer that enables a blind or low-vision user to interactively explore a data representation, seamlessly switching between each modality to interact with data in a different way.

The researchers conducted a study with five expert screen-reader users who found Umwelt to be useful and easy to learn. In addition to offering an interface that empowered them to create data representations—something they said was sorely lacking—the users said Umwelt could facilitate communication between people who rely on different senses.

"We have to remember that blind and low-vision people aren't isolated. They exist in these contexts where they want to talk to other people about data," says Jonathan Zong, an electrical engineering and computer science (EECS) graduate student and lead author of a paper introducing Umwelt.

"I am hopeful that Umwelt helps shift the way that researchers think about accessible data analysis. Enabling the full participation of blind and low-vision people in data analysis involves seeing visualization as just one piece of this bigger, multisensory puzzle."

Joining Zong on the paper are fellow EECS graduate students Isabella Pedraza Pineros and Mengzhu "Katie" Chen; Daniel Hajas, a UCL researcher who works with the Global Disability Innovation Hub; and senior author Arvind Satyanarayan, associate professor of computer science at MIT who leads the Visualization Group in the Computer Science and Artificial Intelligence Laboratory.

The paper will be presented at the ACM Conference on Human Factors in Computing Systems (CHI 2024), held May 11–16 in Honolulu. The findings are published on the arXiv preprint server.

De-centering visualization

The researchers previously developed interactive interfaces that provide a richer experience for screen-reader users as they explore accessible data representations. Through that work, they realized most tools for creating such representations involve converting existing visual charts.

Aiming to decenter visual representations in data analysis, Zong and Hajas, who lost his sight at age 16, began co-designing Umwelt more than a year ago.

At the outset, they realized they would need to rethink how to represent the same data using visual, auditory, and textual forms.

"We had to put a common denominator behind the three modalities. By creating this new language for representations, and making the output and input accessible, the whole is greater than the sum of its parts," says Hajas.

To build Umwelt, they first considered what is unique about the way people use each sense.

For instance, a sighted user can see the overall pattern of a scatterplot and, at the same time, move their eyes to focus on different data points. But for someone listening to a sonification, the experience is linear, since data are converted into tones that must be played back one at a time.

"If you are only thinking about directly translating visual features into nonvisual features, then you miss out on the unique strengths and weaknesses of each modality," Zong adds.

They designed Umwelt to offer flexibility, enabling a user to switch between modalities easily when one would better suit their task at a given time.

To use the editor, one uploads a dataset to Umwelt, which employs heuristics to automatically create default representations in each modality.

If the dataset contains stock prices for companies, Umwelt might generate a multiseries line chart, a textual structure that groups data by ticker symbol and date, and a sonification that uses tone length to represent the price for each date, arranged by ticker symbol.
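The stock-price example above can be sketched as a simple type-driven heuristic: the system inspects each field's type and derives a default spec per modality. Everything here—the function, the spec keys, and the field names—is a hypothetical illustration, not Umwelt's actual heuristics or data model.

```python
def default_spec(fields):
    """Derive illustrative per-modality defaults from field types.

    fields: dict mapping field name -> 'quantitative' | 'nominal' | 'temporal'.
    Hypothetical sketch: Umwelt's real heuristics are richer than this.
    """
    temporal = [n for n, t in fields.items() if t == "temporal"]
    nominal = [n for n, t in fields.items() if t == "nominal"]
    quant = [n for n, t in fields.items() if t == "quantitative"]
    if temporal and nominal and quant:
        return {
            # Multiseries line chart: one line per nominal category.
            "visual": {"mark": "line", "x": temporal[0], "y": quant[0],
                       "series": nominal[0]},
            # Textual structure grouped by category, then date.
            "text": {"group_by": [nominal[0], temporal[0]], "value": quant[0]},
            # Sonification: tone length encodes the quantitative value.
            "audio": {"tone_length": quant[0],
                      "order": [nominal[0], temporal[0]]},
        }
    return {"visual": {"mark": "point"}, "text": {}, "audio": {}}

spec = default_spec({"date": "temporal", "ticker": "nominal",
                     "price": "quantitative"})
```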

The default heuristics are intended to help the user get started.

"In any kind of creative tool, you have a blank-slate effect where it is hard to know how to begin. That is compounded in a multimodal tool because you have to specify things in three different representations," Zong says.

The editor links interactions across modalities, so if a user changes the textual description, that information is adjusted in the corresponding sonification. Someone could use the editor to build a multimodal representation, switch to the viewer for an initial exploration, then return to the editor to make adjustments.
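One common way to achieve this kind of linking is a shared specification that every modality derives from, with an observer pattern propagating edits. The class and field names below are hypothetical; this is a sketch of the general technique, not Umwelt's implementation.

```python
class MultimodalSpec:
    """Shared spec: editing one encoding notifies every modality view."""

    def __init__(self):
        self.encodings = {}   # field name -> channel assignment
        self.listeners = []   # callbacks for each modality view

    def subscribe(self, listener):
        self.listeners.append(listener)

    def set_encoding(self, field, channel):
        # A single edit updates the shared state; all modalities re-derive.
        self.encodings[field] = channel
        for notify in self.listeners:
            notify(dict(self.encodings))

updates = []
spec = MultimodalSpec()
spec.subscribe(lambda enc: updates.append(("sonification", enc)))
spec.set_encoding("price", "tone_length")
# The sonification view was notified of the new encoding.
```

With this structure, the viewer and editor can operate on the same spec object, so switching between them requires no conversion step.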

Helping users communicate about data

To test Umwelt, they created a diverse set of multimodal representations, from scatterplots to multiview charts, to ensure the system could effectively represent different data types. Then they put the tool in the hands of five expert screen-reader users.

Study participants mostly found Umwelt to be useful for creating, exploring, and discussing data representations. One user said Umwelt was like an "enabler" that decreased the time it took them to analyze data. The users agreed that Umwelt could help them communicate about data more easily with sighted colleagues.

"What stands out about Umwelt is its core philosophy of de-emphasizing the visual in favor of a balanced, multisensory data experience. Often, nonvisual data representations are relegated to the status of secondary considerations, mere add-ons to their visual counterparts. However, visualization is merely one aspect of data representation.

"I appreciate their efforts in shifting this perception and embracing a more inclusive approach to data science," says JooYoung Seo, an assistant professor in the School of Information Sciences at the University of Illinois at Urbana-Champaign, who was not involved with this work.

Moving forward, the researchers plan to create an open-source version of Umwelt that others can build upon. They also want to integrate tactile sensing into the software system as an additional modality, enabling the use of tools like refreshable tactile graphics displays.

"In addition to its impact on end users, I am hoping that Umwelt can be a platform for asking scientific questions around how people use and perceive multimodal representations, and how we can improve the design beyond this initial step," says Zong.



More information: Jonathan Zong et al, Umwelt: Accessible Structured Editing of Multimodal Data Representations, arXiv (2024). DOI: 10.48550/arxiv.2403.00106