ACCESSIBILITY ENABLES EQUALITY

The Drive Toward Accessible Devices and Research Leads SCS to Improvements for All

CHRIS QUIRK

If a technological device or application failed one out of four times, most reviewers would deem it a flop. But if a quarter of the population couldn't use a device or application, would the verdict be the same?

According to the Centers for Disease Control and Prevention, 61 million adults in the U.S. — around one in four — live with some form of disability, and too often technological products fail them. And while no single product causes headaches for everyone living with a disability, the widespread lack of accessible features on apps, devices and webpages means that people with disabilities face disproportionate barriers to technology and the needs it can fulfill.

Researchers at Carnegie Mellon University’s Human-Computer Interaction Institute (HCII) want to change that.

“If we care about human-computer interaction, accessibility is a fundamental part of that. More than 20% of people have some sort of disability, and we'll all have a disability if we live long enough,” said Jeffrey Bigham, an associate professor in the HCII. “Accessibility is squarely at the intersection of computer science with people. The HCII was founded on the belief that we need a place with people who are comfortable working at that intersection. That’s why accessibility is a core part of human-computer interaction and the HCII.”

Last year, a Pew Research Center analysis found that people with disabilities are less likely to use a computer or smartphone. Only 62% of adults with a disability own a computer — compared with 81% of those without a disability — and smartphone ownership among people with disabilities similarly lagged. Technology gaps like these mean that disabled people are more likely to be excluded from economic and social opportunity.

“Accessibility is incredibly important to understanding human ability and its diversity,” said Patrick Carrington, an assistant professor in the HCII. “We also get to apply many of the principles of good design and creativity in situations that will critically benefit people and potentially change lives.”

Accessibility is squarely at the intersection of computer science with people. The HCII was founded on the belief that we need a place with people who are comfortable working at that intersection. That’s why accessibility is a core part of human-computer interaction and the HCII.
— Jeffrey Bigham, Associate Professor in HCII
 

Using the Eyes with EyeMU

One group of HCII researchers has built a tool called EyeMU, which allows users to execute operations on a smartphone through gaze control. The tool could help users with limited dexterity control apps on their phones without ever touching the screen. Gaze analysis and prediction aren't new, but achieving an acceptable level of functionality on a smartphone would be a noteworthy advance.

Chris Harrison, Associate Professor in HCII

“The eyes have what you would call the Midas touch problem,” said Chris Harrison, an associate professor in the HCII. “You can’t have a situation in which everywhere you look something happens on the phone. Too many applications would open.”

Software that tracks the eyes with precision can solve this problem. Andy Kong (SCS 2022) has been interested in eye-tracking technologies since he first came to CMU. He found commercial versions pricey, so he wrote a program that used the built-in camera on a laptop to track the user’s eyes, which in turn moved the cursor around the screen — an important early step toward EyeMU.
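
Kong's program itself isn't reproduced here, but the core idea — use landmarks from the laptop's built-in camera to steer the cursor — can be pictured in a few dozen lines. The sketch below is only an illustration of that idea using MediaPipe's Face Mesh (Google's face-landmark model) and the pyautogui library; the landmark index, the mapping and the choice of libraries are assumptions, not Kong's actual code.

    import cv2
    import mediapipe as mp
    import pyautogui

    # Illustrative sketch only: move the cursor with the detected iris position.
    # This is a crude proxy for gaze, not a calibrated gaze estimate.
    face_mesh = mp.solutions.face_mesh.FaceMesh(
        max_num_faces=1,
        refine_landmarks=True,  # adds iris landmarks (indices 468-477)
    )
    screen_w, screen_h = pyautogui.size()
    cap = cv2.VideoCapture(0)  # the laptop's built-in camera

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        result = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if result.multi_face_landmarks:
            # Index 468 is one of the iris-center landmarks added by refine_landmarks.
            iris = result.multi_face_landmarks[0].landmark[468]
            # Map the normalized (0..1) iris position onto screen coordinates.
            pyautogui.moveTo(int(iris.x * screen_w), int(iris.y * screen_h))
        cv2.imshow("eye tracking sketch", frame)
        if cv2.waitKey(1) & 0xFF == 27:  # press Esc to quit
            break

    cap.release()
    cv2.destroyAllWindows()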

“Current phones only respond when we ask them for things, whether by speech, taps or button clicks,” Kong said. “If the phone is widely used now, imagine how much more useful it would be if we could predict what the user wanted by analyzing gaze or other biometrics.”

EyeMU uses Google’s Face Mesh to track gaze patterns and map the data.

Kong and HCII Ph.D. student Karan Ahuja advanced that early prototype by utilizing Google’s Face Mesh tool to both study the gaze patterns of users looking at different areas of the screen and render the mapping data. Next, the team developed a gaze predictor that uses the smartphone’s front-facing camera to lock in what the viewer is looking at and register it as the target.
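
The predictor itself isn't described in detail, but the general recipe — collect landmark frames while a user looks at known screen points, then fit a model that maps landmarks to a gaze location — can be sketched as follows. The landmark count, the synthetic calibration data and the regressor are placeholders, not the team's model.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)

    # Stand-ins for real calibration data: each row is a flattened set of
    # 478 (x, y, z) Face Mesh landmarks captured while the user looked at a
    # known point; each target is that point in normalized screen coordinates.
    X_calibration = rng.normal(size=(200, 478 * 3))
    y_screen_points = rng.uniform(0, 1, size=(200, 2))

    gaze_model = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000)
    gaze_model.fit(X_calibration, y_screen_points)

    # At run time, a fresh landmark frame yields a predicted gaze point that
    # EyeMU-style logic could snap to the nearest on-screen target.
    new_frame = rng.normal(size=(1, 478 * 3))
    gaze_x, gaze_y = gaze_model.predict(new_frame)[0]
    print(f"predicted gaze at ({gaze_x:.2f}, {gaze_y:.2f}) in screen units")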

Andy Kong (SCS 2022)

The team made the tool more useful by combining the gaze predictor with the smartphone’s built-in motion sensors to enable commands. For example, a user could look at a notification long enough to lock it in as a target, then flick the phone to the left to dismiss it or to the right to respond to it. Similarly, a user might pull the phone closer to enlarge an image or move the phone away to disengage gaze control.
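
One way to picture that pairing in code: a dwell timer locks a gazed-at target, and a left or right flick — read from the phone's accelerometer — acts on it. The thresholds, the Frame structure and the actions below are invented for illustration; EyeMU's actual gesture handling isn't published in this article.

    from dataclasses import dataclass

    DWELL_FRAMES = 30        # roughly one second of sustained gaze at 30 fps
    FLICK_THRESHOLD = 12.0   # lateral acceleration (m/s^2) treated as a flick

    @dataclass
    class Frame:
        gaze_target: str     # e.g. "notification", "photo" or "none"
        accel_x: float       # lateral acceleration from the phone's motion sensors

    def run(frames):
        locked, current, dwell = None, None, 0
        for f in frames:
            if locked is None:
                # Step 1: sustained gaze on one target locks it in.
                if f.gaze_target != "none" and f.gaze_target == current:
                    dwell += 1
                else:
                    current, dwell = f.gaze_target, 1
                if f.gaze_target != "none" and dwell >= DWELL_FRAMES:
                    locked = f.gaze_target
                    print(f"locked target: {locked}")
            else:
                # Step 2: a flick of the phone acts on the locked target.
                if f.accel_x < -FLICK_THRESHOLD:
                    print(f"dismissed {locked}")
                    locked, current, dwell = None, None, 0
                elif f.accel_x > FLICK_THRESHOLD:
                    print(f"opened {locked} to respond")
                    locked, current, dwell = None, None, 0

    # Simulated input: a second of gazing at a notification, then a rightward flick.
    run([Frame("notification", 0.0)] * DWELL_FRAMES + [Frame("notification", 15.0)])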

To deal with varying facial geometries, EyeMU calibrates itself to the face of the individual user via the smartphone camera. EyeMU could be expanded and combined with other sensing modalities and could share the gaze-tracking information with other apps.

“I believe future applications on devices will use built-in eye tracking to read our intentions before we even have to lift a finger. EyeMU is a step toward that goal,” said Kong.

 

Using the Hands with TouchPose

Touchscreens miss a lot of information. Improving how they work could result in better accessibility for users. Ahuja worked with Paul Streli and Christian Holz while in residency at the Department of Computer Science at ETH Zürich to build TouchPose, a neural network estimator that calculates hand postures based on the geometry of finger touch points on smartphone and tablet screens. The team believes it is the first tool of its kind.
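
To make the idea concrete, a toy version of such an estimator might take the low-resolution capacitive image a touchscreen already senses and regress a full 3D hand pose from it. The architecture, input size and 21-joint hand convention below are assumptions made for illustration — not the published TouchPose model.

    import torch
    import torch.nn as nn

    class TouchToPose(nn.Module):
        """Toy estimator: capacitive touch image in, 3D hand joints out."""

        def __init__(self, num_joints: int = 21):
            super().__init__()
            self.num_joints = num_joints
            self.encoder = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(4),
            )
            self.regressor = nn.Sequential(
                nn.Flatten(),
                nn.Linear(32 * 4 * 4, 128), nn.ReLU(),
                nn.Linear(128, num_joints * 3),  # (x, y, z) per joint
            )

        def forward(self, touch_image):
            # touch_image: (batch, 1, H, W) capacitance map from the screen
            return self.regressor(self.encoder(touch_image)).view(
                -1, self.num_joints, 3
            )

    model = TouchToPose()
    fake_capacitance = torch.rand(1, 1, 32, 16)  # stand-in for a real sensor frame
    print(model(fake_capacitance).shape)         # torch.Size([1, 21, 3])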

“All interactions with the screen are two-dimensional, but your hands have very complex 3D geometries,” Ahuja said. “I want to see if you could use the knowledge of how the hand is shaped in the moment of interaction to increase the fidelity of information you are exchanging.”

Research in robotics, virtual reality and other fields has provided a strong vocabulary of human hand forms and motion dynamics. Ahuja’s tool tests whether the posture of a hand can be reverse engineered from the finger information available at the screen. For example, if you move your hand back and forth while keeping the tip of your index finger on a touchscreen, nothing happens. But if a smartphone tool could process the changing shape of the fingertip on the screen to infer whether your hand was moving left, right, forward or back, your finger could be used like a joystick. A tool like this could also help eliminate false-touch errors and ambiguities, which frustrate and slow down users.
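
As a rough illustration of that joystick idea: many touch stacks report each contact as an ellipse (size and orientation), and the change in that ellipse between frames can be read as a direction. The thresholds and classification rule below are invented for illustration and are not drawn from TouchPose.

    def joystick_direction(prev, curr, angle_thresh=0.15, stretch_thresh=0.10):
        """prev/curr are (major_axis, minor_axis, orientation_radians) tuples
        describing the fingertip's contact ellipse in consecutive frames."""
        d_orientation = curr[2] - prev[2]
        d_stretch = (curr[0] / curr[1]) - (prev[0] / prev[1])  # elongation change
        if d_orientation > angle_thresh:
            return "left"
        if d_orientation < -angle_thresh:
            return "right"
        if d_stretch > stretch_thresh:
            return "forward"  # contact elongates as the hand leans in
        if d_stretch < -stretch_thresh:
            return "back"
        return "neutral"

    # Example: the contact patch rotates between two frames, so the sketch
    # reports a sideways "joystick" movement.
    print(joystick_direction((9.0, 6.0, 0.00), (9.0, 6.0, 0.30)))  # -> left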

The final data set to train the model contained more than 65,000 images. To build it, Ahuja and his colleagues spent a year recording 10 participants interacting with a flat screen using 14 unique hand positions. For the model, the team developed a new machine learning architecture to handle the novel nature of the research.

“We can’t know for sure what a user’s hand is going to look like, so there’s always a probability associated with priors from the data set we captured naturally,” said Ahuja. “If you have a situation where there’s a single touch point, and the model can’t resolve whether it’s the index finger or the middle finger, it will use probabilistic understanding to figure it out.”
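
That probabilistic step can be pictured as a simple Bayesian update: multiply the model's per-finger confidence for an ambiguous touch by priors estimated from the captured data set. The numbers below are made up for illustration.

    # Hypothetical priors over which finger produces a lone touch, as might be
    # estimated from a captured data set.
    priors = {"index": 0.55, "middle": 0.30, "ring": 0.10, "pinky": 0.05}

    def most_likely_finger(likelihoods):
        """likelihoods: the model's confidence per finger for one touch point."""
        posterior = {f: likelihoods.get(f, 0.0) * p for f, p in priors.items()}
        total = sum(posterior.values()) or 1.0
        posterior = {f: v / total for f, v in posterior.items()}
        return max(posterior, key=posterior.get), posterior

    # The network slightly favors the middle finger for this ambiguous contact,
    # but the prior tips the posterior toward the index finger.
    finger, dist = most_likely_finger({"index": 0.48, "middle": 0.52})
    print(finger, {k: round(v, 2) for k, v in dist.items()})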

TouchPose could be used on its own or it could form a foundation for accessibility features on other apps and devices. To encourage those efforts, Ahuja and his colleagues have made all their training data, code and the model itself public.

I want to see if you could use the knowledge of how the hand is shaped in the moment of interaction to increase the fidelity of information you are exchanging.
— Karan Ahuja, Ph.D. student in HCII
 

Research to Improve Future Accessibility Studies

To build a stronger foundation for accessibility, a team of HCII researchers is assembling a knowledge base and assessing current needs and resources. Sometimes their findings are counterintuitive.

“Building empathy to improve the lives of people with disabilities is necessary, but maybe not in the way most people would expect,” Carrington said. “One of the biggest challenges I see regarding accessibility is ableism. People make assumptions about what people can do, what they should be able to do, and how they can or should do it.”

Carrington and CMU colleagues Franklin Mingzhe Li, Franchesca Spektor, Peter Cederberg and Yuqi Gong joined colleagues from the Rochester Institute of Technology and the KAIST School of Computing in the Republic of Korea to present a study on the use of cosmetics by people with visual impairments at the ACM CHI Conference on Human Factors in Computing Systems (CHI 2022) this past May in New Orleans.

More than 40% of people in the U.S. use cosmetics regularly, but few visually impaired people are among them. The paper shares testimonials from visually impaired people — who number 2.2 billion globally — on the importance of having agency over their own self-care, which using makeup can help provide. “When I first lost my eyesight, I was quite sad that I couldn't look in the mirror,” said Lucy Edwards, CoverGirl's first blind beauty ambassador. “Applying makeup is a way that I can control my appearance again.”

One of the biggest challenges I see regarding accessibility is ableism. People make assumptions about what people can do, what they should be able to do, and how they can or should do it.
— Patrick Carrington, Assistant Professor in HCII

Carrington and the team analyzed YouTube videos that help visually impaired people use makeup, cataloging the challenges those users face and noting what they found helpful. As a core part of the study, the team also interviewed visually impaired people about their experiences and about how effective some of the videos were.

To gather the source material for their analysis, Carrington and his fellow researchers ran an algorithmic search for relevant videos using targeted keywords for both visual disability and makeup practices, then strategically filtered the results. They analyzed the 145 collected videos to build a knowledge base of makeup practices used by visually impaired people. The researchers documented that people with visual disabilities prefer to learn about makeup practices from individuals who have a similar complexion or who share their demographic.
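
The paper's exact queries and filters aren't reproduced here, so the sketch below only illustrates the shape of such a pipeline: pair disability keywords with makeup keywords and filter the matches. The keyword lists, metadata records and view-count cutoff are hypothetical.

    DISABILITY_TERMS = ["blind", "visually impaired", "low vision"]
    MAKEUP_TERMS = ["makeup", "cosmetics", "eyeliner", "foundation"]

    # Hypothetical stand-ins for harvested video metadata.
    videos = [
        {"title": "Blind girl everyday makeup routine", "views": 120_000},
        {"title": "Speed-painting tutorial", "views": 4_000},
        {"title": "Low vision eyeliner tips", "views": 35_000},
    ]

    def relevant(video, min_views=10_000):
        title = video["title"].lower()
        matches_disability = any(term in title for term in DISABILITY_TERMS)
        matches_makeup = any(term in title for term in MAKEUP_TERMS)
        # Keep only videos that match both keyword groups and clear a
        # (hypothetical) popularity filter.
        return matches_disability and matches_makeup and video["views"] >= min_views

    print([v["title"] for v in videos if relevant(v)])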

The data also showed that people with visual impairments want an easy way to gauge how much product they are using, and that they rely on the sound and smell of products to tell one from another. Beyond the findings themselves, Carrington’s team hopes the research will open avenues for future projects and a richer understanding of best practices.

I expect accessibility work will very much influence the future of how we all interact with computers.
— Jeffrey Bigham, Associate Professor in HCII

How research happens also presents accessibility challenges, which HCII researchers Bigham, Emma McDonnell and Jailyn Zabala set out to tackle with colleagues from the University of Washington and Vanderbilt University.

The group shared their findings at CHI 2022.

Rigorous analysis of how to build accessibility into research methods is scarce. Bigham and his collaborators outlined in detail where and how accessibility should be taken into account throughout the research process, including recruitment, interviews and the interactions involved in gathering data. Working from established concepts in critical disability studies, they also made practical suggestions scholars can use to overcome barriers that exclude or hinder the input of disabled people in future studies.

“Since accessibility directly grapples with a larger part of the interaction space than is typically assumed, I expect accessibility work will very much influence the future of how we all interact with computers,” Bigham said. “I think the work being done to adapt user interfaces to a person’s current abilities and context has huge implications for how we might benefit from devices that adapt to where we are and what we're doing.”■