Eye-Gaze and Haptics

How Robots Learn to Interact With and Assist Humans

HALEY R. MOORE

From robotic vacuum cleaners to robots that deliver packages and meals to our doorstep, we are just beginning to grow accustomed to a world where we interact with robots in our daily lives.

Researchers from the Robotics Institute have long been at work on robots that help humans with daily tasks. From soft robotics to medical and surgical robotics, and even robots that can make a pizza, the School of Computer Science continues to build on its foundations to develop robots to assist humans.

However, deploying robots around people who may be sick, vulnerable or at their most fragile demands precision, detail and care. Compounding the problem, human behavior is inconsistent and varies a great deal from person to person. Neither humans nor robots are infallible, which complicates studying the relationship between the two.

Henny Admoni, Director of the HARP Lab and Assistant Professor in RI

The Dynamics of Interaction

Henny Admoni, director of the Human And Robot Partners (HARP) Lab, shares a simple goal: to better understand and develop assistive robots that improve the quality of human lives.

Though assistive robots in the home are a relatively recent phenomenon, the idea is not new. Admoni recalled the 1960s cartoon “The Jetsons” as an example of assistive robots entering the human imagination and of our desire to connect and interact with thinking machines. The robots on “The Jetsons” not only helped the family with menial tasks around the house but also had personalities and were practically members of the family.

Though the show aired 60 years ago, Admoni said it took until the early 2000s for the technology to begin catching up. It’s also important to note that assistive robots don’t need to speak or be humanoid for humans to develop relationships with them. Admoni pointed to the Roomba as an example.

“They aren’t anthropomorphic at all — they are just discs on wheels,” said Admoni. “People decorate them and if their machine breaks, they would want to repair it instead of getting a new one. So, people can form relationships with something that is very clearly a robotic machine.”

Communicating with Eye-Gaze

The question becomes: how can we best communicate with robots that don’t speak? Admoni’s work in the HARP Lab focuses on nonverbal communication and how robots can take advantage of key indicators to help humans and robots form better bonds. To that end, her lab uses eye-gaze tracking: cameras and sensors interpret the direction of a person’s gaze and let the robot respond, enabling more natural and intuitive communication and assistance.
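As a rough illustration of the idea, not the HARP Lab’s actual software, a robot might guess which object a person intends to reach for by comparing the tracked gaze direction against the direction to each candidate object. The function, object names and angular threshold below are hypothetical.

```python
import numpy as np

def likely_gaze_target(eye_pos, gaze_dir, objects, max_angle_deg=10.0):
    """Return the object whose direction best matches the gaze ray.

    eye_pos:  3D eye position from a head/eye tracker.
    gaze_dir: estimated gaze direction vector.
    objects:  dict mapping object names to 3D positions.
    Returns the best-matching name, or None if nothing lies within
    max_angle_deg of the gaze ray.
    """
    gaze_dir = np.asarray(gaze_dir, dtype=float)
    gaze_dir /= np.linalg.norm(gaze_dir)
    best_name, best_angle = None, float("inf")
    for name, pos in objects.items():
        to_obj = np.asarray(pos, dtype=float) - np.asarray(eye_pos, dtype=float)
        to_obj /= np.linalg.norm(to_obj)
        cos_angle = np.clip(np.dot(gaze_dir, to_obj), -1.0, 1.0)
        angle = np.degrees(np.arccos(cos_angle))
        if angle < best_angle:
            best_name, best_angle = name, angle
    return best_name if best_angle <= max_angle_deg else None

# Example: the gaze ray points roughly at the cup, so "cup" is returned.
print(likely_gaze_target(
    eye_pos=[0.0, 0.0, 1.5],
    gaze_dir=[0.0, 1.0, -0.5],
    objects={"cup": [0.05, 1.0, 1.0], "spoon": [0.8, 1.0, 1.0]},
))
```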

HARP Lab researchers have developed a functional eye-gaze model, but inconsistent human movement, including in how people use their eyes, creates confusion for the model. To complicate matters further, most people rely on peripheral vision to fill in their understanding of their environment, and peripheral vision is difficult for eye-gaze tracking to capture.

When a person begins a new task, such as reaching for an object, their gaze often jumps to the next task before they finish picking up the object. When people multitask, the direction of their gaze becomes unpredictable to the model, adding further hurdles to tracking gaze patterns.

“I think we use eye-gaze as a really rich signal, but it’s also a really noisy signal,” Admoni said. “It’s a challenge because we use it for so many different things — like to be aware of the world around us or to have social interactions and manage conversations.”

Admoni said the way to know more about what’s to come for human-robot interaction is to keep amplifying the research.

“If we think about robots and humans together, we are going to be much more successful — for a variety of reasons — than if we try to separate the robots from the humans,” she said. “Human-robot interaction as a community is the most important kind of recent evolution of robotics.”  ■

Co-Bots

Under the direction of Manuela Veloso, the Herbert A. Simon University Professor Emerita, the School of Computer Science pioneered the field of collaborative robots, or Co-bots, that work together with humans rather than perform tasks in isolation.

Throughout the ’80s and ’90s, SCS researchers delved into robotic manipulation and interaction with the goal of enabling robots to perform tasks in unstructured environments alongside humans. This research laid the groundwork for today’s co-bots.

Integrating advanced sensor technologies and sophisticated control algorithms has allowed for greater perception and the ability to adapt to surroundings in real time, a crucial aspect of effective collaboration for co-bots. By the 2000s, Veloso actively explored human-robot collaboration, designing co-bots not only to perform physical tasks but also to interact intelligently with human collaborators.

CMU’s approach to co-bots has been inherently interdisciplinary, and researchers across the departments of SCS and the university contributed to the design and development of new robotic platforms geared toward collaborative tasks. These platforms have combined mobility, manipulation capabilities and safety features to ensure effective interaction. Collaborations among computer scientists, engineers, cognitive scientists and social scientists have led to a comprehensive understanding of how robots and humans can effectively work together.

Co-bot research in SCS continues, keeping CMU as a hub for advancements in machine learning, natural language processing, computer vision and haptic feedback that enhance co-bot capabilities.  ■

Manuela Veloso walks with students and a co-bot through the halls of SCS.

CMU Robot Puts on Shirts One Sleeve at a Time

STACEY FEDEROFF

Robot-Assisted Dressing System Accommodates Different Poses, Body Types and Garments

Researchers in the School of Computer Science have developed a robotic system that helps humans dress and accommodates various body shapes, arm poses and clothing selections.

Most people take getting dressed for granted. But data from the National Center for Health Statistics reveals that 92% of nursing facility residents and at-home care patients require assistance with dressing.

Researchers in the Robotics Institute (RI) see a future where robots can help with this need and are working to make it possible.

“Remarkably, existing endeavors in robot-assisted dressing have primarily assumed dressing with a limited range of arm poses and with a single fixed garment, like a hospital gown,” said Yufei Wang, an RI Ph.D. student working on a robot-assisted dressing system. “Developing a general system to address the diverse range of everyday clothing and varying motor function capabilities is our overarching objective. We also want to extend the system to individuals with different levels of constrained arm movement.”

The robot-assisted dressing system leverages artificial intelligence to accommodate various human body shapes, arm poses and clothing selections. The team used reinforcement learning — rewarding the robot for accomplishing certain tasks — to achieve this. Specifically, the researchers gave the robot a positive reward each time it properly placed the garment farther along a person’s arm. Through continued reinforcement, they increased the success rate of the system’s learned dressing strategy.
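As a minimal sketch of that kind of progress-based reward, assuming the simulator can report how far along the arm the sleeve has advanced, the reward might look like the following; the function name, scale and bonus are illustrative, not the team’s actual code.

```python
def dressing_reward(prev_coverage, curr_coverage, progress_scale=10.0,
                    done_threshold=0.95, done_bonus=5.0):
    """Reward the policy for advancing the sleeve along the arm.

    prev_coverage, curr_coverage: fraction of the arm length (0.0-1.0)
    the garment covers before and after the current action, as measured
    in simulation. The agent earns reward only when coverage increases,
    plus a bonus once the sleeve is nearly all the way on.
    """
    reward = progress_scale * max(0.0, curr_coverage - prev_coverage)
    if curr_coverage >= done_threshold:
        reward += done_bonus
    return reward
```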

The researchers used a simulation to teach the robot how to manipulate clothing and dress people. The team had to carefully deal with the properties of the clothing material when transferring the strategy learned in simulation to the real world.

“In the simulation phase, we employ deliberately randomized diverse clothing properties to guide the robot’s learned dressing strategy to encompass a broad spectrum of material attributes,” said Zhanyi Sun, an RI master’s student who also worked on the project. “We hope the randomly varied clothing properties in simulation encapsulate the garments’ property in the real world, so the dressing strategy learned in simulation environments can be seamlessly transferred to the real world.”
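In spirit, that randomization step looks something like the sketch below, which resamples cloth parameters at the start of each simulated episode. The parameter names, ranges and simulator call are assumptions for illustration, not values from the paper.

```python
import random

def randomize_cloth_properties():
    """Sample new cloth material parameters for one training episode.

    The ranges are purely illustrative. Training across many such
    samples pushes the learned policy to handle real garments whose
    exact properties are unknown.
    """
    return {
        "stretch_stiffness": random.uniform(0.2, 1.0),   # resistance to stretching
        "bend_stiffness":    random.uniform(0.01, 0.5),  # resistance to folding
        "friction":          random.uniform(0.1, 1.0),   # cloth-on-skin friction
        "mass_per_area":     random.uniform(0.05, 0.3),  # kg per square meter
    }

# At the start of each simulated episode (hypothetical simulator API):
# sim.set_cloth_params(**randomize_cloth_properties())
```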

The RI team evaluated the robotic dressing system in a human study comprising 510 dressing trials with 17 participants of different body shapes and arm poses, using five different garments. For most participants, the system fully pulled the sleeve of each garment onto their arm. Averaged over all test cases, the system dressed 86% of the length of the participants’ arms.

The researchers had to consider several challenges when designing their system. First, clothes are deformable in nature, making it difficult for the robot to perceive the full garment and predict where and how it will move.

“Clothes are different from rigid objects that enable state estimation, so we have to use a high-dimensional representation for deformable objects to allow the robot to perceive the current state of the clothes and how they interact with the human’s arm,” Wang said. “The representation we use is called a segmented point cloud. It represents the visible parts of the clothes as a set of points.”
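A rough sketch of that representation, under the assumption that the depth camera provides per-point segmentation labels: keep only the points labeled as garment or arm and downsample them to a fixed-size set the policy can consume. The label encoding and array shapes here are hypothetical.

```python
import numpy as np

def segmented_point_cloud(points, labels, keep_labels=(1, 2), n_points=512):
    """Build a fixed-size segmented point cloud observation.

    points: (N, 3) array of 3D points from a depth camera.
    labels: (N,) array of per-point segmentation labels, e.g.
            1 = garment, 2 = human arm (assumed encoding).
    Returns an (n_points, 4) array of [x, y, z, label] rows, randomly
    subsampled (with replacement if too few points are visible).
    """
    points, labels = np.asarray(points), np.asarray(labels)
    mask = np.isin(labels, keep_labels)          # drop background points
    pts, labs = points[mask], labels[mask]
    idx = np.random.choice(len(pts), size=n_points,
                           replace=len(pts) < n_points)
    return np.hstack([pts[idx], labs[idx].reshape(-1, 1).astype(float)])
```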

Safe human-robot interaction was also crucial. The robot had to avoid applying excessive force to the person’s arm, as well as any other action that could cause discomfort or compromise the individual’s safety. To mitigate these risks, the team rewarded the robot for gentle conduct.
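One simple way to encode that incentive, sketched here as a hypothetical penalty term that could be added to a progress reward like the one above, is to subtract from the reward whenever the simulated contact force on the arm exceeds a comfort threshold; the threshold and scale are illustrative.

```python
def safety_penalty(contact_force_newtons, threshold=5.0, scale=2.0):
    """Penalize the policy when force on the arm exceeds a comfort limit.

    contact_force_newtons: total simulated force the garment and gripper
    apply to the person's arm at this timestep. The result is negative
    (or zero) and would be added to the per-step dressing reward.
    """
    excess = max(0.0, contact_force_newtons - threshold)
    return -scale * excess
```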

Future research could head in several directions. For example, the team wants to expand the current system’s capabilities by enabling it to put a jacket on both of a person’s arms or pull a T-shirt over their head, both of which require more complex design and execution. The team also hopes to make the system adapt to a person’s arm movements during dressing and to explore more advanced robot manipulation skills, such as buttoning or zipping.

As the work progresses, the researchers intend to perform observational studies within nursing facilities to gain insight into the diverse needs of individuals and improvements that need to be made to their current assistive dressing system.

Wang and Sun recently presented their research, “One Policy To Dress Them All: Learning To Dress People With Diverse Poses and Garments,” at the Robotics: Science and Systems conference. The students are advised by Zackory Erickson, assistant professor in the RI and head of the Robotic Caregiving and Human Interaction (RCHI) Lab; and David Held, associate professor in the RI leading the Robots Perceiving And Doing (RPAD) research group.  ■

Haptics for Health Care

Beyond the act of getting dressed, the lab of Zackory Erickson, an assistant professor in the RI, focuses on other activities of daily living: tasks like eating that we take for granted but that are necessary for survival. Cameras alone aren’t enough, so his students use haptics, the perception of touch, as a channel of nonverbal communication between humans and sensing devices.

Haptic guidance and predictive control help the researchers monitor and record human-robot interactions. Sensors measure force and motion, letting the team dig deeper into the physical side of human-robot interaction. At the core of the research is keeping humans safe during every interaction. And there is more work to be done before these systems are ready for widespread use.
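As a simple illustration of how force sensing can keep an interaction safe (a sketch, not the RCHI Lab’s actual system), a robot can watch its wrist force-torque readings and pause its motion the moment contact forces climb too high. The sensor and motion interfaces below are hypothetical.

```python
import numpy as np

def monitor_contact(read_wrench, stop_motion, force_limit=8.0):
    """Pause the robot if measured contact force exceeds force_limit (N).

    read_wrench: callable returning [fx, fy, fz, tx, ty, tz] from a
                 wrist force-torque sensor (hypothetical interface).
    stop_motion: callable that halts the robot's current trajectory.
    Returns True if motion was stopped on this check.
    """
    wrench = np.asarray(read_wrench(), dtype=float)
    force_magnitude = np.linalg.norm(wrench[:3])  # net contact force
    if force_magnitude > force_limit:
        stop_motion()
        return True
    return False
```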

“It’s still a research system, so we’re still looking at how robots can leverage this knowledge and information,” Erickson said. “And there is definitely a need to understand the use of haptics to guide the robot’s motion to inform its interactions.”  ■

Zackory Erickson, Assistant Professor in RI