Soft robots may not be in touch with human feelings, but they are getting better at feeling human touch.

Cornell University researchers have developed a low-cost method for soft, deformable robots to detect a range of physical interactions, from pats to punches to hugs, without relying on touch at all. Instead, a USB camera located inside the robot captures the shadow movements of hand gestures on the robot’s skin and classifies them with machine-learning software.

The team’s paper, “ShadowSense: Detecting Human Touch in a Social Robot Using Shadow Image Classification,” was published in the Proceedings of the Association for Computing Machinery on Interactive, Mobile, Wearable and Ubiquitous Technologies. The paper’s lead author is doctoral student Yuhan Hu.

The new ShadowSense technology is the latest project from the Human-Robot Collaboration and Companionship Lab, led by the paper’s senior author, Guy Hoffman, associate professor in the Sibley School of Mechanical and Aerospace Engineering.

The technology originated as part of an effort to develop inflatable robots that could guide people to safety during emergency evacuations. Such a robot would need to be able to communicate with humans in extreme conditions and environments. Imagine a robot physically leading someone down a noisy, smoke-filled corridor by detecting the pressure of the person’s hand.

Rather than installing a large number of contact sensors, which would add weight and complex wiring to the robot and would be difficult to embed in a deforming skin, the team took a counterintuitive approach: to gauge touch, they looked to sight.


“By placing a camera inside the robot, we can infer how the person is touching it and what the person’s intent is just by looking at the shadow images,” Hu said. “We think there is interesting potential there, because there are lots of social robots that are not able to detect touch gestures.”

The prototype robot consists of a soft inflatable bladder of nylon skin stretched around a cylindrical skeleton, roughly four feet in height, that is mounted on a mobile base. Under the robot’s skin is a USB camera, which connects to a laptop. The researchers developed a neural-network-based algorithm that uses previously recorded training data to distinguish between six touch gestures (touching with a palm, punching, touching with two hands, hugging, pointing and not touching at all) with an accuracy of 87.5 to 96%, depending on the lighting.
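The paper’s actual model and training setup are not reproduced here, but the pipeline the article describes, in which shadow frames from the internal camera are fed to a neural classifier over six gesture classes, can be sketched roughly as follows. This is a minimal illustration: the network layout, input size and class names are assumptions for demonstration, not the authors’ implementation.

```python
# Illustrative sketch of a shadow-gesture classifier in PyTorch.
# Architecture, frame size and label names are assumptions; they
# are not taken from the ShadowSense paper.
import torch
import torch.nn as nn

GESTURES = ["palm_touch", "punch", "two_hand_touch",
            "hug", "point", "no_touch"]  # six classes, per the article

class ShadowNet(nn.Module):
    def __init__(self, num_classes: int = len(GESTURES)):
        super().__init__()
        # Shadows carry shape rather than color, so a single
        # grayscale input channel is a reasonable representation.
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: batch of grayscale shadow frames, shape (N, 1, H, W)
        return self.classifier(self.features(x).flatten(1))

model = ShadowNet()
frame = torch.rand(1, 1, 128, 128)  # stand-in for one camera frame
gesture = GESTURES[model(frame).argmax(dim=1).item()]
print(gesture)
```

In practice such a network would be trained on the previously recorded, labeled shadow images the article mentions before its predictions mean anything; the untrained forward pass above only shows the shape of the pipeline.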

The robot can be programmed to react to certain touches and gestures, such as rolling away or issuing a message through a loudspeaker. And the robot’s skin has the potential to be turned into an interactive screen.
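As a rough sketch of how such reactions could be wired up, the classifier’s predicted label can drive a simple dispatch table. The handler names below are hypothetical placeholders for whatever motor or speaker commands a robot exposes; they are not part of ShadowSense.

```python
# Hypothetical gesture-to-reaction dispatch; the handlers are
# placeholders, not an API from the ShadowSense paper.
def roll_away():
    print("rolling away")

def play_greeting():
    print("playing a message through the speaker")

REACTIONS = {
    "punch": roll_away,
    "hug": play_greeting,
}

def react(gesture: str) -> None:
    # Do nothing for gestures without a configured reaction.
    REACTIONS.get(gesture, lambda: None)()

react("hug")  # -> playing a message through the speaker
```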

By gathering enough data, a robot could be trained to recognize an even wider vocabulary of interactions, custom-tailored to fit the robot’s task, Hu said.

The robot doesn’t even need to be a robot. ShadowSense technology can be incorporated into other materials, such as balloons, turning them into touch-sensitive devices.

In addition to providing a simple solution to a complicated technical challenge, and making robots more user-friendly to boot, ShadowSense offers a comfort that is increasingly rare in these high-tech times: privacy.

“If the robot can only see you in the form of your shadow, it can detect what you’re doing without taking high-fidelity images of your appearance,” Hu said. “That gives you a physical filter and protection, and provides psychological comfort.”

The research was supported by the National Science Foundation’s National Robotics Initiative.

Story Source:

Materials provided by Cornell University. Original written by David Nutt. Note: Content may be edited for style and length.