From Rigid to Deformable: How Data Enables Robots to Handle Soft Objects

Industrial robotics has excelled at handling rigid objects—metal parts, plastic components, assembled products with fixed shapes and known properties. The rules are simple: position, orientation, and geometry are all that matter. Pick, place, insert, fasten.

But the world is not rigid. It is full of soft, deformable, unpredictable objects: clothing, food, packaging, bedding, cables, plants, human tissue. These objects change shape when touched. They sag, stretch, compress, and fold. Their behavior depends on material properties, previous deformations, and the exact manner of interaction.

For robots, soft objects are a nightmare.

Why Soft Objects Are Hard

The difficulty stems from fundamental differences in how rigid and deformable objects behave:

Rigid objects have fixed geometry. A vision system can model them once and recognize them anywhere. Their dynamics are simple: they move as a whole, governed by Newtonian physics with known mass and friction properties.

Deformable objects have infinite possible geometries. A towel can be folded flat, crumpled into a ball, draped over a surface, or twisted into a rope. Its shape at any moment depends on its history. Its dynamics are complex: different parts move relative to each other, internal forces propagate through the material, and behavior varies with material properties that are often unknown.

This complexity defeats traditional robotics approaches. Rule-based manipulation fails because the rules are too numerous and too context-dependent. Vision-based grasping fails because the object’s appearance changes with deformation. Force-based control fails because internal forces are indistinguishable from interaction forces.

The Data-Driven Solution

Recent advances suggest that data-driven approaches can succeed where classical methods fail. By training on thousands of examples of deformable object manipulation, robots can learn the underlying regularities:

  • How different materials respond to different types of contact
  • How deformations propagate through continuous media
  • What manipulation strategies work for different object classes
  • How to predict future states from current observations

This learning requires data—massive amounts of data capturing deformable object interactions from multiple modalities.
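As a toy illustration of what "learning the underlying regularities" means (this is our own minimal sketch, not any production pipeline), consider a one-dimensional deformation model x[t+1] = a·x[t] + b·u[t], where x is an indentation depth and u an applied force. Given enough interaction examples, an ordinary least-squares fit recovers the hidden material parameters:

```python
# Toy sketch: learn a 1-D deformation model x[t+1] = a*x[t] + b*u[t]
# from noisy interaction data. The "material parameters" a and b are
# illustrative, not measurements of any real material.
import random

random.seed(0)
true_a, true_b = 0.8, 0.05          # hidden material parameters
data = []
x = 0.0
for _ in range(500):
    u = random.uniform(0.0, 10.0)   # applied force
    x_next = true_a * x + true_b * u + random.gauss(0.0, 0.001)
    data.append((x, u, x_next))
    x = x_next

# Ordinary least squares via the 2x2 normal equations.
sxx = sum(x * x for x, u, y in data)
sxu = sum(x * u for x, u, y in data)
suu = sum(u * u for x, u, y in data)
sxy = sum(x * y for x, u, y in data)
suy = sum(u * y for x, u, y in data)
det = sxx * suu - sxu * sxu
a_hat = (sxy * suu - sxu * suy) / det
b_hat = (sxx * suy - sxu * sxy) / det

print(round(a_hat, 2), round(b_hat, 2))  # close to 0.8 and 0.05
```

Real deformable objects are nonlinear and high-dimensional, which is exactly why the models used in practice are learned from massive datasets rather than written down by hand.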

What Deformable Object Data Includes

Effective deformable object datasets capture:

Visual deformation fields: How the object’s shape changes over time, tracked through markers or optical flow algorithms. These fields reveal the material’s response to applied forces and provide ground truth for training deformation models.
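In its simplest marker-tracked form, a deformation field is just a per-marker displacement between consecutive frames. A minimal sketch (marker names and coordinates are invented for illustration):

```python
# Hedged sketch: a deformation field as per-marker displacement vectors
# between two frames of tracked marker positions.
def deformation_field(frame_a, frame_b):
    """Map marker id -> (dx, dy) displacement between two frames."""
    return {
        mid: (frame_b[mid][0] - xa, frame_b[mid][1] - ya)
        for mid, (xa, ya) in frame_a.items()
        if mid in frame_b
    }

# One towel corner sags 3 px while the opposite corner stays put.
before = {"corner_0": (10.0, 10.0), "corner_1": (90.0, 10.0)}
after_ = {"corner_0": (10.0, 13.0), "corner_1": (90.0, 10.0)}
print(deformation_field(before, after_))
# {'corner_0': (0.0, 3.0), 'corner_1': (0.0, 0.0)}
```

Dense optical-flow variants compute this same displacement for every pixel rather than a handful of markers.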

Contact mechanics: Pressure distributions at the points of interaction, showing how force propagates from the robot’s fingers into the object. This reveals whether the grip is compressing the object, stretching it, or simply holding it.
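Two basic summaries of such a pressure reading are the total normal force and the centre of pressure. A rough sketch (the taxel grid and taxel area are illustrative assumptions, not a specific sensor's specification):

```python
# Hedged sketch: summarise a tactile pressure array into total normal
# force and centre of pressure. taxel_area is an assumed 1 mm^2 taxel.
def contact_summary(pressure, taxel_area=1e-6):
    """pressure: 2D grid of Pa readings -> (force in N, (row, col) CoP)."""
    total = sum(p for row in pressure for p in row)
    if total == 0:
        return 0.0, None
    r = sum(i * p for i, row in enumerate(pressure) for p in row) / total
    c = sum(j * p for row in pressure for j, p in enumerate(row)) / total
    return total * taxel_area, (r, c)

grid = [[0, 0, 0],
        [0, 4000, 4000],
        [0, 0, 0]]
force, cop = contact_summary(grid)
print(force, cop)  # 0.008 N, centred between the two loaded taxels
```

How the centre of pressure drifts over time is one cue for distinguishing a stable hold from incipient slip or stretching.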

Material response: How the object recovers (or fails to recover) after release. Elastic materials spring back; plastic materials retain deformation. This hysteresis provides information about material properties.
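One crude indicator along these lines is a recovery ratio: the fraction of the imposed deformation that disappears after release. The numbers below are made up for illustration:

```python
# Hedged sketch: recovery ratio as a simple elasticity indicator.
# 1.0 = fully elastic (springs back), 0.0 = fully plastic (retains
# the deformation). Example depths are illustrative.
def recovery_ratio(rest, deformed, released):
    """Fraction of deformation recovered after the gripper releases."""
    applied = deformed - rest
    if applied == 0:
        return 1.0
    return (deformed - released) / applied

# Foam: pressed from 0 mm to 10 mm, settles back near 1 mm.
print(recovery_ratio(0.0, 10.0, 1.0))   # 0.9 -> mostly elastic
# Dough: pressed to 10 mm, stays at 9 mm.
print(recovery_ratio(0.0, 10.0, 9.0))   # 0.1 -> mostly plastic
```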

Manipulation outcomes: Whether the task succeeded—did the robot successfully fold the towel, stuff the pillowcase, tie the cable? These binary labels provide the ultimate training signal.

VISME’s Deformable Object Initiative

VISME has launched a dedicated deformable object data collection program, focusing on the soft objects that appear most frequently in real-world applications:

  • Textiles: Towels, shirts, bedsheets, curtains—varying in fabric type, thickness, and size
  • Packaging: Plastic bags, bubble wrap, cardboard boxes with variable contents
  • Food items: Bread, fruit, vegetables—with natural variation in shape and ripeness
  • Cables and ropes: Varying in thickness, flexibility, and entanglement state
  • Organic materials: Plants, leaves, flowers—with living material properties

Each interaction is recorded with synchronized vision (RGB-D), tactile (pressure arrays), and motion (joint torque/position) data. Annotations include material type, deformation state, interaction success, and physical properties.
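To make the shape of such a record concrete, here is one possible schema sketched as a Python dataclass. The field names are our own illustration, not VISME's published format:

```python
# Hedged sketch of one synchronized interaction record; field names
# are illustrative, not an actual dataset schema.
from dataclasses import dataclass

@dataclass
class DeformableInteraction:
    timestamp: float                    # seconds, shared clock
    rgbd_frame: str                     # path to the RGB-D frame
    pressure: list                      # tactile array reading (Pa)
    joint_position: list                # per-joint angles (rad)
    joint_torque: list                  # per-joint torques (N*m)
    material: str = "unknown"           # annotation: material type
    deformation_state: str = "unknown"  # e.g. "flat", "crumpled"
    success: bool = False               # did the interaction succeed?

rec = DeformableInteraction(
    timestamp=12.34,
    rgbd_frame="frames/000123.png",
    pressure=[[0.0] * 4 for _ in range(4)],
    joint_position=[0.1] * 6,
    joint_torque=[0.0] * 6,
    material="cotton",
    deformation_state="folded",
    success=True,
)
print(rec.material, rec.success)
```

The key design point is the shared timestamp: vision, tactile, and motion streams are only useful together if they can be aligned to the same instant.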

Emerging Applications

Early applications of deformable object manipulation data include:

Laundry folding robots: Systems that can handle garments of varying sizes, fabrics, and initial configurations—adapting folding strategies based on real-time tactile feedback.

Food handling robots: Robots that can pick up delicate items like berries or bread without crushing them, using tactile feedback to modulate grip force.
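The grip-modulation idea can be sketched as a simple feedback loop: tighten when a tactile slip signal appears, relax toward a floor otherwise. Gains, thresholds, and force limits below are illustrative assumptions:

```python
# Hedged sketch of tactile grip modulation: squeeze harder on detected
# slip, ease off when stable to avoid bruising. All constants are
# illustrative, not tuned controller values.
def modulate_grip(force, slip_signal, slip_threshold=0.2,
                  step_up=0.1, step_down=0.02,
                  min_force=0.5, max_force=5.0):
    """Return the next grip force (N) from the current tactile reading."""
    if slip_signal > slip_threshold:
        force += step_up            # object slipping: tighten
    else:
        force -= step_down          # stable: relax slightly
    return max(min_force, min(max_force, force))

force = 1.0
for slip in [0.5, 0.4, 0.1, 0.0]:   # slip fades as the grip firms up
    force = modulate_grip(force, slip)
print(round(force, 2))  # 1.16
```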

Surgical assistance: Robots that interact with soft tissue, learning from data how different tissues respond to manipulation and how to apply force safely.

Warehouse picking: Systems that can handle polybags and soft packaging without damage, adapting to variable contents and shapes.

The Road Ahead

Deformable object manipulation remains one of the hardest problems in robotics. But with sufficient data, the patterns begin to emerge. The infinite variations are not random; they follow physical laws and material properties. Data teaches robots these laws, enabling them to handle soft objects with increasing skill.

VISME is committed to providing the data infrastructure that makes this possible. Because a robot that cannot handle soft objects is a robot that cannot handle the real world.

