Presentation

The main objective of the I-DRESS project is to develop a system that provides proactive assistance with dressing to disabled users, or to users such as high-risk health-care workers whose physical contact with garments must be limited to avoid contamination. The proposed robotic system consists of two highly dexterous robotic arms, sensors for multi-modal human-robot interaction, and safety features.

The system will comprise three major components: (a) intelligent algorithms for user and garment recognition, specifically designed for close and physical human-robot interaction, (b) cognitive functions based on multi-modal user input, environment modelling and safety, allowing the robot to decide when and how to assist the user, and (c) an advanced user interface that facilitates intuitive and safe physical and cognitive interaction for support in dressing. The developed interactive system will be integrated on commercial WAM robotic arms and validated through experimentation with users and human-factor analysis in two assistive-dressing scenarios.

The I-DRESS project defines the following objectives:

  • Objective 1: To develop knowledge-based algorithms, built on computer vision, for the detection and tracking of garments and of specific parts of the human body.
  • Objective 2: To develop a multi-modal interaction framework for the autonomous selection and disambiguation of interaction modalities, towards recognising the user's attention and intentions.
  • Objective 3: To develop learning-from-demonstration algorithms that use multi-modal input to create user profiles capturing each user's safe range of motion, speed and proximity.
  • Objective 4: To develop a hazard analysis method that implements strategies for safe robot operation, taking into account the environment, the reliability of user input and ergonomic limits.
  • Objective 5: To design an intuitive user interface across different interaction modalities that facilitates safe physical interaction and cognitive robot behaviour.
  • Objective 6: To integrate and evaluate the multi-modal interactive system on a commercial robotic platform.

The consortium, consisting of three partners, provides the expertise for the main lines of research required by the project:

  • IRI will work on garment and user recognition, multi-modal human-robot interaction and system integration.
  • BRL will provide the expertise in robot safety, human factors and interface design.
  • IDIAP will contribute expertise in robot learning.