Monthly Archives: November 2020

IBEC presents a gesture-based communication protocol to enhance human-robot collaboration


SPECS-lab

28/11/2020

Enhancing human-robot collaboration in industrial environments via a gesture-based communication protocol.

One of the core principles behind the HR-Recycler project is to develop new ways of enhancing human-robot collaboration in industrial environments. A key requirement for establishing efficient collaboration between humans and robots is the development of a concrete communication protocol. In particular, human workers require a fast way to communicate with the robot, as well as a mechanism to control it when necessary.

An obvious choice would be to issue voice commands. However, the industrial recycling plant in which the human-robot interaction will take place is noisy, and any communication protocol that relies on sound will face many problems. A protocol based on facial recognition is also ruled out, since in these contexts the workers need to wear protective masks. Using a set of buttons or a tablet to send information to the robot can be a good solution; however, these mechanisms cannot be the only channel of communication, since workers need a fast and intuitive channel they can resort to even while at the workbench or handling a tool.

SADAKO has built a replica of a workbench at their premises, where the gesture recognition was tested.

In order to achieve this, we have developed a non-verbal communication protocol based on gestures that will serve as input for the robots. We have identified the following messages to be conveyed by gestures: start (or resume a previously paused action), pause (the current action), cancel (the current action), sleep (the robot goes to idle mode), point (a directional point to focus attention), wave (hello), no, and yes.

Ismael performing the Vulcan salutation gesture that means “Live long and prosper”. Probably the most popular way to say “hi” between two rational agents (or at least the coolest, according to science).

To choose which gestures represent each message, we needed to consider two things. First, gestures should be easy to remember, so that human workers will not need training sessions to learn them. Second, they should not be gestures that humans habitually use during communication, to avoid situations where workers perform them spontaneously, for example while interacting with a co-worker.

Upon detection of the “hammering” action, the robot turns on a blue light. This test served to verify that the gesture was correctly identified and received by the robot.

For the technical implementation of such a communication system, two partners of the HR-Recycler consortium have joined efforts. On one side, SADAKO has developed a gesture-detection algorithm to correctly identify, in real time, each of the communication gestures defined in the protocol. On the other side, IBEC has developed the concrete non-verbal communication repertoire, as well as an interaction-control module that integrates the information coming from SADAKO’s gesture recognition software and transforms it into a specific action command that is issued to the robot.

Óscar performs a “wave” gesture that is correctly identified by the gesture recognition software developed by SADAKO.
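
As an illustration of the kind of mapping such an interaction-control module applies, the sketch below translates recognized gesture labels into robot commands. This is a hypothetical Python example: the command names and the confidence threshold are placeholders of ours, not IBEC's actual implementation.

```python
# Hypothetical gesture-to-command mapping; the message names follow the
# protocol above, but command names and threshold are illustrative only.
GESTURE_TO_COMMAND = {
    "start":  "RESUME_ACTION",    # start, or resume a paused action
    "pause":  "PAUSE_ACTION",
    "cancel": "CANCEL_ACTION",
    "sleep":  "GO_IDLE",
    "point":  "FOCUS_ATTENTION",  # directional point
    "wave":   "GREET",
    "yes":    "CONFIRM",
    "no":     "REJECT",
}

def handle_gesture(label, confidence, threshold=0.8):
    """Translate a recognized gesture into a robot command, dropping
    low-confidence detections to avoid spurious triggers."""
    if confidence < threshold:
        return None
    return GESTURE_TO_COMMAND.get(label)
```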

The first physical integration session between two partners of the HR-Recycler consortium took place last month at SADAKO’s premises, to perform the initial tests of the gesture-based HRI communication protocol. There, a team composed of the IBEC and SADAKO groups tested the real-time detection of several of the proposed gestures. They also verified that the interaction manager was receiving the information about the identified gestures and correctly converting it into the corresponding robot commands.

Robotnik explains how to use MoveIt to develop a robotic manipulation application


How to Use MoveIt to Develop a Robotic Manipulation Application

The European Commission-funded HR-Recycler project aims at developing a hybrid human-robot collaborative environment for the disassembly of electronic waste. Humans and robots will work collaboratively, sharing different manipulation tasks. One of these tasks takes place in the disassembly area, where electronic components are sorted by type into their corresponding boxes. To speed up the component-sorting task, Robotnik is developing a mobile robotic manipulator which needs to pick boxes filled with disassembled components from the workbenches and transport them either to their final destination or to further processing areas. MoveIt is an open-source robotic manipulation platform which allows you to develop complex manipulation applications using ROS. Here we present a brief summary of how we used MoveIt's functionalities to develop a pick-and-place application.

Figure 1: Pick and Place task visual description.

We found MoveIt to be very useful in the early stages of developing a robotic manipulation application. It allowed us to decide on the environment setup: whether our robot is capable of performing the manipulation actions we need in that setup, how to arrange the setup for the best performance, and how to design the components of the workspace the robot has to interact with so that they allow for the correct completion of the required manipulation actions. Workspace layout is very easy to carry out in MoveIt, as it allows you to build the environment using mesh objects previously designed in any CAD program and lets your robot interact with them.
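
As a minimal sketch of how such a scene can be populated using MoveIt's Python interface (moveit_commander): the frame name, dimensions, and mesh path below are illustrative placeholders, not the actual HR-Recycler setup.

```python
import sys
import rospy
import moveit_commander
from geometry_msgs.msg import PoseStamped

moveit_commander.roscpp_initialize(sys.argv)
rospy.init_node("scene_setup_demo")

scene = moveit_commander.PlanningSceneInterface()
rospy.sleep(1.0)  # give the scene interface time to connect

# A simple box primitive standing in for the workbench.
table = PoseStamped()
table.header.frame_id = "base_link"  # placeholder planning frame
table.pose.position.x = 0.8
table.pose.position.z = 0.4
table.pose.orientation.w = 1.0
scene.add_box("workbench", table, size=(1.2, 0.8, 0.8))

# A mesh exported from a CAD program, standing in for a component box.
box = PoseStamped()
box.header.frame_id = "base_link"
box.pose.position.x = 0.8
box.pose.position.z = 0.85
box.pose.orientation.w = 1.0
scene.add_mesh("component_box", box, "/path/to/component_box.stl")  # placeholder path
```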

One of MoveIt’s strengths is that it allows you to plan towards any goal position, not only taking the environment scene into account by avoiding objects, but also interacting with it by grabbing objects and including them in the planning process. Any collision object in the MoveIt scene can be attached to a desired robot link; MoveIt will then allow collisions between that link and the object, and once attached, the object will move together with the robot’s link.

Figure 2: MoveIt Planning Scene with collision objects (green) and attached objects (purple).
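
Continuing the hypothetical snippet above, attaching an existing scene object to the end-effector link takes a single call; the link and touch-link names are placeholders.

```python
# Attach the box to the gripper so it moves with the arm, and collisions
# between the listed gripper links and the box are allowed during planning.
eef_link = "gripper_tool0"  # placeholder end-effector link name
scene.attach_mesh(eef_link, "component_box",
                  touch_links=["finger_left", "finger_right"])

# ... plan and execute the transport motion ...

scene.remove_attached_object(eef_link, name="component_box")  # detach at the goal
```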

This functionality helped us determine from the very beginning whether our robot arm was able to reach objects on a table of a certain height, how far away from the table the robot should position itself to reach the objects properly, and whether there is enough space to perform the arm movements needed to manipulate objects around the workspace area. It also helped us design the boxes needed for the task, allowing us to choose a box size that lets the robot arm perform the necessary manipulation movements within the restricted working area.

However, MoveIt’s main use is motion planning. MoveIt includes various tools that allow you to plan motion to a desired pose with high flexibility: you can adjust the motion-planning algorithm to your application to obtain the best performance. This is very useful, as it allows you to restrict your robot’s allowed motion to fit very specific criteria, which is very important in an application like ours, with a restricted working space where the robot needs to manipulate objects precisely in an environment shared with humans.

Figure 3: Planning to a desired goal taking into account collisions with scene.
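
A basic pose-goal plan with moveit_commander could look like the following; the group name, planner id, and target pose are assumptions for illustration.

```python
group = moveit_commander.MoveGroupCommander("arm")  # placeholder group name
group.set_planner_id("RRTConnect")  # planner name depends on the OMPL config
group.set_planning_time(5.0)        # seconds allowed for planning
group.set_num_planning_attempts(10)

target = PoseStamped()
target.header.frame_id = group.get_planning_frame()
target.pose.position.x = 0.6
target.pose.position.z = 0.9
target.pose.orientation.w = 1.0

group.set_pose_target(target)
ok = group.go(wait=True)  # plan around scene collisions and execute
group.stop()
group.clear_pose_targets()
```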

One of our biggest motion requirements is the need for the robot arm to keep the boxes parallel to the ground while manipulating them, as they will be filled with objects that need to be carried between working stations. This can easily be taken into account with MoveIt, as it allows you to plan using constraints. Various constraints can be applied; the ones we found most useful for our application are joint constraints and orientation constraints. Orientation constraints restrict the desired orientation of a robot link; they are very useful for keeping the robot’s end effector parallel to the ground, as needed to manipulate the boxes properly. Joint constraints limit the position of a joint to within a certain bound; they are very useful for shaping the way you want your robot to move. In our application they allowed us to move the arm while maintaining a relative position between the elbow and shoulder, producing more natural movements and avoiding potentially dangerous motions.

Figure 4: Motion Planning with joint and orientation constraints vs without.
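
In moveit_commander terms, both constraint types are plain ROS messages attached as path constraints; the joint name and numeric tolerances below are illustrative, not our tuned values.

```python
from moveit_msgs.msg import Constraints, JointConstraint, OrientationConstraint

# Keep the end effector level: tight roll/pitch tolerance, yaw left free.
oc = OrientationConstraint()
oc.header.frame_id = group.get_planning_frame()
oc.link_name = group.get_end_effector_link()
oc.orientation.w = 1.0               # placeholder "level" orientation
oc.absolute_x_axis_tolerance = 0.1   # rad
oc.absolute_y_axis_tolerance = 0.1
oc.absolute_z_axis_tolerance = 3.14  # yaw unconstrained
oc.weight = 1.0

# Keep the elbow near a nominal value to shape the arm's motion.
jc = JointConstraint()
jc.joint_name = "elbow_joint"        # placeholder joint name
jc.position = 1.2                    # nominal position (rad)
jc.tolerance_above = 0.5
jc.tolerance_below = 0.5
jc.weight = 0.5

constraints = Constraints()
constraints.orientation_constraints.append(oc)
constraints.joint_constraints.append(jc)
group.set_path_constraints(constraints)
# ... set a pose target and plan as before (constrained planning is slower) ...
group.clear_path_constraints()
```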

Another useful MoveIt motion-planning tool is the ability to plan movements to a goal position both in Cartesian and in joint space, letting you switch between the two options for different desired trajectory outcomes. Cartesian-space planning is used whenever you want the end-effector link to follow a very precise motion. In our application we made use of this functionality when moving down from the box approach position to the grab position and back again. Our robot has to carry the boxes with it, and due to the limited space on its base area all of the boxes are quite close together; using Cartesian planning we could ensure verticality while raising a box from its holder, avoiding catching between boxes and unnecessary stops. Joint-space planning, however, is useful for obtaining more natural trajectories when the arm is moving between different grabbing positions, making the movement smoother.

Figure 5: Motion Planning in Cartesian Space vs Joint Space.
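
The vertical lift can be sketched with compute_cartesian_path, which interpolates end-effector waypoints along a straight line; the lift height and step size below are illustrative.

```python
import copy

waypoints = []
lift = copy.deepcopy(group.get_current_pose().pose)
lift.position.z += 0.15  # raise the box 15 cm straight up (placeholder)
waypoints.append(lift)

# Interpolate end-effector poses every 1 cm; jump_threshold disabled (0.0).
plan, fraction = group.compute_cartesian_path(waypoints, 0.01, 0.0)
if fraction > 0.99:       # execute only if the full path could be planned
    group.execute(plan, wait=True)
```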

This is just a brief summary of how we used MoveIt to develop a preliminary robotic pick-and-place manipulation application; there are still many other tools that MoveIt has to offer. Some of MoveIt’s most advanced applications include integrating 3D sensors to build a perception layer used for object recognition in pick-and-place tasks, or using deep-learning algorithms for grasp pose generation, areas that will be explored in the next steps. Stay tuned for future updates on the development of a robotic manipulation application using MoveIt’s latest implementations.

Below you will find a short demonstration of the currently developed application running on Robotnik’s RB-KAIROS mobile manipulator.

Component Sorting manipulation application DEMO

https://www.youtube.com/watch?v=JgyDB57xjDw

TUM describes the transition towards Safe Human-Robot Collaboration through Intelligent Collision Handling


Towards Safe Human-Robot Collaboration: Intelligent Collision Handling

In recent years, new trends in industrial manufacturing have changed the interaction patterns between humans and robots. Instead of the conventional isolation mechanism, close cooperation between humans and robots is more and more required for complicated tasks. The HR-Recycler project seeks a solution that allows close human-robot collaboration for the disassembly of electronic waste within industrial environments. In such a scenario, humans and robots share the same workspace, and the handling of unexpected collisions is among the most important issues of the HR-Recycler system. To be more specific, the robot platform in a disassembly scenario should be able to appropriately detect an unexpected collision and measure its magnitude, such that emergency reaction strategies can take over the task routine to prevent or mitigate possible damage and injuries. Moreover, collisions should be distinguished from demanded physical human-robot interactions (pHRI), or intentional contacts, such that the nominal disassembly tasks are not disturbed. Highly relevant to the HR-Recycler project, the Technical University of Munich (TUM) has developed a novel collision-handling scheme for robot manipulators, which is able to precisely estimate collision forces without torque sensors and to identify collision types from incomplete waveforms. The scheme provides a reliable solution to guarantee the safety of the HR-Recycler robot in complicated environments.

Collision Force Estimation without Torque and Velocity Measurements

When an unexpected collision occurs between the robot and a human or an environmental object, collision forces are exerted on the robot joints, and these can be used to evaluate the effects of the collision. Although force sensors can be installed on the robot to measure collision forces, they are usually quite expensive for low-cost robot platforms such as recycling robots. Thus, TUM proposes a novel method to estimate the collision forces using the system dynamics, without torque sensors (https://bit.ly/3eUuDD7). Unlike conventional collision-force estimation methods, the use of velocity measurements is also avoided, which makes the estimate less sensitive to differentiation noise. In general, the method provides a solution for measuring collisions on low-cost robots with incomplete sensing. It can be used to implement force-feedback admittance control without force measurements (see Figure 1), which conventionally can only be achieved using high-precision force sensors.

Figure 1. The application of the collision force estimation method to force-feedback admittance control in safe HRC. (a) Robot in a nominal task. (b) An external force is exerted on the robot. (c) Admittance behavior of the robot in response to the external force, for safety. (d) Robot returns to the nominal task after the external force vanishes.
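
To make the idea concrete, the sketch below implements a generic Cartesian admittance law of the form M·a + D·v + K·x = f_ext, driven by an estimated external force. This is only an illustration of the control scheme in Figure 1, not TUM's published estimator; all parameter values and the force source f_hat are assumptions.

```python
import numpy as np

# Virtual impedance parameters (illustrative values only).
M = np.diag([2.0, 2.0, 2.0])        # virtual mass
D = np.diag([40.0, 40.0, 40.0])     # virtual damping
K = np.diag([200.0, 200.0, 200.0])  # virtual stiffness, pulls back to nominal
dt = 0.001                          # control period (s)

x = np.zeros(3)  # compliant Cartesian offset from the nominal reference
v = np.zeros(3)  # velocity of the offset

def admittance_step(f_ext):
    """One Euler integration step of M*a + D*v + K*x = f_ext."""
    global x, v
    a = np.linalg.solve(M, f_ext - D @ v - K @ x)
    v = v + a * dt
    x = x + v * dt
    return x

# In the control loop: x_ref = x_nominal + admittance_step(f_hat), where
# f_hat is the estimated external force. When f_hat vanishes, the stiffness
# term drives the offset back to zero and the robot resumes its nominal
# task, matching panels (c)-(d) of Figure 1.
```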

 

Intelligent Collision Classification with Incomplete Force Waveform

Two basic types of physical contact are commonly considered in HRC scenarios. Accidental collisions are unexpected pHRI featuring fast force changes and are dangerous to humans, while intentional contacts are demanded physical contacts with gentle waveforms and are safe in HRC scenarios. In an HR-Recycler disassembly scenario, an accidental collision might be a collision with a human or an undesired workpiece, while an intentional contact might be the human-teaching procedure used to manually adjust the robot’s posture. These two kinds of pHRI usually lead to different consequences and should be classified so as to trigger different safety mechanisms. TUM has developed an intelligent collision classification scheme using supervised learning methods (https://bit.ly/2IwpqWk). To adapt the method to online application, a Bayesian decision method is used to improve the classification accuracy on incomplete signals. The method provides a fast, reliable, and intelligent scheme for distinguishing collisions from safe pHRI, and can be used to trigger different safe reaction strategies (see Figure 2) for flexible and adaptive HRC, enabling a low-cost but reliable collision-handling mechanism for HR-Recycler robots.

Figure 2. The application of the intelligent collision classification method in a flexible collision-handling procedure. (a) A human-robot collision occurs. (b) The collision is identified and an emergency stop is triggered. (c) A safe intentional human-robot contact is exerted. (d) The safe contact is classified and the robot-teaching procedure is automatically enabled.
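
The general idea can be illustrated with a small supervised-learning example that classifies force waveforms from simple features. This is a generic sketch on synthetic data, not TUM's actual pipeline; the features, classifier, and labels are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def waveform_features(f):
    """Simple descriptors of a (possibly truncated) force waveform."""
    df = np.diff(f)
    return [np.max(f), np.mean(f), np.max(df),  # peak, level, fastest rise
            np.mean(np.abs(df)), len(f)]        # roughness, samples observed

# Synthetic placeholder data: gentle ramps (intentional contact) vs sharp
# spikes (accidental collision); real data would be recorded on the robot.
rng = np.random.default_rng(0)
gentle = [np.linspace(0.0, 5.0, 100) + rng.normal(0, 0.1, 100) for _ in range(50)]
sharp = [np.concatenate([np.zeros(60), np.linspace(0.0, 30.0, 40)])
         + rng.normal(0, 0.1, 100) for _ in range(50)]
X = [waveform_features(w) for w in gentle + sharp]
y = [0] * 50 + [1] * 50  # 0 = intentional, 1 = accidental

clf = RandomForestClassifier(n_estimators=100).fit(X, y)

# Online use: classify growing prefixes of the incoming waveform so that a
# decision is available before the contact ends; the Bayesian decision step
# in the paper refines this by accumulating evidence across such prefixes.
print(clf.predict_proba([waveform_features(sharp[0][:70])]))
```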

 

M.Sc. Salman Bari

Research Associate

Chair of Automatic Control Engineering (LSR)

Faculty of Electrical Engineering & Information Technology

Technical University of Munich (TUM)

80333 Munich, Germany