Sadako’s update on IBEC integration tests


In the last week of October 2020, Sadako and IBEC hit an important milestone in the HR-Recycler project as the first integration tests were organized between the two partners. The aim of these tests was to assess the joint functioning of the computer vision modules developed by Sadako and the human-robot interaction modules developed by IBEC, in the simulated setting of a worker dismantling an electronic waste object on a disassembly workbench. More specifically, the functionalities tested were how well the worker can interact with the workbench robot through a predetermined set of gestures, and how well the robot changes its behavior as a function of the human-robot distance. These tests were the subject of a post by IBEC on this blog last month; we at Sadako thought we could add a little insight from the computer vision perspective:

Software integration:

During the tests, production-ready versions of the software were employed, using ROS as an interface both to acquire the images from the RealSense cameras and to output the inferred information later processed by IBEC. A high-level architecture diagram of the software can be seen in the image below:

Both the pose and action recognition software acquire images from the camera through ROS, pre-process the images and send them to the detectors. The detections are then handed back to the pose and action recognition software and sent to IBEC via a ROS topic.
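To make this plumbing concrete, here is a minimal sketch of such a ROS node, assuming a RealSense color topic and a generic detector callable; topic names, message types and the detector interface are our assumptions for illustration, not Sadako's actual code:

```python
# Illustrative sketch of the ROS plumbing described above: subscribe to the
# camera image topic, run a detector, publish the result on a topic consumed
# downstream. Topic names and the detector interface are assumptions.
import rospy
from sensor_msgs.msg import Image
from std_msgs.msg import String
from cv_bridge import CvBridge

class DetectionNode:
    def __init__(self, detector):
        self.bridge = CvBridge()
        self.detector = detector  # any callable returning a result for an image
        self.pub = rospy.Publisher("/hr_recycler/detections", String, queue_size=1)
        rospy.Subscriber("/camera/color/image_raw", Image, self.on_image, queue_size=1)

    def on_image(self, msg):
        frame = self.bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
        result = self.detector(frame)               # e.g. recognized gesture or pose
        self.pub.publish(String(data=str(result)))  # consumed by the partner's module

if __name__ == "__main__":
    rospy.init_node("detection_node")
    DetectionNode(detector=lambda img: "no_detection")
    rospy.spin()
```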

Action recognition software:

Detecting specific actions in a video in real time required a different type of neural network than the one usually used at Sadako Technologies. Indeed, the standard neural networks employed in computer vision extract spatial features from images to detect some specific target object. To infer information from a video feed, temporal features need to be examined in addition to the spatial ones (i.e., how the features evolve through time). In 2018, Facebook research published VMZ, a neural network architecture that has the particularity of performing spatial convolutions (examining one area of the image) and temporal convolutions (examining how a single pixel evolves over time) in parallel on video data. This allows the detection of time-dependent features, such as gestures in this particular case. The network uses state-of-the-art deep learning techniques (residual learning, a.k.a. ResNets) with some changes and adjustments to the architecture to make it able to detect temporal features.
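To illustrate the general idea of treating spatial and temporal convolutions separately on a video tensor, here is a toy PyTorch block; it is only a sketch of a factored spatio-temporal convolution, not the actual VMZ implementation, and the kernel sizes are our assumptions:

```python
# Minimal sketch of a factored spatio-temporal convolution block on video
# tensors shaped (batch, channels, frames, height, width). Illustrative only.
import torch
import torch.nn as nn

class SpatioTemporalConv(nn.Module):
    def __init__(self, in_ch, out_ch, mid_ch=None):
        super().__init__()
        mid_ch = mid_ch or out_ch
        # spatial convolution: 1x3x3 kernel, looks at one frame at a time
        self.spatial = nn.Conv3d(in_ch, mid_ch, kernel_size=(1, 3, 3), padding=(0, 1, 1))
        # temporal convolution: 3x1x1 kernel, looks at how each location evolves over time
        self.temporal = nn.Conv3d(mid_ch, out_ch, kernel_size=(3, 1, 1), padding=(1, 0, 0))
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.temporal(self.relu(self.spatial(x))))

clip = torch.randn(1, 3, 16, 112, 112)   # one 16-frame RGB clip
features = SpatioTemporalConv(3, 64)(clip)
print(features.shape)
```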

Pose detection software:

The other tested functionality was the ability of the robot to change its behavior (i.e., movement speed) when the worker gets closer to the robot than some specified distance. To develop this functionality, the OpenPose software was used to retrieve the worker’s skeleton joint coordinates on the RGB image. These joint coordinates were then matched with the depth image provided by the camera to locate the worker in space, relative to the camera. However, knowing where a worker is relative to the camera is not enough to measure the worker-robot distance. To output worker-robot coordinates, a calibration procedure was developed by Sadako in which the camera’s position is measured, using the RGB and depth information, with respect to a marker visible in the scene. The robot’s position with respect to this marker is also measured; crossing both measurements allows the code to return an accurate measurement of the human-robot distance in real time:
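The geometry behind this calibration can be sketched in a few lines: express both the worker (seen by the camera) and the robot in the common marker frame and take the Euclidean distance. The transforms and numbers below are placeholders, not Sadako's calibration code:

```python
# Illustrative numpy sketch of the marker-based calibration idea: both the
# camera and the robot are expressed in the marker frame (4x4 homogeneous
# transforms measured offline), so a worker joint seen by the camera can be
# compared with the robot position. Values are dummy placeholders.
import numpy as np

def to_homogeneous(p):
    return np.append(np.asarray(p, dtype=float), 1.0)

def human_robot_distance(T_marker_camera, T_marker_robot, worker_in_camera):
    """T_marker_camera: camera pose in the marker frame (4x4).
    T_marker_robot: robot base pose in the marker frame (4x4).
    worker_in_camera: 3D point of a worker joint in the camera frame."""
    worker_in_marker = T_marker_camera @ to_homogeneous(worker_in_camera)
    robot_in_marker = T_marker_robot @ to_homogeneous([0.0, 0.0, 0.0])
    return float(np.linalg.norm(worker_in_marker[:3] - robot_in_marker[:3]))

# Dummy example: camera 2 m behind the marker, robot 1 m to its right.
T_cam = np.eye(4); T_cam[2, 3] = -2.0
T_rob = np.eye(4); T_rob[0, 3] = 1.0
print(human_robot_distance(T_cam, T_rob, worker_in_camera=[0.3, 0.0, 1.5]))
```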

The integration tests with IBEC were successful despite being organized against the ticking clock of rising Covid-19 cases, which threatened to limit access to the facilities and the ability to bring together teams from different companies. Further tests are currently being organized between Sadako and IBEC to evaluate improved versions of both parties’ software, taking into account the feedback gathered during the first integration test session.

CS GROUP’s HR Building Editor tool for the creation of 3D Factory models


The HR-Recycler project deals with activities related to ‘hybrid human-robot recycling activities for electrical and electronic equipment’ operating in an indoor environment. One of the tasks of the project focuses on the development of a “hybrid recycling factory for electrical and electronic equipment”. To this end, CS GROUP is developing a 3D Building Editor that allows people to rapidly design a 3D model of their factory. The main objective of this building editor is to provide an easy-to-use 3D modelling tool. The building editor makes it possible to produce multi-storey 3D models following BIM principles and uses the widely adopted Industry Foundation Classes (.ifc) open file format for interoperability.
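Because the editor exports standard .ifc files, the resulting model can be inspected with any IFC toolchain. As a hedged illustration (using the open-source ifcopenshell library; the file name and element types are assumptions, and this is not the Building Editor's own export code), one could list the storeys, walls and spaces of an exported factory model like this:

```python
# Hedged sketch: inspecting a factory model exported as IFC with ifcopenshell.
import ifcopenshell

model = ifcopenshell.open("factory.ifc")

for storey in model.by_type("IfcBuildingStorey"):
    print("Storey:", storey.Name)

for wall in model.by_type("IfcWall"):
    print("Wall:", wall.GlobalId, wall.Name)

for space in model.by_type("IfcSpace"):   # e.g. zones such as navigation areas
    print("Space:", space.GlobalId, space.Name)
```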

During the project we have worked to provide a tool that makes it possible to design, in a few clicks, the 3D plan of a factory containing the essential elements such as walls, windows, floors, stairs…

For robot navigation purposes, the tool was designed to enable the creation of a “navigation mesh” used to assign a specific state to an area, for example to help AGV navigation. This option makes it possible to distinguish between different areas in order to avoid human-robot collisions.

As shown in Figure 1 and Figure 2, the implemented tool makes it possible to quickly draw a 3D plan and to design windows, walls, doors and even specific areas that represent the robot navigation area, while offering two views to the user: one in 2D and a second in 3D.

For example (Figure 3), we have implemented a very simple process to draw walls: the user first selects the “Wall” tool in the top toolbar (Figure 2), then positions the first point of the wall and adds the following ones. Once created, a wall can be edited unless its position has been secured and locked.

In the same way, windows and doors can easily be placed on the 2D view by selecting the corresponding icon in the toolbar. Elements that cannot be placed at the selected location are highlighted in red.

Following the same method, storeys can easily be added and managed within the building editor by using the “Buildings” menu, as shown in Figure 1.

Furthermore, as described above, we have worked to allow the user to create a navigation mesh (Figure 4) as soon as the factory floor model is created. Through the tool, the user can draw areas (shown in red in the 2D view and in green in the 3D view in Figure 4) that represent the navigation mesh. The robot will thus be able to know the zones where it is authorized to circulate. These zones, once created with the building editor, can be extracted to be interpreted by the FFM and the factory floor orchestrator during the factory setup export.

Once the 3D model is completed, the user can export it to the factory floor configurator and the factory floor orchestrator to allow the positioning of the different POIs and other orchestration purposes.

Comau explains the development of a novel ROS interface


COMAU and ROS driver for robot operations

Due to the need to use a common platform based on the ROS environment for all the partners of the consortium (and subsequently for customers or end users who want to use the same solution), Comau worked on a novel ROS interface for the robots involved in WEEE management (classification and disassembling scenarios) that could be extended to its entire robotics portfolio. In particular, the new ROS interface is able to manage both joint and Cartesian positions from the robot to the external world, publishing the appropriate topics over a TCP/IP channel.

Two different modalities have been developed for the two specific cases related to classification and disassembling scenarios:

  1. For the classification phase, a vision system able to detect the parts in the box has been integrated through a customized ROS driver that sends the data from the vision system (Cartesian positions) to the controller, as shown in Picture 2.
  2. In the disassembling case, the need is for a high frequency rate in order to correct the robot motion with feedforward control using the data of a force/torque sensor mounted on the flange of the robot. In this case a frequency of 500 Hz makes it possible to smoothly correct the path of the robot for delicate disassembly tasks such as continuous and contact processes (e.g. grinding or cutting). A specific option, called SENSOR_TRACKING, has been integrated into the ROS architecture; it is able to read and write the sensor data in real time (up to 2 ms) and send it to the controller for the motion execution. A minimal sketch of such a correction loop is shown below.
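The following is only an illustrative rospy sketch of a 500 Hz loop that turns force/torque readings into small path corrections; the topic names, message types and gain are our assumptions, and this is not Comau's SENSOR_TRACKING implementation:

```python
# Illustrative 500 Hz sensor-tracking loop: read a wrench, publish a correction.
import rospy
from geometry_msgs.msg import WrenchStamped, Twist

class SensorTrackingLoop:
    def __init__(self):
        self.wrench = None
        self.gain = 0.0005                      # m per N, purely illustrative
        rospy.Subscriber("/ft_sensor/wrench", WrenchStamped, self.on_wrench)
        self.pub = rospy.Publisher("/robot/path_correction", Twist, queue_size=1)

    def on_wrench(self, msg):
        self.wrench = msg.wrench

    def run(self):
        rate = rospy.Rate(500)                  # 500 Hz -> 2 ms cycle
        while not rospy.is_shutdown():
            if self.wrench is not None:
                corr = Twist()
                corr.linear.x = self.gain * self.wrench.force.x
                corr.linear.y = self.gain * self.wrench.force.y
                corr.linear.z = self.gain * self.wrench.force.z
                self.pub.publish(corr)          # feedforward correction to the controller
            rate.sleep()

if __name__ == "__main__":
    rospy.init_node("sensor_tracking")
    SensorTrackingLoop().run()
```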

IBEC presents a gesture-based communication protocol to enhance human-robot collaboration


SPECS-lab

28/11/2020

Enhancing human-robot collaboration in industrial environments via a gesture-based communication protocol.

One of the core principles behind the HR-Recycler project is to develop new ways of enhancing human-robot collaboration in industrial environments. A key requirement for establishing an efficient collaboration between humans and robots is the development of a concrete communication protocol. In particular, human workers require a fast way to communicate with the robot, as well as a mechanism to control it in case it is necessary.

An obvious choice would be to issue voice commands. However, the industrial recycling plant in which the human-robot interaction will take place is noisy, and any communication protocol that relies on sound will face many problems. A communication protocol based on facial recognition is also discarded, since the workers need to wear protective masks in this context. Using a set of buttons or a tablet to send information to the robot can be a good solution; however, these mechanisms cannot be the only channel of communication, since the worker needs a fast and intuitive communication channel to which they can resort even when at the workbench or handling a tool.

SADAKO has built a replica of a workbench in their premises in which the gesture recognition was tested.

To achieve this, we have developed a non-verbal communication protocol based on gestures that will serve as an input for the robots. We have identified the following messages for which gestures will be employed: start (or resume a previously paused action), pause (current action), cancel (current action), sleep (robot goes to idle mode), point (directional point to focus attention), wave (hello), no, yes.

Ismael performing the Vulcan salutation gesture that means “Live long and prosper”. Probably the most popular way to say “hi” between two rational agents (or at least the coolest, according to science).

To choose which gestures will represent each of the communicated messages, we need to consider two things. First, gestures should be easy to remember, so that human workers will not need training sessions to memorize them; for this reason we do not suggest highly complex gestures. Second, they should not be too simple or be gestures that humans habitually use during communication, to avoid situations where workers spontaneously perform them, for example while interacting with a co-worker.

Upon detection of the “hammering” action, the robot signals a blue light. This test served to illustrate that the gesture was correctly identified and received by the robot.

For the technical implementation of such a communication system, two partners of the HR-Recycler consortium have joined efforts. On one side, SADAKO has developed a gesture detection algorithm to correctly identify in real time each of the communication gestures defined in the protocol. On the other side, IBEC has developed the concrete non-verbal communication repertoire, as well as an interaction-control module that integrates the information coming from SADAKO’s gesture recognition software and transforms it into a specific action command that is issued to the robot.
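A minimal sketch of such an interaction-control node is shown below; the topic names, message types and command strings are illustrative assumptions, not IBEC's actual module:

```python
# Illustrative mapping from detected gestures to robot commands.
import rospy
from std_msgs.msg import String

GESTURE_TO_COMMAND = {
    "start": "RESUME_TASK",
    "pause": "PAUSE_TASK",
    "cancel": "CANCEL_TASK",
    "sleep": "GO_IDLE",
    "point": "FOCUS_ATTENTION",
    "wave": "GREET",
    "yes": "CONFIRM",
    "no": "REJECT",
}

def on_gesture(msg, pub):
    command = GESTURE_TO_COMMAND.get(msg.data)
    if command is not None:
        pub.publish(String(data=command))       # issue the action command to the robot

if __name__ == "__main__":
    rospy.init_node("interaction_manager")
    pub = rospy.Publisher("/robot/command", String, queue_size=1)
    rospy.Subscriber("/gesture_recognition/detections", String, on_gesture, callback_args=pub)
    rospy.spin()
```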

Óscar performs a “wave” gesture that is correctly identified by the gesture recognition software developed by SADAKO.

The first physical integration session between two partners of the HR-Recycler consortium took place last month at SADAKO’s premises to perform the initial tests of the gesture-based HRI communication protocol. There, a team composed of members of the IBEC and SADAKO groups tested the real-time detection of several of the proposed gestures. They also verified that the interaction manager was receiving the information on the identified gestures and correctly converting it into the corresponding robot commands.

Robotnik explains how to use MoveIt to develop a robotic manipulation application



The European Commission-funded HR-Recycler project aims at developing a hybrid human-robot collaborative environment for the disassembly of electronic waste. Humans and robots will work collaboratively, sharing different manipulation tasks. One of these tasks takes place in the disassembly area, where electronic components are sorted by type into their corresponding boxes. To speed up the component sorting task, Robotnik is developing a mobile robotic manipulator which needs to pick up boxes filled with disassembled components from the workbenches and transport them either to their final destination or to further processing areas. MoveIt is an open-source robotic manipulation platform which allows you to develop complex manipulation applications using ROS. Here we present a brief summary of how we used MoveIt functionalities to develop a pick and place application.

Figure 1: Pick and Place task visual description.

We found MoveIt to be very useful in the early stages of developing a robotic manipulation application. It allowed us to decide on the environment setup: whether our robot is capable of performing the manipulation actions we need it to perform in that setup, how to arrange the setup for the best performance, and how to design the components of the workspace the robot has to interact with so that they allow the correct completion of the manipulation actions needed in the application. Workspace layout is very easy to carry out in MoveIt, as it allows you to build the environment using mesh objects previously designed in any CAD program and lets your robot interact with them.

One of MoveIt’s strengths is that it allows you to plan towards any goal position, not only taking the environment scene into account by avoiding objects, but also interacting with it by grabbing objects and including them in the planning process. Any Collision Object in the MoveIt scene can be attached to a desired robot link; MoveIt will then allow collisions between that link and the object, and once attached the object will move together with the robot’s link.
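As a hedged sketch of this workflow with the moveit_commander Python interface (the planning group, link and object names, sizes and poses are our assumptions, not the actual application code), adding a box to the scene and attaching it to the end effector looks roughly like this:

```python
# Sketch: add a collision box to the planning scene and attach it to the gripper.
import sys
import rospy
import moveit_commander
from geometry_msgs.msg import PoseStamped

moveit_commander.roscpp_initialize(sys.argv)
rospy.init_node("attach_box_demo")
scene = moveit_commander.PlanningSceneInterface()
group = moveit_commander.MoveGroupCommander("arm")   # assumed group name

box_pose = PoseStamped()
box_pose.header.frame_id = group.get_planning_frame()
box_pose.pose.orientation.w = 1.0
box_pose.pose.position.x = 0.5
box_pose.pose.position.z = 0.4
scene.add_box("component_box", box_pose, size=(0.3, 0.2, 0.15))   # collision object

# Attach the existing object to the end-effector link; collisions with the
# listed touch_links are allowed, and the box now moves with the arm.
eef_link = group.get_end_effector_link()
scene.attach_box(eef_link, "component_box", touch_links=[eef_link])
```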

Figure 2: MoveIt Planning Scene with collision objects (green) and attached objects (purple).

This functionality helped us determine from the very beginning whether our robot arm was able to reach objects on a table of a certain height, how far away from the table the robot should position itself to reach the objects properly, and whether there is enough space to perform the arm movements needed to manipulate objects around the workspace area. It also helped us design the boxes needed for the task, allowing us to decide on the correct box size that lets the robot arm perform the necessary manipulation movements given the restricted working area.

However, MoveIt’s main use is motion planning. MoveIt includes various tools that allow you to perform motion planning to a desired pose with high flexibility; you can adjust the motion planning algorithm to your application to obtain the best performance. This is very useful as it allows you to restrict your robot’s allowed motion to fit very specific criteria, which is very important in an application like ours, with a restricted working space where the robot needs to manipulate objects precisely in an environment shared with humans.

Figure 3: Planning to a desired goal taking into account collisions with scene.

One of the biggest motion requirements we have is the need for the robot arm to keep the boxes parallel to the ground when manipulating them, as they will be filled with objects that need to be carried between working stations. This can easily be taken into account with MoveIt, as it allows you to plan using constraints. There are various constraints that can be applied; the ones we found most useful for our application are joint constraints and orientation constraints. Orientation constraints allow you to restrict the desired orientation of a robot link; they are very useful to keep the robot’s end effector parallel to the ground, which is needed to manipulate the boxes properly. Joint constraints limit the position of a joint to be within a certain bound; they are very useful to shape the way you want your robot to move, and in our application they allowed us to move the arm while maintaining a relative position between the elbow and shoulder, producing more natural movements and avoiding potentially dangerous motions.
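A hedged example of planning with an orientation constraint (keeping the end effector level) is sketched below; the group name, tolerances and target pose are illustrative assumptions:

```python
# Sketch: plan to a pose while constraining the end-effector orientation.
import sys
import rospy
import moveit_commander
from moveit_msgs.msg import Constraints, OrientationConstraint

moveit_commander.roscpp_initialize(sys.argv)
rospy.init_node("constrained_planning_demo")
group = moveit_commander.MoveGroupCommander("arm")   # assumed group name

oc = OrientationConstraint()
oc.header.frame_id = group.get_planning_frame()
oc.link_name = group.get_end_effector_link()
oc.orientation.w = 1.0                      # desired "level" orientation
oc.absolute_x_axis_tolerance = 0.1          # small roll tolerance
oc.absolute_y_axis_tolerance = 0.1          # small pitch tolerance
oc.absolute_z_axis_tolerance = 3.14         # free rotation around the vertical axis
oc.weight = 1.0

constraints = Constraints()
constraints.orientation_constraints.append(oc)
group.set_path_constraints(constraints)

group.set_pose_target([0.5, 0.2, 0.6, 0.0, 0.0, 0.0])   # x, y, z, roll, pitch, yaw
group.go(wait=True)
group.clear_path_constraints()
```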

Figure 4: Motion Planning with joint and orientation constraints vs without.

Another useful MoveIt motion planning tool is the ability to plan movements to a goal position both in Cartesian and in joint space, allowing you to switch between these two options for different desired trajectory outcomes. Cartesian-space planning is used whenever you want to follow a very precise motion with the end effector link. In our application we made use of this functionality when moving down from the box approach position to the grab position and back again. Our robot has to carry the boxes with it, and due to limited space on its base area all of the boxes are quite close together; using Cartesian planning we could ensure verticality while raising the box from its holder, avoiding snagging between boxes and unnecessary stops. Joint-space planning, however, is useful to obtain more natural trajectories when the arm is moving between different grabbing positions, making movement smoother.
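The two planning styles can be sketched as follows with moveit_commander; the group name, distances, step size and the named target are our assumptions for illustration:

```python
# Sketch: Cartesian-space descent to a grasp pose vs. a joint-space motion.
import sys
import copy
import rospy
import moveit_commander

moveit_commander.roscpp_initialize(sys.argv)
rospy.init_node("cartesian_vs_joint_demo")
group = moveit_commander.MoveGroupCommander("arm")   # assumed group name

# Cartesian-space planning: interpolate the end effector along a straight line.
start = group.get_current_pose().pose
grasp = copy.deepcopy(start)
grasp.position.z -= 0.15                    # 15 cm straight down onto the box
path, fraction = group.compute_cartesian_path([grasp], eef_step=0.01, jump_threshold=0.0)
if fraction > 0.99:                         # fraction of the path that could be planned
    group.execute(path, wait=True)

# Joint-space planning: smoother, more natural motion between grab positions.
group.set_named_target("transport_pose")    # assumed named configuration
group.go(wait=True)
```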

Figure 5: Motion Planning in Cartesian Space vs Joint Space.

This is just a brief summary of how we used MoveIt to develop a preliminary robotic pick and place manipulation application; there are still lots of different tools that MoveIt has to offer. Some of MoveIt’s most advanced applications include integrating 3D sensors to build a perception layer used for object recognition in pick and place tasks, or using deep learning algorithms for grasp pose generation, areas that will be explored in the next steps. Stay tuned for future updates on the development of a robotic manipulation application using MoveIt’s latest implementations.

Down below you will find a short demonstration of the currently developed application running on Robotnik’s RB-KAIROS mobile manipulator.

Component Sorting manipulation application DEMO

https://www.youtube.com/watch?v=JgyDB57xjDw

TUM describes the transition towards Safe Human-Robot Collaboration through Intelligent Collision Handling


Towards Safe Human-Robot Collaboration: Intelligent Collision Handling

In recent years, new trends in industrial manufacturing have changed the interaction patterns between humans and robots. Instead of the conventional isolation mechanism, close cooperation between humans and robots is more and more required for complicated tasks. The HR-Recycler project seeks a solution that allows close human-robot collaboration for the disassembly of electronic waste within industrial environments. In such a scenario, humans and robots share the same workspace, and the handling of unexpected collisions is among the most important issues of the HR-Recycler system. To be more specific, the robot platform in a disassembly scenario should be able to appropriately detect an unexpected collision and measure its magnitude, such that emergency reaction strategies can take over the task routine to prevent or mitigate possible damage and injuries. Moreover, collisions should be distinguished from demanded physical human-robot interactions (pHRI), or intentional contacts, such that the nominal disassembly tasks are not disturbed. Highly relevant to the HR-Recycler project, the Technical University of Munich (TUM) has developed a novel collision-handling scheme for robot manipulators, which is able to precisely measure the collision forces without torque sensors and to identify the collision types from incomplete waveforms. The scheme provides a reliable solution to guarantee the safety of the HR-Recycler robot in complicated environments.

Collision Force Estimation without Torque and Velocity Measurements

When an unexpected collision occurs between the robot and a human or an environmental object, collision forces are exerted on the robot joints, which can be used to evaluate the effects of the collision. Although force sensors can be installed on the robot to measure the collision forces, they are usually quite expensive for low-cost robot platforms such as recycling robots. Thus, TUM proposes a novel method to estimate the collision forces from the system dynamics, without torque sensors (https://bit.ly/3eUuDD7). Unlike conventional collision force estimation methods, the use of velocity measurements is also avoided, which reduces the influence of differentiation noise on the estimate. In general, the method provides a solution for measuring collisions on low-cost robots with incomplete sensing. The method can be used to implement force-feedback admittance control without force measurement (see Figure 1), which conventionally can only be achieved using high-precision force sensors.
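To give a feel for the admittance behaviour shown in Figure 1, here is a purely illustrative discrete-time admittance law: the estimated external force (a placeholder here, not TUM's torque- and velocity-free estimator) shifts the commanded reference so that the robot yields compliantly. All parameters are assumptions:

```python
# Toy admittance controller: M*x_ddot + D*x_dot + K*(x - x_ref) = f_ext.
import numpy as np

class AdmittanceController:
    def __init__(self, mass=2.0, damping=20.0, stiffness=100.0, dt=0.002):
        self.M, self.D, self.K, self.dt = mass, damping, stiffness, dt
        self.x = np.zeros(3)       # commanded Cartesian offset
        self.x_dot = np.zeros(3)

    def step(self, f_ext, x_ref=np.zeros(3)):
        x_ddot = (f_ext - self.D * self.x_dot - self.K * (self.x - x_ref)) / self.M
        self.x_dot += x_ddot * self.dt
        self.x += self.x_dot * self.dt
        return self.x              # add this offset to the nominal Cartesian command

ctrl = AdmittanceController()
for _ in range(500):               # 1 s of a constant 10 N push along x
    offset = ctrl.step(np.array([10.0, 0.0, 0.0]))
print(offset)                      # the robot has yielded in the direction of the push
```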

Figure 1. The application of the collision force estimation method to a force-feedback admittance control in safe HRC. (a). Robot in a nominal task. (b). An external force exerted on the robot. (c). Admittance behavior of the robot to the external force for safety. (d). Robot back to the nominal task after the vanishing of the external force.

 

Intelligent Collision Classification with Incomplete Force Waveform

There are two basic types of physical contact commonly considered in HRC scenarios. Accidental collisions are unexpected pHRI, characterized by fast changes, and are dangerous to humans, while intentional contacts are demanded physical contacts with gentle waveforms and are safe in HRC scenarios. In an HR-Recycler disassembly scenario, an accidental collision can be a collision with a human or with an undesired workpiece, while an intentional contact may be the human-teaching procedure used to manually adjust the robot’s posture. These two kinds of pHRI usually lead to different consequences and should be classified in order to trigger different safety mechanisms. TUM has developed an intelligent collision classification scheme using supervised learning methods (https://bit.ly/2IwpqWk). To adapt the method to online application, a Bayesian decision method is used to improve the classification accuracy with incomplete signals. The method provides a fast, reliable and intelligent scheme to distinguish collisions from safe pHRI, and can be used to trigger different safe reaction strategies (see Figure 2) for flexible and adaptive HRC, providing a low-cost but reliable collision-handling mechanism for HR-Recycler robots.
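The classification idea can be illustrated with a toy example: simple features of a short force window (peak value, sharpness, etc.) feed a supervised classifier that separates sharp accidental collisions from gentle intentional contacts. The features, synthetic data and classifier choice below are illustrative assumptions, not TUM's actual scheme:

```python
# Toy supervised classification of contact type from force-waveform features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def waveform_features(force_window):
    """force_window: 1D array of estimated external force over a short horizon."""
    df = np.diff(force_window)
    return [force_window.max(), np.abs(df).max(), force_window.mean(), force_window.std()]

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 100)
collisions = [np.exp(-((t - 0.2) / 0.02) ** 2) * rng.uniform(20, 40) for _ in range(50)]  # sharp spikes
contacts   = [np.sin(np.pi * t) * rng.uniform(3, 8) for _ in range(50)]                   # slow, gentle pushes

X = np.array([waveform_features(w) for w in collisions + contacts])
y = np.array([1] * 50 + [0] * 50)          # 1 = accidental collision, 0 = intentional contact

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
test = np.exp(-((t - 0.5) / 0.02) ** 2) * 30
print(clf.predict([waveform_features(test)]))   # expected: [1], i.e. a collision
```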

Figure 2. The application of the intelligent collision classification method in a flexible collision handling procedure. (a). A human-robot collision occurs. (b). Collision is identified and an emergent stop is triggered. (c). A safe intentional human-robot contact is exerted. (d). Safe contact is classified and the robot teaching procedure is automatically enabled.

 

M.Sc. Salman Bari

Research Associate

Chair of Automatic Control Engineering (LSR)

Faculty of Electrical Engineering & Information Technology

Technical University of Munich (TUM)

80333 Munich, Germany

TECNALIA studies the measurement of human trust in collaborative robots


“Workmates of Indumental, Gaiker and Tecnalia interacted with a computer simulation of a manufacturing machine”

Nowadays, the trend in industrial environments is to achieve hybrid human-robot collaboration by replacing multiple currently manual, expensive and time-consuming tasks with corresponding automatic robot-based procedures.

More specifically, the goal of the HR-Recycler project is to create a hybrid collaborative environment, where humans and robots will share and undertake, at the same time, different processing and manipulation tasks, targeting the industrial application case of WEEE recycling.

However, in order to achieve a successful interaction, a great level of trust is required between human and machine. Therefore, our project considers human factors and social cognition as key components for evaluating the robot’s behaviour in terms of trust.

Within the scope of this project, TECNALIA is working on the development of a human-robot trust classification model that will be trained using psychophysiological signals recorded during human-robot interactive collaboration.

Industrial environments are normally quite unfriendly for testing this type of development (due to the complexity and quantity of the Personal Protection Equipment used). So, in this research project, the trust classifier will be validated in a realistic 3D Virtual Reality industrial environment, which will be implemented ad hoc for this purpose.

The novelty of this project is the advance in the inference of human trust when interacting with a robot co-worker: new psychophysiological signals such as respiration and eye tracking are included in the development of a machine-learning-based trust classifier, and the validation takes place in an ad hoc 3D Virtual Reality environment that requires user interaction and physical movements that may generate noise on the desired signals.

The main objective is to detect robust psychophysiological signals that discriminate between trust and distrust situations in human-robot interaction. This will make it possible to design machines capable of responding to changes in human trust level and of rebalancing the robot’s behaviour when a low confidence level is detected.

To achieve this objective, TECNALIA will carry out a two-stage empirical procedure.

The first experiment was held last week with the collaboration of 60 colleagues from Indumental, Gaiker and Tecnalia. In this experiment, each person interacted with a computer simulation of a manufacturing machine. Each trial presented a stimulus (sensor reading), a response (participant’s choice) and an outcome (machine working or not). This study was laboratory-based, and several psychophysiological signals were captured while forcing trust/distrust situations. This will allow us to model a human trust classifier according to the most significant signals.
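As a purely illustrative sketch of the kind of machine-learning pipeline described (the feature names and data below are invented placeholders, not TECNALIA's dataset or model), a trust/distrust classifier over psychophysiological features might be prototyped like this:

```python
# Toy trust-vs-distrust classification pipeline over psychophysiological features.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
# columns: mean heart rate, skin conductance level, respiration rate, fixation duration
X = rng.normal(size=(60, 4))               # dummy features for 60 participants
y = rng.integers(0, 2, size=60)            # 1 = trust, 0 = distrust (dummy labels)

model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
print(cross_val_score(model, X, y, cv=5).mean())   # cross-validated accuracy
```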

The second experiment will use a VR environment to recreate real working conditions and validate the previous analytical results. Using a virtual environment allows us to provoke some validating conditions (for instance, robot malfunction) without running unnecessary health risks.

The VR experiment will replicate the exact layout of a human-robot collaborative workstation of a manufacturing plant. The participants will be asked to perform the same activities as the workers in the real plant, and the virtual robots will interact with them on the same basis as in reality. However, during the experiments the robots will sometimes malfunction (they will have longer waiting times or will move more abruptly), thus compromising human trust.

HR-Recycler workshop “Shared workspace between humans and robots”



The HR-Recycler workshop “Shared workspace between humans and robots” took place on the 28th of July 2020, hosted at the 9th edition of the Living Machines international conference on biomimetics and biohybrid systems: http://livingmachinesconference.eu/2020/

The aim of this HR-Recycler workshop was to present and discuss, together with scientific and industrial stakeholders, novel technological approaches that facilitate the collaboration between robots and humans towards solving challenging tasks in a shared working space without fences.

Human-Robot Collaboration (HRC) has been recently introduced in industrial environments, where the fast and precise, but at the same time dangerous, traditional industrial robots have started being replaced with industrial collaborative robots. The rationale behind the selection of the latter is to combine the endurance and precision of the robot with the dexterity and problem-solving ability of humans. An advantage of industrial collaborative robots (or cobots) is that they can coexist with humans without the need to be kept behind fences. Cobots can be utilised in numerous industrial tasks for automated parts assembly, disassembly, inspection, and co-manipulation.

Embedded in the most exciting environment provided by the Living Machines conference, one of the foremost conferences on robotics in the world, the HR-Recycler workshop was attended by about 50 participants and covered topics related to smart mechatronics, computer vision in robot-assisted tasks, human-robot collaboration, and safety in the workspace. In addition, the workshop not only leveraged the results of the EU-funded project HR-Recycler (https://www.hr-recycler.eu/), which introduces the use of industrial collaborative robots for disassembling WEEE devices, but also welcomed contributions from projects in industrial collaborative robotics as well as the broader research community. The other EU projects involved in the workshop were ROSSINI (http://www.rossini-project.com), COLLABORATE (collaborate-project.eu) and SHAREWORK (https://sharework-project.eu/).

What all projects have in common is the use of industrial robots that collaborate with humans while performing assembly and disassembly tasks. The overarching goal of these projects is to improve the overall productivity of a hybrid cell (which includes humans and robots), where ultimately, the human will assume a supervisory role, thus decreasing their cognitive load and fatigue and increasing their free time. Here, the human, safe operation, and adaptability are key components for a successful Human-Robot Collaboration task.

As highlighted during the workshop’s presentations and discussion, the human worker is central to all the participating projects. When developing systems with collaborative robotic agents, the human-related factors need to be included, as they may influence the quality of the robotic operations. An important aspect that was raised during the workshop is the need for new Key Performance Indicators (KPIs) to measure HRC ergonomics. Participants also acknowledged the value of systematically evaluating the performance of humans and robots, as well as the collaboration as perceived by the human workers. Ethics were also discussed, as all systems should operate taking into consideration the regulatory, legal, ethical and societal challenges of robotics in industrial automation. Human safety is essential, and participants reflected on the challenge that arises from the trade-off between performance, accuracy and safe operation. Finally, to develop collaborative systems that will interact with a variety of users in dynamic settings, their design should include dynamic adaptation to both the operator and the environment.

Although the time of the workshop was limited, a lot of interesting and crucial topics arose regarding safe and successful Human-Robot Collaboration. These fruitful discussions not only brought forward the current challenges that this field is facing, but also the opportunity and necessity for the relevant stakeholders to meet, discuss, exchange ideas and collaborate.

We hope to be part of more similar events!

Examples of the workshop’s presentations. The full program of the workshop can be found HERE.

ORGANIZERS

– Apostolos Axenopoulos: Centre for Research and Technology Hellas – Information Technologies Institute
– Dimitrios Giakoumis: Centre for Research and Technology Hellas – Information Technologies Institute
– Eva Salgado Etxebarria: Tecnalia
– Vicky Vouloutsi: Institute for Bioengineering of Catalonia (IBEC), SPECS-Lab
– Anna Mura: Institute for Bioengineering of Catalonia (IBEC), SPECS-Lab

Autonomous pallet transportation in factory floor environments



Multi-pallet detection in factory floor environments

Within the HR-Recycler project, a novel pallet-truck AGV for the autonomous transportation of pallets between the classification area and the workstations will be developed, enabling the automation of intra-factory logistics transportation of WEEE devices within a collaborative factory floor environment. The pallet truck to be developed should be endowed with multi-pallet detection, pallet pick-up, navigation and control capabilities.

Technical Challenges

The operation of pallet-truck AGVs in human-populated factory environments is a challenging task, especially when they are required to operate without following fixed paths defined by guide wires, magnetic tape, magnets, or transponders embedded in the floor. There are several methods that tackle the task of autonomous pallet transportation, and they usually rely on computer vision approaches that vary for indoor/outdoor environments. Yet, most of them are designed to operate on predetermined paths, and their global navigation is controlled by a central system typically linked to the Enterprise Resource Planning (ERP) system of the factory.

Robotic Solution: Multi-pallet detection and docking strategy

A dedicated method has been developed by CERTH-ITI for the pallet-truck AGV developed by the Robotnik partner within the HR-Recycler project. The solution comprises a vision-based method that enables safe and autonomous operation of pallet-moving vehicles, accommodating pallet detection, pose estimation, docking control and pallet pick-up in such industrial environments. A dedicated perception topology has been applied, relying on stereo vision and laser-based measurements installed on board a nominal pallet-truck model. Pallet detection and pose estimation are performed with model-based geometrical pattern matching on point cloud data, exploiting CAD models of universal pallet types, while robot alignment to the candidate pallet is performed with a dedicated visual servoing controller. To allow safe and unconstrained operation, a human-aware navigation method has been developed to cope with human presence both during global path planning and during the pallet-docking navigation phase. The developed method has been assessed in the Gazebo simulation environment with a pallet-truck model and proved to have real-time performance, achieving increased accuracy in navigation, pallet detection and pick-up (see Figure 1).
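To give an intuition of the point-to-point alignment step, here is a hedged sketch of a simple proportional controller that steers the pallet truck towards an estimated pallet pose; the gains, caps and thresholds are our assumptions and this is not CERTH-ITI's actual controller:

```python
# Toy point-to-point docking controller towards an estimated pallet pose.
import math

def docking_cmd(pallet_x, pallet_y, pallet_yaw, k_lin=0.4, k_ang=1.2, stop_dist=0.05):
    """pallet_x, pallet_y: pallet position in the robot frame [m];
    pallet_yaw: pallet heading relative to the robot [rad]."""
    distance = math.hypot(pallet_x, pallet_y)
    if distance < stop_dist:
        return 0.0, 0.0                         # docked: hand over to the pick-up controller
    bearing = math.atan2(pallet_y, pallet_x)    # angle towards the pallet
    v = min(k_lin * distance, 0.3)              # forward velocity, capped
    w = k_ang * bearing + 0.5 * pallet_yaw      # steer towards and align with the pallet
    return v, w

print(docking_cmd(1.5, 0.2, 0.1))               # (forward velocity, angular velocity)
```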

The Visual Analytics Lab (VARLab) of CERTH-ITI

Figure 1: In the first panel, the robot approaches the pallet form-up area in the Gazebo simulation environment. In the second panel, the multi-pallet detection algorithm is applied along with the global planner that navigates the robot towards the alignment state. A point-to-point visual servoing controller drives the pallet truck towards the selected pallet, as illustrated in the third panel. The last panel outlines the docking and final pick-up of the pallet with a joint state controller.

Lifelong mapping for dynamic factory floor environments



Mapping the factory floor environments

The HR-Recycler project aims to automate the transportation of WEEE devices and disassembled components within a collaborative factory floor environment. To achieve this, different types of AGVs dedicated to specific roles during the disassembly process will be developed. To enable AGV navigation in such environments, simultaneous localization and mapping (SLAM) methods will be utilized. However, in highly dynamic environments such as factory floors with moving objects, the maps built with classic SLAM soon become obsolete and robot navigation performance degrades.

Technical Challenges

Despite the leaps of progress made in the field of mobile robotics in recent years, one major challenge that AGVs still face is that of long-term autonomous operation in dynamic environments. In the HR-Recycler scenario, where the AGV operates on a factory floor (see Figure 1), it has to deal with changing conditions in which other robots, workers, and moving objects such as pallets and even commodities move around the factory environment. In this scenario, with typical SLAM mapping, the static objects (such as walls) will constitute only a fraction of the information present in the map during robot navigation. If we consider this map as the ground truth and use it while disregarding ongoing changes, it is very challenging to maintain stable robot localization even if a robust localization filter is employed.

Figure 1: Gazebo simulation of the BIANATT S.A. (end-user) factory floor environment. Note that the majority of the existing information in the illustrated infrastructure corresponds to dynamic objects.

Robotic Solution: Life-long mapping

CERTH-ITI has developed a solution to this issue by employing a life-long mapping approach with the ability to distinguish static and dynamic areas and to handle this information accordingly during robot navigation. The ability to identify areas that exhibit high or low dynamics can improve the navigation of mobile robots as well as the process of long-term mapping of the environment. We utilized temporal persistence modeling in order to predict the state of cells in the life-long map by gathering sparse observations from the robot’s on-board sensors. This allows the persistence of objects in the map to be modeled and provides the system with prior knowledge regarding the occupancy of the area where the robot operates. The method improves robot navigation by avoiding congested areas and reducing re-planning, thus leading the robot to its target location without unnecessary maneuvering. Simulation results on life-long mapping with temporal persistence modeling are outlined in Figure 2, which illustrates the static metric map (left) and the probability of the dynamic areas obtained through temporal persistence modeling (right). A toy sketch of such per-cell persistence bookkeeping is given below.
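The sketch below is only illustrative of the general idea (each cell's persistence belief is reinforced when observed occupied and decays otherwise, so long-lived structure stands out); the update rule and rates are our assumptions, not CERTH-ITI's exact model:

```python
# Toy per-cell persistence bookkeeping over an occupancy grid.
import numpy as np

class PersistenceMap:
    def __init__(self, shape, gain=0.2, decay=0.05):
        self.belief = np.full(shape, 0.5)   # persistence probability per cell
        self.gain, self.decay = gain, decay

    def update(self, observed_occupied):
        """observed_occupied: boolean grid from the current sensor sweep."""
        self.belief[observed_occupied] += self.gain * (1.0 - self.belief[observed_occupied])
        self.belief[~observed_occupied] -= self.decay * self.belief[~observed_occupied]

    def static_mask(self, threshold=0.8):
        return self.belief > threshold      # cells likely to persist (e.g. walls)

m = PersistenceMap((100, 100))
wall = np.zeros((100, 100), dtype=bool)
wall[0, :] = True                           # a wall seen in every sweep
for _ in range(50):
    m.update(wall)
print(m.static_mask().sum())                # ~100 persistent cells along the wall
```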

The Visual Analytics Lab (VARLab) of CERTH-ITI

Figure 2: The outcome of life-long mapping. The left image illustrates the metric map produced by classical 2D SLAM, while the right image corresponds to the dynamic-obstacle persistence probability calculated from the temporal persistence modeling.