IBEC presents a gesture-based communication protocol to enhance human-robot collaboration


SPECS-lab

28/11/2020

Enhancing human-robot collaboration in industrial environments via a gesture-based communication protocol.

One of the core principles behind the HR-Recycler project is to develop new ways of enhancing human-robot collaboration in industrial environments. A key requirement for establishing an efficient collaboration between humans and robots is the development of a concrete communication protocol. In particular, human workers require a fast way to communicate with the robot, as well as a mechanism to control it in case it is necessary.

An obvious choice would be to issue voice commands. However, the industrial recycling plant in which the human-robot interaction will take place is noisy, so any communication protocol that relies on sound will face many problems. A protocol based on facial recognition is also discarded, since workers need to wear protective masks in this context. Using a set of buttons or a tablet to send information to the robot can be a good solution. However, these mechanisms cannot be the only channel of communication, since workers need a fast and intuitive channel to which they can resort even while at the workbench or handling a tool.

SADAKO has built a replica of a workbench on their premises, where the gesture recognition was tested.

In order to achieve that, we have developed a non-verbal communication protocol based on gestures that will serve as an input for the robots. We have identified the following messages where gestures will be employed: start (or resume a previously paused action), pause (current action), cancel (current action), sleep (robot goes to idle mode), point (directional point to focus attention), wave (hello), no, yes.
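
The protocol described above is essentially a mapping from recognized gestures to robot commands. The sketch below illustrates one way such a mapping could look; the gesture labels come from the list above, while the command names, the confidence threshold, and the function itself are illustrative assumptions, not the project's actual implementation.

```python
# Minimal sketch of a gesture-to-command mapping. Gesture labels follow
# the protocol described in the post; command names are hypothetical.

GESTURE_COMMANDS = {
    "start":  "RESUME_ACTION",    # start, or resume a previously paused action
    "pause":  "PAUSE_ACTION",     # pause current action
    "cancel": "CANCEL_ACTION",    # cancel current action
    "sleep":  "GO_IDLE",          # robot goes to idle mode
    "point":  "FOCUS_ATTENTION",  # directional point to focus attention
    "wave":   "GREET",            # hello
    "no":     "NEGATIVE_ACK",
    "yes":    "POSITIVE_ACK",
}

def gesture_to_command(label: str, confidence: float, threshold: float = 0.8):
    """Map a recognized gesture to a robot command.

    Low-confidence detections are ignored, so ambiguous or spontaneous
    movements do not accidentally trigger robot actions.
    """
    if confidence < threshold or label not in GESTURE_COMMANDS:
        return None
    return GESTURE_COMMANDS[label]
```

An interaction-control module of the kind described below would consume these commands and forward them to the robot controller.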

Ismael performing the Vulcan salutation gesture that means “Live long and prosper”. Probably the most popular way to say “hi” between two rational agents (or at least the coolest, according to science).

To choose which gestures will represent each of the communicated messages, we need to consider two things. First, gestures should be easy to remember, so human workers will not need training sessions to memorize them; we therefore avoid highly complex gestures. Second, gestures should not be too simple or largely employed by humans during everyday communication, to avoid situations where workers spontaneously perform them, for example, while interacting with a co-worker.

Upon detection of the “hammering” action, the robot signals a blue light. This test served to illustrate that the gesture was correctly identified and received by the robot.

For the technical implementation of such a communication system, two partners of the HR-Recycler consortium have joined efforts. On one side, SADAKO has developed a gesture detection algorithm to correctly identify, in real time, each of the communication gestures defined in the protocol. On the other side, IBEC has developed the concrete non-verbal communication repertoire, as well as an interaction-control module that integrates the information coming from SADAKO’s gesture recognition software and transforms it into a specific action command that is issued to the robot.

Óscar performs a “wave” gesture that is correctly identified by the gesture recognition software developed by SADAKO.

The first physical integration session between two partners of the HR-Recycler consortium took place last month at SADAKO’s premises to perform the initial tests of the gesture-based HRI communication protocol. There, a team composed of the IBEC and SADAKO groups tested the real-time detection of several of the proposed gestures. They also verified that the interaction manager was receiving the information about the identified gestures and correctly converting it into the corresponding robot commands.

Robotnik explains how to use MoveIt to develop a robotic manipulation application


HOW TO USE MOVEIT TO DEVELOP A ROBOTIC MANIPULATION APPLICATION

The European Commission-funded HR-Recycler project aims at developing a hybrid human-robot collaborative environment for the disassembly of electronic waste. Humans and robots will work collaboratively, sharing different manipulation tasks. One of these tasks takes place in the disassembly area, where electronic components are sorted by type into their corresponding boxes. To speed up the component-sorting task, Robotnik is developing a mobile robotic manipulator that needs to pick boxes filled with disassembled components from the workbenches and transport them either to their final destination or to further processing areas. MoveIt is an open-source robotic manipulation platform that allows you to develop complex manipulation applications using ROS. Here we present a brief summary of how we used MoveIt to develop a pick-and-place application.

Figure 1: Pick and Place task visual description.

We found MoveIt to be very useful in the early stages of developing a robotic manipulation application. It allowed us to decide on the environment setup, to verify whether our robot is capable of performing the manipulation actions we need in that setup, to arrange the setup for the best performance, and to design the components of the workspace the robot has to interact with so that the required manipulation actions can be completed correctly. Workspace layout is very easy to carry out in MoveIt, as it allows you to build the environment using mesh objects previously designed in any CAD program and lets your robot interact with them.

One of MoveIt’s strengths is that it allows you to plan towards any goal position, not only taking the environment scene into account by avoiding objects, but also interacting with it by grabbing objects and including them in the planning process. Any MoveIt scene Collision Object can be attached to a desired robot link; MoveIt will then allow collisions between that link and the object, and once attached, the object will move together with the robot’s link.
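
The attach semantics described above can be sketched in a few lines of plain Python. This is an illustration of what attaching amounts to conceptually (collisions between link and object become allowed, and the object moves rigidly with the link), not MoveIt's actual API; all class and method names here are hypothetical.

```python
# Conceptual sketch of an "attached collision object", as in a planning
# scene: attaching allows link-object collisions and makes the object
# move rigidly with its link. Names are illustrative, not MoveIt's API.

class PlanningSceneSketch:
    def __init__(self):
        self.objects = {}    # object name -> position (x, y, z)
        self.attached = {}   # object name -> robot link it is attached to
        self.allowed = set() # (link, object) pairs allowed to collide

    def add_object(self, name, position):
        self.objects[name] = position

    def attach(self, name, link):
        """Attach an existing scene object to a robot link."""
        self.attached[name] = link
        self.allowed.add((link, name))

    def move_link(self, link, delta):
        """Attached objects follow their link's motion."""
        dx, dy, dz = delta
        for name, lnk in self.attached.items():
            if lnk == link:
                x, y, z = self.objects[name]
                self.objects[name] = (x + dx, y + dy, z + dz)

    def collision_allowed(self, link, name):
        return (link, name) in self.allowed
```

In MoveIt itself this bookkeeping is handled by the planning scene and its allowed collision matrix; the sketch only shows the behavior the post describes.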

Figure 2: MoveIt Planning Scene with collision objects (green) and attached objects (purple).

This functionality helped us determine from the very beginning whether our robot arm was able to reach objects on a table of a certain height, how far away from the table the robot should position itself to reach the objects properly, and whether there is enough space to perform the arm movements needed to manipulate objects around the workspace area. It also helped us design the boxes needed for the task, allowing us to decide on the correct box size that lets the robot arm perform the necessary manipulation movements given the restricted working area.

However, MoveIt’s main use is motion planning. MoveIt includes various tools that allow you to plan motions to a desired pose with high flexibility: you can adjust the motion-planning algorithm to your application to obtain the best performance. This is very useful, as it allows you to restrict your robot’s allowed motion to fit very specific criteria, which is very important in an application like ours, with a restricted working space where the robot needs to manipulate objects precisely in an environment shared with humans.

Figure 3: Planning to a desired goal taking into account collisions with scene.

One of our biggest motion requirements is that the robot arm must keep the boxes parallel to the ground when manipulating them, as they will be filled with objects that need to be carried between working stations. This can easily be taken into account with MoveIt, as it allows you to plan using constraints. Various constraints can be applied; the ones we found most useful for our application are joint constraints and orientation constraints. Orientation constraints restrict the desired orientation of a robot link; they are very useful to keep the robot’s end effector parallel to the ground, which is needed to manipulate the boxes properly. Joint constraints limit the position of a joint to a certain bound; they are very useful to shape the way you want your robot to move. In our application, they allowed us to move the arm while maintaining a relative position between the elbow and shoulder, producing more natural movements and avoiding potentially dangerous motions.
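
The two constraint types can be illustrated with simple checks. The sketch below verifies an orientation constraint ("end effector parallel to the ground, within an angular tolerance") and a joint constraint (a joint position within a bound); the vector representation, function names, and tolerance values are illustrative assumptions, not MoveIt's constraint messages.

```python
import math

# Minimal sketch of the two constraint checks described in the post.
# The end-effector orientation is represented by its "up" axis vector;
# names and tolerances are illustrative, not MoveIt's API.

def within_orientation_constraint(up_axis, tolerance_rad=0.1):
    """True if the end-effector 'up' axis deviates from the world
    vertical (0, 0, 1) by less than tolerance_rad."""
    ux, uy, uz = up_axis
    norm = math.sqrt(ux * ux + uy * uy + uz * uz)
    # Angle between the axis and vertical, clamped for numeric safety.
    angle = math.acos(max(-1.0, min(1.0, uz / norm)))
    return angle <= tolerance_rad

def within_joint_constraint(position, lower, upper):
    """True if a joint position lies inside its allowed bound."""
    return lower <= position <= upper
```

A planner enforcing these constraints would reject any candidate state for which either check fails.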

Figure 4: Motion Planning with joint and orientation constraints vs without.

Another useful MoveIt motion-planning feature is that it allows you to plan movements to a goal position in both Cartesian and joint space, letting you switch between these two options for different desired trajectory outcomes. Cartesian-space planning is used whenever you want the end-effector link to follow a very precise motion. In our application we made use of this functionality when moving down from the box approach position to the grab position and back again. Our robot has to carry the boxes with it, and due to the limited space on its base area, all of the boxes are quite close together; using Cartesian planning we could ensure verticality while raising a box from its holder, avoiding latching between boxes and unnecessary stops. Joint-space planning, however, is useful for obtaining more natural trajectories when the arm moves between different grabbing positions, making movement smoother.
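
The difference between the two planning spaces can be seen with a toy planar 2-link arm: interpolating in Cartesian space moves the end effector along a straight line, while interpolating in joint space generally traces a curved end-effector path. The link lengths and step counts below are illustrative assumptions.

```python
import math

# Toy contrast of Cartesian-space vs joint-space interpolation on a
# planar 2-link arm. Link lengths are illustrative assumptions.

L1, L2 = 1.0, 1.0  # link lengths

def forward(q1, q2):
    """End-effector position of the planar 2-link arm."""
    x = L1 * math.cos(q1) + L2 * math.cos(q1 + q2)
    y = L1 * math.sin(q1) + L2 * math.sin(q1 + q2)
    return x, y

def joint_space_path(q_start, q_goal, steps=10):
    """Linear interpolation of joint angles; the end effector
    generally follows a curved path."""
    return [forward(q_start[0] + (q_goal[0] - q_start[0]) * t / steps,
                    q_start[1] + (q_goal[1] - q_start[1]) * t / steps)
            for t in range(steps + 1)]

def cartesian_path(p_start, p_goal, steps=10):
    """Linear interpolation of the end-effector position; the end
    effector moves in a straight line (e.g. staying vertical while
    lifting a box)."""
    return [(p_start[0] + (p_goal[0] - p_start[0]) * t / steps,
             p_start[1] + (p_goal[1] - p_start[1]) * t / steps)
            for t in range(steps + 1)]
```

In a real arm, each Cartesian waypoint would additionally be converted to joint angles via inverse kinematics, which is where MoveIt's Cartesian planning does the heavy lifting.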

Figure 5: Motion Planning in Cartesian Space vs Joint Space.

This is just a brief summary of how we used MoveIt to develop a preliminary robotic pick-and-place manipulation application; there are still plenty of other tools that MoveIt has to offer. Some of MoveIt’s most advanced applications include integrating 3D sensors to build a perception layer used for object recognition in pick-and-place tasks, or using deep-learning algorithms for grasp-pose generation, areas that will be explored in the next steps. Stay tuned for future updates on the development of a robotic manipulation application using MoveIt’s latest implementations.

Below you will find a short demonstration of the currently developed application running on Robotnik’s RB-KAIROS mobile manipulator.

Component Sorting manipulation application DEMO

https://www.youtube.com/watch?v=JgyDB57xjDw

TUM describes the transition towards Safe Human-Robot Collaboration through Intelligent Collision Handling


Towards Safe Human-Robot Collaboration: Intelligent Collision Handling

In recent years, new trends in industrial manufacturing have changed the interaction patterns between humans and robots. Instead of the conventional isolation mechanism, close cooperation between humans and robots is more and more required for complicated tasks. The HR-Recycler project seeks a solution to allow close human-robot collaboration for the disassembly of electronic waste within industrial environments. In such a scenario, humans and robots share the same workspace, and the handling of unexpected collisions is among the most important issues of the HR-Recycler system. To be more specific, the robot platform in a disassembly scenario should be able to appropriately detect an unexpected collision and measure its magnitude, such that emergency reaction strategies can take over the task routine to prevent or mitigate possible damage and injuries. Moreover, collisions should be distinguished from the demanded physical human-robot interactions (pHRI), or intentional contacts, such that the nominal disassembly tasks are not disturbed. Highly relevant to the HR-Recycler project, the Technical University of Munich (TUM) has developed a novel collision-handling scheme for robot manipulators, which is able to precisely estimate collision forces without torque sensors and to identify collision types from incomplete waveforms. The scheme provides a reliable solution to guarantee the safety of the HR-Recycler robot in complicated environments.

Collision Force Estimation without Torque and Velocity Measurements

When an unexpected collision occurs between the robot and a human or an environmental object, collision forces are exerted on the robot joints, which can be used to evaluate the effects of the collision. Although force sensors can be installed on the robot to measure collision forces, they are usually quite expensive for low-cost robot platforms such as recycling robots. Thus, TUM proposes a novel method to estimate collision forces from the system dynamics, without torque sensors (https://bit.ly/3eUuDD7). Unlike conventional collision force estimation methods, the use of velocity measurements is also avoided, which improves the estimator’s robustness to differentiation noise. In general, the method provides a solution for measuring collisions on low-cost robots with incomplete sensing. The method can be used to implement force-feedback admittance control without force measurement (see Figure 1), which conventionally can only be achieved using high-precision force sensors.
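
To give a flavor of estimating external forces from the dynamics rather than from sensors, the sketch below implements the classic momentum-observer residual for a single joint with known inertia. Note that this simple version still uses velocity measurements, which the TUM scheme additionally avoids; it is only an illustration of the underlying idea, and all numbers are made up.

```python
# Rough illustration of residual-based external torque estimation for a
# single joint with known inertia (classic momentum-observer idea).
# This is NOT the TUM scheme, which also avoids velocity measurements;
# it only shows how external torque can be recovered from dynamics.

def estimate_external_torque(inertia, velocities, commanded_torques, dt,
                             gain=50.0):
    """Return the residual signal r(t), which tracks the external torque.

    The generalized momentum is p = I * v. The observer integrates the
    momentum predicted from the commanded torque plus the residual, and
    the residual is driven by the gap to the measured momentum.
    """
    residuals = []
    r = 0.0
    p_hat = inertia * velocities[0]  # initial momentum estimate
    for v, tau in zip(velocities, commanded_torques):
        p_hat += (tau + r) * dt            # integrate predicted momentum
        r = gain * (inertia * v - p_hat)   # residual tracks external torque
        residuals.append(r)
    return residuals
```

With a constant external torque and zero commanded torque, the residual converges to the external torque with a time constant of roughly 1/gain.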

Figure 1. The application of the collision force estimation method to a force-feedback admittance control in safe HRC. (a). Robot in a nominal task. (b). An external force exerted on the robot. (c). Admittance behavior of the robot to the external force for safety. (d). Robot back to the nominal task after the vanishing of the external force.

 

Intelligent Collision Classification with Incomplete Force Waveform

Two basic types of physical contact are commonly considered in HRC scenarios. Accidental collisions are unexpected pHRI characterized by fast changes and are dangerous to humans, while intentional contacts are demanded physical contacts with gentle waveforms and are safe in HRC scenarios. In an HR-Recycler disassembly scenario, an accidental collision can be a collision with a human or an undesired workpiece, and an intentional contact may be the human-teaching procedure used to manually adjust the robot’s posture. These two kinds of pHRI usually lead to different consequences and should be classified to trigger different safety mechanisms. TUM has developed an intelligent collision classification scheme using supervised learning methods (https://bit.ly/2IwpqWk). To adapt the method to online application, a Bayesian decision method is used to improve the classification accuracy with incomplete signals. The method provides a fast, reliable, and intelligent scheme for distinguishing collisions from safe pHRI, and can be used to trigger different safe reaction strategies (see Figure 2) for the sake of flexible and adaptive HRC, enabling a low-cost but reliable collision-handling mechanism for HR-Recycler robots.
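
The signal difference the classifier exploits can be illustrated with a single waveform feature: accidental collisions show a sharp, fast force rise, while intentional contacts rise gently. The threshold rule below is only an illustration of that difference; the TUM scheme uses supervised learning with a Bayesian decision rule on incomplete signals, and the threshold value here is an assumption.

```python
# Minimal illustration of separating accidental collisions (sharp force
# transients) from intentional contacts (gentle ones) by rise rate.
# The TUM scheme uses supervised learning + Bayesian decisions; this
# single-feature threshold is only a didactic stand-in.

def max_rise_rate(force_samples, dt):
    """Largest sample-to-sample increase of the force signal, per second."""
    return max((b - a) / dt
               for a, b in zip(force_samples, force_samples[1:]))

def classify_contact(force_samples, dt, rate_threshold=100.0):
    """'collision' for sharp transients, 'intentional' for gentle contact."""
    if max_rise_rate(force_samples, dt) > rate_threshold:
        return "collision"
    return "intentional"
```

In the flexible handling procedure of Figure 2, the "collision" label would trigger an emergency stop, while "intentional" would enable the teaching mode.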

Figure 2. The application of the intelligent collision classification method in a flexible collision handling procedure. (a). A human-robot collision occurs. (b). The collision is identified and an emergency stop is triggered. (c). A safe intentional human-robot contact is exerted. (d). The safe contact is classified and the robot teaching procedure is automatically enabled.

 

M.Sc. Salman Bari

Research Associate

Chair of Automatic Control Engineering (LSR)

Faculty of Electrical Engineering & Information technology

Technical University of Munich (TUM)

80333 Munich, Germany

TECNALIA studies the measurement of human trust in collaborative robots


“Workmates of Indumental, Gaiker and Tecnalia interacted with a computer simulation of a manufacturing machine”

Nowadays, the trend in industrial environments is to achieve hybrid human-robot collaboration by replacing multiple currently manual, expensive and time-consuming tasks with corresponding automatic, robot-based procedures.

More specifically, the goal of the HR-Recycler project is to create a hybrid collaborative environment, where humans and robots will share and undertake, at the same time, different processing and manipulation tasks, targeting the industrial application case of WEEE recycling.

However, in order to achieve successful interaction, a great level of trust is required between human and machine. Therefore, our project considers human factors and social cognition as key components for evaluating the robot’s behaviour in terms of trust.

Within the scope of this project, TECNALIA is working on the development of a human-robot trust classification model that will be trained using psychophysiological signals recorded during human-robot interactive collaboration.

Industrial environments are normally quite unfriendly for testing these types of developments (due to the complexity and quantity of Personal Protective Equipment items used). So, in this research project, the trust classifier will be validated in a realistic 3D Virtual Reality industrial environment, which will be implemented ad hoc for this purpose.

The novelty of this project is the advance in inferring human trust when interacting with a robot co-worker: new psychophysiological signals, such as respiration and eye tracking, are included in the development of a machine-learning-based trust classifier, and validation takes place in an ad hoc 3D Virtual Reality environment that requires user interaction and physical movements that may introduce noise into the desired signals.

The main objective is to detect robust psychophysiological signals that discriminate between trust and distrust situations in human-robot interaction. This will enable the design of machines capable of responding to changes in the human trust level and rebalancing the robot’s behaviour when a low confidence level is detected.
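
As a toy picture of what such a trust/distrust classifier does, the sketch below trains a nearest-centroid rule on two illustrative features (heart rate, respiration rate). The actual TECNALIA classifier, its features, and its training data are project-specific and not public; every number below is made up for illustration only.

```python
# Toy nearest-centroid trust/distrust classifier over two illustrative
# psychophysiological features. All features, labels, and values are
# hypothetical stand-ins for the project's actual model.

def centroid(samples):
    """Mean of a list of equal-length feature tuples."""
    n = len(samples)
    return tuple(sum(s[i] for s in samples) / n
                 for i in range(len(samples[0])))

def train(trust_samples, distrust_samples):
    """Store one centroid per class."""
    return {"trust": centroid(trust_samples),
            "distrust": centroid(distrust_samples)}

def predict(model, features):
    """Label of the closest class centroid (squared Euclidean distance)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda label: dist2(model[label], features))
```

A deployed system would feed such predictions back into the robot controller to rebalance its behaviour when distrust is detected.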

To achieve this objective, TECNALIA will follow a two-stage empirical procedure.

The first experiment was held last week with the collaboration of 60 workmates from Indumental, Gaiker and Tecnalia. In this experiment, each person interacted with a computer simulation of a manufacturing machine; each trial presented a stimulus (sensor reading), a response (participant’s choice) and an outcome (machine working or not). The study was laboratory-based, and several psychophysiological signals were captured while forcing trust/distrust situations. This will allow us to model a human trust classifier based on the most significant signals.

The second experiment will use a VR environment to recreate real working conditions and validate the previous analytical results. Using a virtual environment allows us to provoke some validating conditions (for instance, robot malfunction) without running unnecessary health risks.

The VR experiment will replicate the exact layout of a human-robot collaborative workstation in a manufacturing plant. The participants will be asked to perform the same activities as the workers in the real plant, and the virtual robots will interact with them as they would in reality. However, during the experiments, the robots will sometimes malfunction (they will have longer waiting times or will move more abruptly), thus compromising human trust.

Hr-Recycler workshop “Shared workspace between humans and robots”



The Hr-Recycler workshop “Shared workspace between humans and robots” took place on the 28th of July 2020 hosted at the 9th edition of the Living Machines international conference on biomimetics and biohybrid systems. http://livingmachinesconference.eu/2020/

The aim of this Hr-Recycler workshop was to present and discuss together with scientific and industrial stakeholders novel technological approaches that facilitate the collaboration between robots and humans towards solving challenging tasks in a shared working space without fences.

Human-Robot Collaboration (HRC) has been recently introduced in industrial environments, where the fast and precise, but at the same time dangerous, traditional industrial robots have started being replaced with industrial collaborative robots. The rationale behind the selection of the latter is to combine the endurance and precision of the robot with the dexterity and problem-solving ability of humans. An advantage of industrial collaborative robots (or cobots) is that they can coexist with humans without the need to be kept behind fences. Cobots can be utilised in numerous industrial tasks for automated parts assembly, disassembly, inspection, and co-manipulation.

Embedded in the most exciting environment provided by the Living Machines conference, one of the foremost conferences on robotics in the world, the HR-Recycler workshop was attended by about 50 participants and covered topics related to smart mechatronics, computer vision in robot-assisted tasks, human-robot collaboration, and safety in the workspace. In addition, the workshop not only leveraged the results of the EU-funded project HR-Recycler (https://www.hr-recycler.eu/), which introduces the use of industrial collaborative robots for disassembling WEEE devices, but also welcomed contributions from other projects in industrial collaborative robotics as well as the broader research community. The other EU projects involved in the workshop were Rossini (http://www.rossini-project.com), COLLABORATE (collaborate-project.eu) and SHAREWORK (https://sharework-project.eu/).

What all projects have in common is the use of industrial robots that collaborate with humans while performing assembly and disassembly tasks. The overarching goal of these projects is to improve the overall productivity of a hybrid cell (which includes humans and robots), where ultimately, the human will assume a supervisory role, thus decreasing their cognitive load and fatigue and increasing their free time. Here, the human, safe operation, and adaptability are key components for a successful Human-Robot Collaboration task.

As highlighted during the workshop’s presentations and discussion, the human worker is central to all the participating projects. When developing systems with collaborative robotic agents, human-related factors need to be included, as they may influence the quality of the robotic operations. An important aspect raised during the workshop is the need for new Key Performance Indicators (KPIs) to measure HRC ergonomics. Participants also acknowledged the value of systematically evaluating the joint performance of humans and robots, as well as the collaboration as perceived by the human workers. Ethics were also discussed, as all systems should operate taking into consideration the regulatory, legal, ethical and societal challenges of robotics in industrial automation. Human safety is essential, and participants reflected on the challenge that arises from the trade-off between performance, accuracy and safe operation. Finally, to develop collaborative systems that will interact with a variety of users in dynamic settings, their design should include dynamic adaptation to both the operator and the environment.

Although the time of the workshop was limited, a lot of interesting and crucial topics arose for a safe and successful Human-Robot Collaboration. These fruitful discussions not only brought forward the current challenges that this field is facing, but also the opportunity and necessity for the relevant stakeholders to meet, discuss, exchange ideas and collaborate.

We hope to be part of more similar events!

Examples of the workshop’s presentations. The full program of the workshop can be found HERE.

ORGANIZERS

– Apostolos Axenopoulos: Centre for Research and Technology Hellas – Information Technologies Institute
– Dimitrios Giakoumis: Centre for Research and Technology Hellas – Information Technologies Institute
– Eva Salgado Etxebarria: Tecnalia
– Vicky Vouloutsi: Institute for Bioengineering of Catalonia (IBEC), SPECS-Lab
– Anna Mura: Institute for Bioengineering of Catalonia (IBEC), SPECS-Lab

Autonomous pallet transportation in factory floor environments



Multi-pallet detection in factory floor environments

Within HR-Recycler, a novel pallet-truck AGV for the autonomous transportation of pallets between the classification area and the workstations will be developed, enabling the automation of intra-factory logistics transport of WEEE devices within a collaborative factory floor environment. The pallet truck to be developed should be endowed with multi-pallet detection and with pallet pick-up navigation and control capabilities.

Technical Challenges

The operation of pallet-truck AGVs in human-populated factory environments is a challenging task, especially when they are required to operate without following fixed paths defined by guide wires, magnetic tape, magnets, or transponders embedded in the floor. Several methods tackle the task of autonomous pallet transportation, and they usually rely on computer vision approaches that vary between indoor and outdoor environments. Yet most of them are restricted to operating along predetermined paths, and their global navigation is controlled by a central system typically linked to the Enterprise Resource Planning (ERP) system of the factory.

Robotic Solution: Multi-pallet detection and docking strategy

A dedicated method has been developed by CERTH-ITI for the pallet-truck AGV developed by the Robotnik partner within the HR-Recycler project. The solution comprises a vision-based method that enables the safe and autonomous operation of pallet-moving vehicles, accommodating pallet detection, pose estimation, docking control and pallet pick-up in such industrial environments. A dedicated perception topology has been applied, relying on stereo vision and laser-based measurements installed on board a nominal pallet-truck model. Pallet detection and pose estimation are performed with model-based geometrical pattern matching on point cloud data, exploiting CAD models of universal pallet types, while robot alignment to a candidate pallet is performed with a dedicated visual servoing controller. To allow safe and unconstrained operation, a human-aware navigation method has been developed to cope with human presence both during global path planning and during the pallet-docking navigation phase. The developed method has been assessed in the Gazebo simulation environment with a pallet-truck model and proved to have real-time performance, achieving increased accuracy in navigation, pallet detection and pick-up (see Figure 1).
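
The point-to-point alignment step can be illustrated with a standard proportional pose controller for a unicycle-like vehicle: the controller drives the distance and bearing error to the detected pallet towards zero. The kinematic model, gains, and time step below are illustrative assumptions, not the project's actual controller.

```python
import math

# Illustrative proportional controller driving a unicycle-like vehicle
# towards a target pose (e.g. the pre-docking pose in front of a
# detected pallet). Gains and the kinematic model are assumptions.

def servo_step(pose, target, k_lin=0.5, k_ang=1.0, dt=0.1):
    """One control step. pose and target are (x, y, heading) tuples."""
    x, y, th = pose
    tx, ty, _ = target
    ex, ey = tx - x, ty - y
    rho = math.hypot(ex, ey)             # distance to target
    alpha = math.atan2(ey, ex) - th      # bearing error
    v = k_lin * rho                      # forward velocity command
    w = k_ang * alpha                    # angular velocity command
    # Integrate the unicycle kinematics over one step.
    return (x + v * math.cos(th) * dt,
            y + v * math.sin(th) * dt,
            th + w * dt)
```

Iterating this step moves the vehicle to the target; in the real system, the target pose would come from the pallet pose estimator at every cycle rather than being fixed.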

The Visual Analytics Lab (VARLab) of CERTH-ITI

Figure 1 In the first figure, the robot approaches the pallet form-up area in the Gazebo simulation environment. In the second figure, the multi-pallet detection algorithm is applied along with the global planner algorithm that navigates the robot towards the alignment state. A point-to-point visual servoing controller drives the pallet truck towards the selected pallet, as illustrated in the third figure. The last figure outlines the docking and final pick-up of the pallet with a joint-state controller.

Lifelong mapping for dynamic factory floor environments



Mapping the factory floor environments

The HR-Recycler project aims to automate the transportation of WEEE devices and disassembled components within a collaborative factory floor environment. To achieve this, different types of AGVs dedicated to specific roles during the disassembly process will be developed. To enable AGV navigation in such environments, simultaneous localization and mapping (SLAM) methods will be utilized. However, in highly dynamic environments such as factory floors with moving objects, maps built with classic SLAM soon become obsolete, and robot navigation performance degrades.

Technical Challenges

Despite the leaps of progress made in the field of mobile robotics in recent years, one major challenge that AGVs still face is long-term autonomous operation in dynamic environments. In the HR-Recycler scenario, where the AGV operates on a factory floor (see Figure 1), it has to deal with changing conditions in which other robots, workers, and moving objects such as pallets and even commodities move around the factory environment. In this scenario, with typical SLAM mapping, static objects (such as walls) constitute only a fraction of the information in the map during robot navigation. If we consider this map as the ground truth and use it while disregarding ongoing changes, it is very challenging to maintain stable robot localization, even if a robust localization filter is employed.

Figure 1 Gazebo simulation of the BIANATT S.A. (end-user) factory floor environment. Note that the majority of the existing information in the illustrated infrastructure corresponds to dynamic objects.

Robotic Solution: Life-long mapping

CERTH-ITI has developed a solution to this issue by employing a life-long mapping approach that can distinguish static and dynamic areas and handle this information accordingly during robot navigation. The ability to identify areas that exhibit high or low dynamics can improve the navigation of mobile robots as well as the process of long-term mapping of the environment. We utilized temporal persistence modeling to predict the state of cells in the life-long map by gathering sparse observations from the robot’s on-board sensors. This allows the persistence of objects in the map to be modeled and provides the system with prior knowledge regarding the occupancy of the area where the robot operates. The method allows the robot to navigate while avoiding congested areas, reducing re-planning and thus leading the robot to its target location without unnecessary maneuvering. Simulation results on life-long mapping with temporal persistence modeling are outlined in Figure 2, which illustrates the static metric map (left) and the probability of the dynamic areas obtained through temporal persistence modeling (right).
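
The per-cell idea can be sketched with a simple recursive update: each observed cell keeps a smoothed occupancy score, so cells that are only occasionally occupied (dynamic objects) end up with a low persistence score, while consistently occupied cells (walls) stay high. The smoothing rule, its parameter, and the static threshold below are simplified stand-ins for the actual temporal persistence model.

```python
# Illustrative per-cell persistence estimation for life-long mapping.
# Exponential smoothing is a simplified stand-in for the temporal
# persistence model; alpha and the threshold are assumptions.

def update_persistence(persistence, observations, alpha=0.3):
    """Blend one pass of occupancy observations into the map.

    persistence: dict cell -> score in [0, 1]
    observations: dict cell -> 1 (occupied) or 0 (free), observed cells only
    """
    for cell, occ in observations.items():
        prev = persistence.get(cell, 0.5)  # unknown cells start undecided
        persistence[cell] = (1 - alpha) * prev + alpha * occ
    return persistence

def is_static(persistence, cell, threshold=0.8):
    """Treat high-persistence cells as static structure for planning."""
    return persistence.get(cell, 0.0) >= threshold
```

A planner could then route around low-persistence (likely dynamic) regions instead of re-planning every time an object moves.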

The Visual Analytics Lab (VARLab) of CERTH-ITI

Figure 2 The outcome of life-long mapping. The left image illustrates the metric map produced by classical 2D SLAM, while the right image corresponds to the dynamic obstacle persistence probability calculated from the temporal persistence modeling.

3D modeling a virtual training system


HR-Recycler aims at developing a hybrid human-robot collaborative environment for the disassembly of waste of electrical and electronic equipment. Within this environment, humans and robots will be working collaboratively by sharing different processing and manipulation tasks.

The HR-Recycler project will implement learning paradigms in virtual reality (VR) settings, for which a recycling plant will be modeled in 3D to be displayed in a VR environment. One objective is to accurately capture the interaction of the workers with the environment, and especially with the objects present in the factory. Another objective is to allow workers to practice disassembly procedures and to interact properly with the environment. It is necessary to deliver a VR environment that is as realistic as possible and to model the procedures so that they can lead to more efficient training of the workers.

Within the project DIGINEXT is developing an effective virtual training system, targeting the WEEE recycling industry. The authoring tool (Procedure Editor) aims at easing the creation of WEEE disassembly virtual training experiences by minimizing production efforts.

Unlike a generic 3D tool, DIGINEXT’s solution does not require any specific programming or 3D modeling skills and allows for a smoother and faster workflow compared to classical solutions based on a 3D engine. This means that a field expert can create procedures for virtual training even with very limited programming skills.

The tool is used by following these simple steps:

The first step consists of 3D modeling and animation: setting the scene (building, furniture) and selecting the WEEE that will be recycled.

The 3D model is next split, and each part that requires an operation (e.g. screws) has actions attributed to it (e.g. unscrew, open, etc.). The figure below shows the list of parts for a microwave oven; for a specific screw it can be seen that it is a “screwable” part with 2 possible actions, “screw” and “unscrew” (circled in red).

The procedure is then constructed very simply by assembling boxes within the graphical environment, attributing these actions in a linear sequence that follows the processing done in the actual factories.

Once the procedure is finished, it can either be played using the mouse (see picture below) or exported following the S1000D standard and used, for example, for AI training by other members of the project’s consortium.
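The steps above can be sketched in code. This is not DIGINEXT’s internal format (the real tool exports S1000D), and all class, part and action names here are hypothetical; the sketch only illustrates the idea that parts carry their allowed actions and that a procedure is a linear list of (part, action) steps, mirroring the boxes assembled in the graphical environment:

```python
from dataclasses import dataclass, field

@dataclass
class Part:
    """A part of the split 3D model, with the actions it supports."""
    name: str
    actions: tuple  # e.g. ("screw", "unscrew") for a screwable part

@dataclass
class Step:
    """One box of the linear procedure: an action applied to a part."""
    part: Part
    action: str

    def __post_init__(self):
        # The editor can reject actions a part does not support.
        if self.action not in self.part.actions:
            raise ValueError(f"{self.part.name} does not support '{self.action}'")

@dataclass
class Procedure:
    """A linear disassembly procedure, built step by step."""
    name: str
    steps: list = field(default_factory=list)

    def add(self, part, action):
        self.steps.append(Step(part, action))
        return self  # allow chaining, like dragging boxes one after another

# Microwave-oven example from the text: a screwable part with two actions.
screw = Part("back-panel screw", ("screw", "unscrew"))
panel = Part("back panel", ("open", "close"))
proc = (Procedure("microwave disassembly")
        .add(screw, "unscrew")
        .add(panel, "open"))
```

Validating actions against each part's capabilities at construction time is what lets a field expert build procedures safely without programming: impossible steps are simply refused by the editor.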

Michael Darques, DXT

On improving the data collection process, an end user’s view


As technology advances, the processes we use need to advance with it. The main objective of these changes is continuous improvement: better working conditions and higher satisfaction for the employees who work at the factory, with reduced risks to human health and safety. The recycling of waste electrical and electronic equipment involves numerous dangerous, monotonous, and time-consuming tasks that should be replaced, and it is here that automatic processes based on robotics appear as a solution. Through the robotic solutions proposed in the HR-Recycler project, workers and robots will collaborate in a synchronized manner. The risk to workers’ health and safety from handling potentially hazardous waste will be reduced, and workers will be able to focus on tasks that raise the level of quality, value, and qualification. As a result, companies will be able to increase their recycling rates.

Images were expected to have been collected by the partners Sadako and Interecycling by now. Unfortunately, between some delays and the difficult situation caused by COVID-19, which is affecting the whole world, this stage is still to be accomplished. We are hopeful that the situation will improve soon so that this stage can be completed.

Regarding the recordings, we will collect data for the Classification step in the same way as in the previous recording campaigns at the other end-user partners, Indumetal and Bianatt. For the Disassembly step, we will record hand-held camera data, later used for the co-bot operation, as well as whole-body images used by the action recognition software.

The recordings will not be fundamentally different from those made previously at the end users mentioned above. To improve on the results already obtained, we can calibrate the camera for the Disassembly step and later use the images at Sadako to assess operational performance rather than only to build neural networks. Camera calibration would allow us to better locate the operator in space and to use all the developed software functionalities.
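As a sketch of why calibration helps locate the operator in space: once a camera's intrinsic parameters (focal lengths fx, fy and principal point cx, cy, typically estimated with a tool such as OpenCV's calibrateCamera) are known, a pixel can be backprojected to a metric position on the factory floor. The function below assumes the simplest possible geometry, a camera mounted at a known height and pointing straight down; the parameter names are illustrative and not part of any project software:

```python
def pixel_to_floor(u, v, fx, fy, cx, cy, height):
    """Map a pixel (u, v) to metric floor coordinates.

    Assumes a calibrated pinhole camera mounted `height` metres above
    the floor, with its optical axis pointing straight down. The ray
    through the pixel then hits the floor at depth Z = height, so:
        X = (u - cx) * Z / fx
        Y = (v - cy) * Z / fy
    """
    return ((u - cx) * height / fx, (v - cy) * height / fy)
```

With an uncalibrated camera, only pixel positions are available; with fx, fy, cx, cy known (and the mounting geometry measured), the same detection yields a position in metres, which is what makes operator localization and performance measurement possible.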

With this stage and these recordings, we hope to gather data to improve our existing neural networks for both the Classification and Disassembly steps, and to use some of this data to measure our operational performance. To learn more about the work developed in the area of image collection, you can consult SDK’s blog post of 26/06/2020 (https://www.hr-recycler.eu/blog/).

Ana Catarina, INT
Albert Tissot, SDK

Monitoring and reviewing the impact assessment



In HR-Recycler the consortium aims to develop a collaborative human-robot environment to aid workers in performing heavy tasks in recycling plants for waste electrical and electronic equipment (WEEE). The objective is for workers and robots to “collaborate” in a joint and synchronised manner. What are the impacts of such a development? How might it affect workers’ rights and society as a whole? One main tool to ensure legal and ethical compliance is the performance of an impact assessment.

An impact assessment evaluates the origin, nature and severity of the impacts that the (processing) operations, real or hypothetical, of a specific project entail. The ‘architecture’ for impact assessment typically consists of three main elements: the ‘framework’, the ‘method’ and the ‘template’. A framework constitutes an ‘essential supporting structure’ or organizational arrangement for something; in this context, it concerns the policy for impact assessment and defines and describes its conditions and principles. In turn, a method, a ‘particular procedure for accomplishing or approaching something’, concerns the practice of impact assessment and defines the consecutive and/or iterative steps to be undertaken to perform such a process. A method corresponds to a framework and can be seen as a practical reflection thereof. These two elements are identified in D2.2, which is public. Lastly, a template for the assessment process can be seen as a practical implementation of a method (i.e. a procedure comprising consecutive and/or iterative steps) for impact assessment, itself reflecting a framework (i.e. conditions and principles).

Building on the TARES impact assessment (Truthfulness, Authenticity, Respect, Equity and Social Responsibility), discussed in a previous blog post, VUB continues to identify the legal, ethical and societal impacts that the project’s technology might entail, and to provide recommendations. This exercise requires broadening the picture of the project to include relevant societal concerns, instead of focusing solely on legal matters (e.g. data protection).

To produce the first version of the TARES impact assessment, an initial template was sent out in which all partners had to answer specific questions from a technical and end-user point of view. The impact assessment process is meant to be receptive and collaborative, i.e. technology developers work alongside the assessors’ team. During this first version, several issues were identified: for example, workers are considered ‘vulnerable subjects’, and data protection authorities do not deem consent to participate in a research project to be freely given. As a recommendation, an independent observer role to collect consent forms is therefore warranted. The results of the report are confidential for the time being, due to legitimate secrecy, and are presented in D2.3. However, a summary of the process may be published in the future.

The next tasks mostly comprise monitoring, review and update, to be carried out periodically through three deliverable reports. Continuing the TARES impact assessment aims at anticipating risks and at adopting a mitigation strategy with recommendations for the further development of the technology. As the system is integrated, tested and evaluated, VUB will repeat this impact assessment in three phases, every 6 months, reporting after each phase of the project pilots. This means that a similar questionnaire, duly adapted, will be shared with all partners, so as to closely monitor the progress of each phase and compliance with the mitigation strategy and recommendations; the latter will be adjusted to take into account the responses received. Assessors are currently revising their impact assessments, because circumstances do change and upcoming issues need to be addressed appropriately. In this way, the impact assessment aspires to be a ‘living instrument’ that evolves with the project and is able to assess ongoing changes and impacts.

To that end, VUB, and specifically its d.pia.lab, recently released for public consultation its third policy brief, which proposes a template for a report on the process of data protection impact assessment (DPIA); this reflects best practice for impact assessment and, at the same time, conforms to the requirements of the General Data Protection Regulation (GDPR). By using the template, as well as its subsequent modifications after the public consultation, VUB aims to better structure and perform the monitoring and review phase (including updates) of the assessment process in HR-Recycler.

If you wish to receive further information, do not hesitate to subscribe to the HR-Recycler newsletter here.

Nikos Ioannidis & Sergi Vazquez Maymir

August 2020