INDUMETAL on Human-Robot collaboration improving WEEE handling


Human-Robot collaboration improving microwave oven and PC tower handling for dismantling

Microwave ovens and computer towers are the two cases studied at Indumetal Recycling within the HR-Recycler project, which seeks both the best process techniques and the prevention of potential injuries in the disassembly and decontamination of PC towers, microwave ovens, emergency lamps, and LED and LCD screens.

Image 1. Dismantled microwave oven


Removing the condenser from a microwave oven before carrying out any other device management operation is a legal obligation, given that condensers contain highly toxic substances. Due to the variety of microwave oven designs on the market, this extraction is currently done exclusively by hand. To perform this disassembly operation quickly and effectively, the operator needs extensive experience.

To remove the housing that covers the microwave, it is usually sufficient to unscrew a limited number of elements. However, once the housing is unscrewed, accessing the interior of the oven is not an easy task, so the help of a pick or a lever is needed to remove it.

Image 2. Dismantled PC tower


In the case of PC towers, it is mandatory to remove the battery found inside, as well as any PCB larger than ten square centimeters. To access these components, it is necessary to remove the external metallic housing. As with microwave ovens, there is a wide variety of PC tower designs, so this manipulation is also currently done manually.

Additionally, although it is not compulsory, Indumetal’s operators remove the hard drive and the disk from PC towers, since these components contain high-value materials that can be recovered if they are treated separately.

In these types of processes, the operator must move the equipment repeatedly to be able to access the inner components. Robots can collaborate in these heavier and more dangerous tasks, reducing repetitive strain and accidental injuries in humans. However, these maneuvers show that the experience and knowledge of the operator are key to detecting the different components of WEEE. The dismantling processes currently studied in HR-Recycler are clear examples where a hybridization between human work and the mechanical work of a robot would offer numerous benefits.

ROBOTNIK explains the aspects involved in safety signalling


Industrial robots have come front and center on the international stage as they’ve become widespread in the industrial sector. As they’ve become more powerful, more advanced and more productive, the need for robot safety has increased.

There are a number of ways manufacturers can introduce safety measures into their automated operations, and the type and complexity of these measures vary by robotic application. To make AGVs safer, there are certain safety rules and standards that these collaborative robots must comply with; in Europe they are found in EN ISO 3691-4:2020 “Industrial trucks — Safety requirements and verification — Part 4: Driverless industrial trucks and their systems” and in ISO 12100:2010, clause 6.4.3.

For Robotnik, as an experienced robot manufacturer, and within the collaborative environment of the HR-Recycler project, this aspect is especially important, since humans and robots will be working side by side. The solution proposed for routing materials inside a factory has to operate in a safe manner. In this case, the robots designed are the RB-Kairos (mobile robotic manipulator) and the RB-Ares (pallet truck), and as AGVs their main obligation is to show their intention to move, elevate or manipulate. To ensure correct operation within the complex framework of this project, Robotnik has equipped its robots with sensors and signallers that allow the robot to proceed safely and show its intentions in advance.

This post aims to give the reader a brief description of what the standards require, why, and how all of their premises will be met.

First of all, what do the standards include? On warning systems, they say:

  1. When any movement begins after a stop condition of more than 10 seconds, a visible or acoustic warning signal will be activated for at least 2 seconds before the start of the movement.
  2. A visible or acoustic warning signal will be activated during any movement.
  3. If the human detection means are active, the signal will be different.
  4. When a robot driving autonomously changes its direction from a straight path, a visible indication of the direction to be taken will be given before the direction changes.
  5. When the lift is active, there must be special signage.

The solution proposed is a two-step software architecture that manages the signals of the robot, explained after the diagram (yellow cells):

The robot_local_control is a manager node: it has information about the status of the whole robot, that is, the status of the elevator, the active goal, whether the mission has ended, etc. On the right side there is a group of nodes that manage the movement of the robot with a level of priority:

  • Robotnik_pad_node: the worker uses a PS4 pad to control the robot and this node transmits the orders (non-autonomous mode).
  • Path planning nodes: such as move_base, which controls the robot; we refer to this as autonomous mode.

Robotnik has installed on its AGVs two ways to alert facility users: acoustic devices and light indicators, driven by the acoustic_safety_driver and leds_driver.

As you can see, there are two steps linking the top and bottom parts: a node that transforms the movement into signals to show the intention of the robot, and another that orchestrates both signal types and manages the requirements of the standard.

The turn_signaling_controller aims to satisfy the first and fourth requirements of the standard, depending on the robot mode (autonomous or non-autonomous).

In non-autonomous mode, and as the norm says, the motion depends on appropriately authorised and trained personnel, so it is enough to show that the robot is moving by reading the movement command and checking the velocity applied.

In autonomous mode the robot navigates to a goal point through a path calculated by the planner; furthermore, it steers the AGV to avoid obstacles dynamically, and for this reason it is important to alert workers at every moment. The pipeline goes as follows:

This is a very brief description of the function: it keeps the plan in mind and recalculates at the same time as the planner does, in order to show the most up-to-date prediction of motion.

Last but not least, the robot_signal_manager aims to satisfy the remaining requirements, since it has access to the robot status. It shows a light or acoustic signal 2 seconds before motion, it gives priority to the emergency signals (consistent with the behaviour of the robot: a red signal means that the robot will be stopped), and the signals that are not mutually exclusive are shown using beacons or acoustic signals.
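As an illustration only (not Robotnik's actual code), the priority logic just described can be sketched as a small decision function. The function name, arguments and return values are assumptions for the sketch; the thresholds come from the warning-system requirements listed earlier (a warning at least 2 s before motion resumes after a stop longer than 10 s, with emergency signals taking priority):

```python
WARNING_BEFORE_MOTION_S = 2.0   # minimum warning time before motion starts
LONG_STOP_S = 10.0              # stop duration that triggers the pre-motion warning

def select_signal(emergency, stopped_for_s, motion_requested, warning_elapsed_s):
    """Return (signal, motion_allowed) for the current robot status."""
    if emergency:
        return "red", False                  # red signal: the robot will be stopped
    if motion_requested and stopped_for_s > LONG_STOP_S:
        if warning_elapsed_s < WARNING_BEFORE_MOTION_S:
            return "warning", False          # beacon/acoustic warning, motion held
        return "moving", True                # warning period elapsed, motion allowed
    if motion_requested:
        return "moving", True                # short stop: signal motion directly
    return "idle", False
```

In a real node this function would be called on every status update from robot_local_control, with its output forwarded to the acoustic and LED drivers.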

The occupied zone is one of the non-exclusive signals: the robots have some extra beacons that blink red when there is something in the protective zone (close to the robot's intended motion, inside the critical zone) and yellow when there is something in the warning zone (near the protective zone).
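The zone-to-colour mapping described above is simple enough to sketch directly; this is a hypothetical illustration of the rule, not the driver code itself:

```python
def beacon_colour(protective_zone_occupied, warning_zone_occupied):
    """Map zone occupancy to a beacon colour, protective zone taking priority."""
    if protective_zone_occupied:
        return "red"       # something inside the critical/protective zone
    if warning_zone_occupied:
        return "yellow"    # something approaching, in the warning zone
    return None            # beacons off
```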

To summarize: safety is not only stopping the robot or avoiding a crash when human-robot collaboration takes place. With the development of these nodes, Robotnik aims not only to decrease the probability of accidents and to comply with the safety ISO premises, but also to help workers feel more comfortable with the AGV's decisions and to bring human-robot collaboration closer by showing clear signals about how the robot will behave.

CERTH explains the visual affordances concept


“The affordances of the environment are what it offers the animal, what it provides or furnishes… It implies the complementarity of the animal and the environment.” – James J. Gibson, 1979

Every object in our world has its own discrete characteristics, derived from its visual properties and physical attributes. These features are effective for recognizing objects and classifying them into different categories, and they are therefore widely used by vision recognition systems in the research community. However, these properties are unable to indicate how an object can be used by a human: visual and physical properties cannot provide any clue about the set of potential actions that can be performed in a human-object interaction.

Affordances describe the possible set of actions that an environment offers an actor [2]. Thus, unlike the aforementioned object attributes, which paint a picture of the object as an independent entity, affordances are capable of implying functional interactions of object parts with humans. In the context of robotic vision, taking object affordances into account is vitally important in order to create efficient autonomous robots that are able to interact with objects and assist humans in daily tasks.

Affordances can be used in a wide variety of applications. Among others, affordances are useful for anticipating and predicting future actions, because they represent the set of possible actions that may be performed by a human being. Additionally, by defining a set of possible and potential actions, affordances provide beneficial clues not only for efficient activity recognition but also for functionality validation of objects. Last but not least, affordances can be characterized as an intuition about objects' value and significance, enhancing scene understanding and traditional object recognition approaches.

In conclusion, affordances are powerful. They provide the details needed to make computer vision systems able to imitate humans' object recognition abilities. Moreover, affordances provide a very effective and unique combination of features that seems able to enhance almost every computer vision system.

Illustration 1: Affordances Examples [1]


[1] Zhao, Xue & Cao, Yang & Kang, Yu. (2020). Object affordance detection with relationship-aware network. Neural Computing and Applications.

[2] Mohammed Hassanin, Salman Khan, and Murat Tahtali. 2021. Visual Affordance and Function Understanding: A Survey.

COMAU presents a new paradigm in collaborative robotics


A new paradigm in collaborative robotics

Comau introduced its Racer-5 COBOT to the market in March 2021: a new paradigm in collaborative robotics which meets the growing demand for fast, cost-effective cobots that can be used in restricted spaces and in different application areas. Countering the belief that collaborative robots are slow, Racer-5 COBOT is a 6-axis articulated robot that can work at industrial speeds of up to 6 m/s. With a 5 kg payload and 809 mm reach, it ensures optimal industrial efficiency while granting the added benefit of safe, barrier-free operations. Furthermore, the cobot can instantly switch from collaborative mode to full speed when operators are not around, letting its 0.03 mm repeatability and advanced movement fluidity deliver unmatched production rates.

Racer-5 COBOT enables systems integrators and end users to automate even the most sophisticated manufacturing processes without sacrificing speed, precision or collaborative intelligence. With this powerful industrial robot operating in dual modes, customers are able to install a single, high-performance solution rather than having to deploy two distinct robots. With advanced safety features fully certified by TÜV Süd, an independent and globally recognized certification company, the cobot can be used within any high-performance line without the need for protective barriers, which effectively reduces safety costs and floorspace requirements.

Racer-5 COBOT also features integrated LED lighting to provide real-time confirmation of the workcell status. Finally, electrical and air connectors are located on the forearm to grant greater agility and minimize the risk of damage. All this enables Racer-5 COBOT to ensure higher production quality, better performance, faster cycle times and reduced capital expenditures.

The new Racer-5 COBOT delivers the speed and precision the small payload collaborative robotics market was missing, adding advanced safety features to the standard Racer-5 industrial robot and obtaining a fast, reliable and user-friendly cobot that can be used in any situation where cycle times and accuracy are paramount.

Made entirely in Comau (Turin, Italy), Racer-5 COBOT has a rigid construction that facilitates higher precision and repeatability year after year, making it particularly suitable for assembly, material handling, machine tending, dispensing and pick and place applications within the automotive, electrification and general industry sectors. In addition, the compact cobot can be easily transported and installed almost anywhere, helping the users optimize their processes and protect their investment.

The HR-Recycler project contributed to the development of this new product, which can be considered an exploitable result of the project itself. It will be applied, in particular, in the WEEE disassembly scenario, testing and validating the collaborative features with the integration of specific disassembly tools such as grippers, industrial screwdrivers and grinders.

Personal data protection in the European Union


Personal data protection in the European Union: questions arising from the data protection impact assessment theory and practice


The European Union (EU)’s General Data Protection Regulation (GDPR) has brought to the fore a plethora of novel solutions aiming at, inter alia, better safeguarding the interests of individuals whenever their personal data are being handled. Amongst these novelties is an obligation, imposed on data controllers, to carry out – before these data are handled – a process of data protection impact assessment (DPIA). This process is required for data handling operations capable of presenting a “high risk” to the “rights and freedoms of natural persons”, in order to “ensure the protection of personal data and to demonstrate compliance” with the law (Article 35 GDPR).
The HR-Recycler consortium functions under the TARES framework (Truthfulness, Authenticity, Respect, Equity and Social Responsibility), which comprises, among others, the requirement to conduct a data protection impact assessment pursuant to Article 35 of the GDPR. An impact assessment, in general, implies a proactive approach that contributes to informed decision-making by considering the potential consequences of a project, direct and indirect, for the rights and freedoms of natural persons before they occur.
However, the DPIA obligation as such has seldom been the object of any judicial or extra-judicial proceedings. Due to the minimalistic content of its main provisions and the occasional vagueness of its terminology, the exact process of DPIA remains uncharted. Academics, practitioners and policy makers still struggle to identify the modalities of the DPIA process, in theory and in practice. Uncertainties in this legal requirement have already raised a considerable number of questions.
The project's legal and ethical partner, Vrije Universiteit Brussel (VUB), and specifically its d.pia.lab, held a workshop some months ago to which several experts were invited. Its subject was to clarify the relationship between the DPIA requirement and (extra-)judicial proceedings, and its aim was to map and subsequently analyse possible legal questions concerning DPIA that might emerge in legal proceedings, including preliminary questions to the Court of Justice of the EU (CJEU). A report from this workshop has recently been published, enumerating several questions that could arise out of this obligation.

Indicative list of questions
The questions were posed in a comprehensive, extended form, as if they were to be addressed to a higher court of law, which would then propose a uniform interpretation of the rules, relating mostly to the scope of Articles 35 and 36 of the GDPR. In their majority, the topics of the questions formulated by the participants had to do with:

  • the method of impact assessment
  • the criteria which make it obligatory to perform or not
  • the concept of the risk to a right
  • the level of transparency of the DPIA
  • the involvement of technical partners
  • the balancing act, between benefits obtained and negative impacts
  • the representativeness of the data subjects and its modalities
  • the probability of infringement, compared to actual infringement
  • the semantics of the proportionality test
  • the clustering of data processing operations
  • the level of objectivity required
  • the concepts of data protection by design and default
  • the value of the DPIA as evidence in the court
  • the extent to which a court can scrutinize the content of the DPIA
  • the role of risk assessment techniques
  • the role of data protection authorities
  • the role of public authorities in conducting a DPIA
  • the role of the European Data Protection Board (EDPB)
  • the development of standards and techniques (e.g., tailored-down templates)
  • the liability arising from publishing the DPIA
  • the resources allocated for the DPIA process by data controllers

Before the conclusion of the workshop, some of the questions were discussed and a common approach was adopted by looking at the experience of impact assessment in other areas and jurisdictions. For other questions, exclusive to the DPIA, it was decided that further clarification is still needed (e.g., assessing “high risk” to the “rights and freedoms of natural persons”).

Sadako’s update on IBEC integration tests


In the last week of October 2020, Sadako and IBEC hit an important milestone in the progress of the HR-Recycler project as the first integration tests were organized between the two partners. The aim of these tests was to assess the joint functioning of the computer vision modules developed by Sadako with the human-robot interaction modules developed by IBEC, in the simulated setting of a worker dismantling an electronic waste object on a disassembly workbench. More specifically, the functionalities tested were how well the worker can interact with the workbench robot through a predetermined set of gestures, and how well the robot changes its behavior as a function of the human-robot distance. These tests were the subject of a post by IBEC on this blog last month; we at Sadako thought we could add a little insight from the computer vision perspective.

Software integration:

During the tests, production-ready versions of the software were employed, using ROS as an interface both to acquire the images from the Realsense cameras and to output the inferred information later processed by IBEC. A high-level architecture diagram of the software can be seen in the image below:

Both the position and action recognition software obtain images from the camera through ROS, process them and send them to the detectors. The detections are then handed back to the pose and action detection software and sent to IBEC via a ROS topic.
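The flow just described can be sketched as a minimal pipeline, with plain Python callables standing in for the ROS image topic and detection topic; `make_pipeline` and the detector stub are illustrative names, not the project's actual code:

```python
def make_pipeline(detector, publish):
    """Wire an inference callable to an output callable, as a topic callback would."""
    def on_image(image):
        detections = detector(image)   # pose or action inference on the frame
        publish(detections)            # forwarded to IBEC (in ROS: a topic publish)
        return detections
    return on_image
```

In the real system the callback would be registered on the camera's image topic, and `publish` would wrap a ROS publisher.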



Action recognition software:

Detecting specific actions in a video in real time required a different type of neural network from the one usually used at Sadako Technologies. Indeed, the standard neural networks employed in computer vision extract spatial features from images to detect a specific target object. To infer information from a video feed, temporal features need to be examined in addition to the spatial ones (i.e., how the features evolve through time). In 2018, Facebook research published VMZ, a neural network architecture with the particularity of factorizing convolutions on video data into spatial convolutions (examining one area of a frame) and temporal convolutions (examining how one singular pixel evolves over time). This allows the detection of time-dependent features, such as, in this particular case, gestures. The network uses state-of-the-art deep learning techniques (residual learning, as in ResNets) with some changes and adjustments to the architecture to make it able to detect temporal features.
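To make the factorization idea concrete, here is a tiny numpy sketch (not the VMZ code, which uses learned kernels and many channels) of splitting a video convolution into a spatial part applied per frame and a temporal part applied per pixel position:

```python
import numpy as np

def conv2d_valid(frame, k):
    """Plain 2D 'valid' convolution of one greyscale frame with kernel k."""
    kh, kw = k.shape
    H, W = frame.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(frame[i:i + kh, j:j + kw] * k)
    return out

def conv_2plus1d(video, k_s, k_t):
    """Factorized video convolution: spatial per frame, then temporal per pixel.

    video: (T, H, W) clip; k_s: 2D spatial kernel; k_t: 1D temporal kernel.
    """
    # 1) spatial convolution applied to every frame independently
    spatial = np.stack([conv2d_valid(f, k_s) for f in video])
    # 2) temporal convolution applied at every pixel position across frames
    T, kt = spatial.shape[0], len(k_t)
    temporal = np.stack([sum(k_t[d] * spatial[t + d] for d in range(kt))
                         for t in range(T - kt + 1)])
    return temporal
```

The second stage is what lets the network react to motion: a gesture shows up as a characteristic evolution of the spatial features over time.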


The other tested functionality was the ability of the robot to change its behavior (i.e., movement speed) when the worker gets closer to the robot than some specific distance. To develop this functionality, the OpenPose software was used to retrieve the worker's skeleton joint coordinates in the RGB image. Those joint coordinates were then matched with the depth image given by the camera to locate the worker in space relative to the camera. However, knowing where a worker is relative to the camera is not enough to measure the worker-robot distance. To output the worker-robot coordinates, a calibration procedure was developed by Sadako in which the camera's position is measured, using the RGB and depth information, with respect to a marker visible in the scene. The robot's position with respect to this marker is also measured; combining both measurements allows the code to return an accurate measurement of the human-robot distance in real time.
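The calibration idea can be sketched with homogeneous transforms: once the camera pose and the robot pose are both expressed relative to the common marker, a worker position measured in the camera frame can be moved into the robot frame. The function names and the numbers below are illustrative, not Sadako's implementation:

```python
import numpy as np

def transform(R, t):
    """Build a 4x4 homogeneous transform from rotation R (3x3) and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def worker_robot_distance(T_marker_cam, T_marker_robot, p_cam):
    """Distance between the worker (seen by the camera) and the robot.

    T_marker_cam: camera pose in the marker frame
    T_marker_robot: robot pose in the marker frame
    p_cam: worker position (x, y, z) in the camera frame
    """
    p_marker = T_marker_cam @ np.append(p_cam, 1.0)      # camera frame -> marker frame
    p_robot = np.linalg.inv(T_marker_robot) @ p_marker   # marker frame -> robot frame
    return float(np.linalg.norm(p_robot[:3]))
```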

The integration tests with IBEC were successful despite being organized against the ticking clock of rising Covid-19 cases, which threatened to limit access to the facilities and the ability to gather teams from different companies. Further tests are currently being organized between Sadako and IBEC to test improved versions of both parties' software, taking into account the feedback gathered during the first integration test session.

CS GROUP’s HR Building Editor tool for the creation of 3D Factory models


The HR-Recycler project deals with activities related to ‘hybrid human-robot recycling activities for electrical and electronic equipment’ operating in an indoor environment. One of the tasks of the project focuses on the development of a “hybrid recycling factory for electrical and electronic equipment”. To this end, CS GROUP is developing a 3D Building Editor that allows people to rapidly design a 3D model of their factory. The main objective of this building editor is to provide an easy-to-use 3D modelling tool. It can produce multi-storey 3D models following BIM practice and uses the widely adopted open Industry Foundation Classes (.ifc) file format for interoperability.

During the project we have worked to provide a tool that allows the 3D plan of the factory to be designed in a few clicks, containing the essential elements such as walls, windows, floors and stairs.

For robot navigation purposes, the tool was designed to enable the creation of a “navigation mesh”, used to assign a specific state to an area, for example to help AGV navigation. This option makes it possible to distinguish between different areas in order to avoid human-robot collisions.

As shown in Figure 1 and Figure 2, the tool allows the user to quickly draw a 3D plan and design windows, walls, doors and even specific areas that represent the robot navigation area, while offering two views to the user: one in 2D and a second in 3D.

For example (Figure 3), we have implemented a very simple process to draw walls: the user first selects the “Wall” tool in the top toolbar (Figure 2), then just has to position the first point of the wall and add the following ones. Once created, a wall can be edited unless its position has been secured and locked.

In the same way, windows and doors can easily be placed in the 2D view by selecting the right icon in the toolbar. An element is highlighted in red if it cannot be placed at the selected location.

Following the same method, storeys can easily be added and managed within the building editor by using the “Buildings” menu, as shown in Figure 1.

Furthermore, as described above, we have worked to allow the user to create the navigation mesh (Figure 4) as soon as the factory floor model is created. Through the tool, the user can draw areas, shown in red in the 2D view and in green in the 3D view in Figure 4, that represent the navigation mesh. The robot will thus be able to know the zones where it is authorized to circulate. These zones, once created with the building editor, can be extracted to be interpreted by the FFM and the factory floor orchestrator during the factory setup export.
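As a minimal sketch of how such exported zones could be consumed (illustrative only; the actual FFM/orchestrator interface is not shown here), each zone can be treated as a polygon and a target position checked against the authorized areas with a standard ray-casting point-in-polygon test:

```python
def point_in_polygon(x, y, polygon):
    """Ray-casting test: is (x, y) inside the polygon given as [(x, y), ...]?"""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge crosses the horizontal ray at height y
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def position_authorised(x, y, nav_zones):
    """True if the point lies inside at least one authorized navigation zone."""
    return any(point_in_polygon(x, y, zone) for zone in nav_zones)
```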

Once the 3D model is completed, the user can export it to the factory floor configurator and the factory floor orchestrator to allow the positioning of the different POIs and for other orchestration purposes.

Comau explains the development of a novel ROS interface


COMAU and ROS driver for robot operations

Due to the need for a common platform based on the ROS environment for all the partners of the consortium (and subsequently for customers or end users who want to use the same solution), Comau has worked on a novel ROS interface for the robots involved in WEEE management (the classification and disassembly scenarios) that could be extended to the entire range of its robotics portfolio. In particular, the new ROS interface is able to manage both joint and Cartesian positions from the robot to the external world, publishing the appropriate topics through a TCP/IP channel.

Two different modalities have been developed for the two specific cases related to classification and disassembling scenarios:

  1. For the classification phase, the vision system able to detect the parts in the box has been integrated through a customized ROS driver, permitting the data from the vision system (Cartesian positions) to be sent to the controller, as shown in Picture 2.
  2. In the disassembly case, the need is for a high frequency rate in order to correct the robot motion with feedforward control using the data of a force/torque sensor mounted on the flange of the robot. In this case, a frequency of 500 Hz permits smooth correction of the robot path for delicate disassembly tasks such as continuous, in-contact processes (e.g. grinding or cutting). A specific option, called SENSOR_TRACKING, has been integrated into the ROS architecture; it reads and writes the sensor data in real time (up to 2 ms) and sends it to the controller for motion execution.
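To illustrate the kind of correction a 500 Hz (2 ms) sensor-tracking channel makes possible, here is a hypothetical sketch of one cycle of a compliance-style path correction; the function, gain and units are assumptions for illustration, not Comau's controller logic:

```python
CYCLE_S = 0.002    # 2 ms cycle, i.e. a 500 Hz update rate
GAIN = 0.0001      # illustrative compliance gain, metres of offset per newton

def corrected_setpoint(nominal, force_error):
    """Shift one commanded path point by the force/torque error, each cycle.

    nominal: (x, y, z) path point in metres
    force_error: (Fx, Fy, Fz) sensor reading minus target contact force, in N
    """
    return tuple(p + GAIN * f for p, f in zip(nominal, force_error))
```

Run once per 2 ms cycle, small offsets like this let the robot keep a steady contact force during grinding or cutting.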

IBEC presents a gesture-based communication protocol to enhance human-robot collaboration




Enhancing human-robot collaboration in industrial environments via a gesture-based communication protocol.

One of the core principles behind the HR-Recycler project is to develop new ways of enhancing human-robot collaboration in industrial environments. A key requirement for establishing an efficient collaboration between humans and robots is the development of a concrete communication protocol. In particular, human workers require a fast way to communicate with the robot, as well as a mechanism to control it when necessary.

An obvious choice would be to issue voice commands. However, the industrial recycling plant in which the human-robot interaction will take place is noisy, and any communication protocol that relies on sound will face many problems. A communication protocol based on facial recognition is also ruled out, since the workers need to wear protective masks in these contexts. Using a set of buttons or a tablet to send information to the robot can be a good solution. However, these mechanisms cannot be the only channel of communication, since the worker needs a fast and intuitive communication channel to resort to even while at the workbench or handling a tool.

SADAKO has built a replica of a workbench in their premises in which the gesture recognition was tested.

To achieve this, we have developed a non-verbal communication protocol based on gestures that will serve as an input for the robots. We have identified the following messages for which gestures will be employed: start (or resume a previously paused action), pause (current action), cancel (current action), sleep (robot goes to idle mode), point (directional point to focus attention), wave (hello), no, and yes.

Ismael performing the Vulcan salutation gesture that means “Live long and prosper”. Probably the most popular way to say “hi” between two rational agents (or at least the coolest, according to science).

To choose which gestures will represent each of the communicated messages, we need to consider two things. First, gestures should be easy to remember, so human workers will not need training sessions to learn them; we therefore avoid highly complex gestures. Second, they should not be too simple or be gestures largely employed by humans during everyday communication, to avoid situations where workers spontaneously perform them, for example while interacting with a co-worker.

Upon detection of the “hammering” action, the robot signals a blue light. This test served to illustrate that the gesture was correctly identified and received by the robot.

For the technical implementation of such a communication system, two partners of the HR-Recycler consortium have joined efforts. On one side, SADAKO has developed a gesture detection algorithm to correctly identify in real time each of the communication gestures defined in the protocol. On the other side, IBEC has developed the concrete non-verbal communication repertoire, as well as an interaction-control module that integrates the information coming from SADAKO's gesture recognition software and transforms it into a specific action command that is issued to the robot.

Óscar performs a “wave” gesture that is correctly identified by the gesture recognition software developed by SADAKO.

The first physical integration session between the two partners took place last month at SADAKO's premises to perform the initial tests of the gesture-based HRI communication protocol. There, a team composed of the IBEC and SADAKO groups tested the real-time detection of several of the proposed gestures. They also verified that the interaction manager was receiving the information on the identified gestures and correctly converting it into the corresponding robot commands.

Robotnik explains how to use MoveIt to develop a robotic manipulation application



The European Commission-funded HR-Recycler project aims at developing a hybrid human-robot collaborative environment for the disassembly of electronic waste. Humans and robots will work collaboratively, sharing different manipulation tasks. One of these tasks takes place in the disassembly area, where electronic components are sorted by type into their corresponding boxes. To speed up the component sorting task, Robotnik is developing a mobile robotic manipulator which needs to pick boxes filled with disassembled components from the workbenches and transport them either to their final destination or to further processing areas. MoveIt is an open-source robotic manipulation platform which allows you to develop complex manipulation applications using ROS. Here we present a brief summary of how we used MoveIt functionalities to develop a pick and place application.

Figure 1: Pick and Place task visual description.

We found MoveIt to be very useful in the early stages of developing a robotic manipulation application. It allowed us to decide on the environment setup: whether our robot is capable of performing the manipulation actions we need it to perform in that setup, how to arrange the setup for the best performance, and how to design the components of the workspace the robot has to interact with so that they allow the correct completion of the manipulation actions needed in the application. Workspace layout is very easy to carry out in MoveIt, as it allows you to build the environment using mesh objects previously designed in any CAD program and lets your robot interact with them.

One of MoveIt’s strengths is that it can plan towards any goal position while not only taking the environment scene into account by avoiding objects, but also interacting with the scene by grabbing objects and including them in the planning process. Any Collision Object in the MoveIt scene can be attached to a desired robot link; MoveIt will then allow collisions between that link and the object, and once attached, the object moves together with the robot’s link.
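The behaviour of attached collision objects can be illustrated with a minimal sketch. This is not the MoveIt API; all class, method, and link names below are illustrative assumptions, reduced to the two effects described above: attaching disables collision checking between the link and the object, and the object then follows the link’s motion.

```python
# Minimal conceptual sketch (NOT the MoveIt API) of an attached
# collision object: once attached, collisions between the link and
# the object are allowed, and the object moves with the link.
class Scene:
    def __init__(self):
        self.objects = {}   # object name -> (x, y, z) position
        self.attached = {}  # object name -> robot link it is attached to

    def add_collision_object(self, name, position):
        self.objects[name] = position

    def attach_object(self, name, link):
        # From now on, collisions between `link` and `name` are allowed.
        self.attached[name] = link

    def move_link(self, link, dx, dy, dz):
        # Attached objects follow the motion of their parent link.
        for name, parent in self.attached.items():
            if parent == link:
                x, y, z = self.objects[name]
                self.objects[name] = (x + dx, y + dy, z + dz)

    def collision_allowed(self, link, name):
        return self.attached.get(name) == link

scene = Scene()
scene.add_collision_object("box", (1.0, 0.0, 0.5))
scene.attach_object("box", "gripper_link")
scene.move_link("gripper_link", 0.0, 0.0, 0.25)
print(scene.objects["box"])                        # → (1.0, 0.0, 0.75)
print(scene.collision_allowed("gripper_link", "box"))  # → True
```

In MoveIt itself, the same idea is expressed through the planning scene interface, which mutates the allowed-collision matrix when an object is attached.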

Figure 2: MoveIt Planning Scene with collision objects (green) and attached objects (purple).

This functionality helped us determine from the very beginning whether our robot arm could reach objects on a table of a certain height, how far from the table the robot should position itself to reach the objects properly, and whether there is enough space for the arm movements needed to manipulate objects around the workspace area. It also helped us design the boxes needed for the task, letting us choose a box size that allows the robot arm to perform the necessary manipulation movements within the restricted working area.
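The kind of reachability question MoveIt answers through full inverse kinematics can be illustrated, in a heavily simplified form, with a 2-link planar arm: a point is reachable if and only if its distance from the shoulder lies between |l1 − l2| and l1 + l2. The link lengths and distances below are hypothetical values, not parameters of the actual robot.

```python
import math

# Simplified reachability check for a 2-link planar arm with
# hypothetical link lengths l1 and l2 (metres). MoveIt performs the
# same kind of check via full inverse kinematics on the real arm.
def reachable(target, l1=0.6, l2=0.4, base=(0.0, 0.0)):
    d = math.dist(target, base)
    return abs(l1 - l2) <= d <= l1 + l2

# A point 0.9 m away is within reach; 1.2 m exceeds the arm's span.
print(reachable((0.9, 0.0)))   # → True
print(reachable((1.2, 0.0)))   # → False
```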

However, MoveIt’s main use is motion planning. MoveIt includes various tools for planning motion to a desired pose with high flexibility: you can adjust the motion planning algorithm to your application to obtain the best performance. This is very useful, as it allows you to restrict the robot’s allowed motion to very specific criteria, which is important in an application like ours, with a restricted working space where the robot needs to manipulate objects precisely in an environment shared with humans.
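The core idea of collision-aware planning can be sketched with a toy stand-in: a breadth-first search on a 2D grid where occupied cells play the role of collision objects. MoveIt’s planners (via OMPL) are sampling-based and operate in the robot’s joint space, but the principle is the same: find a path from start to goal that never enters a colliding state. Everything below is an illustrative simplification.

```python
from collections import deque

# Toy stand-in for a collision-aware planner: breadth-first search
# on a 2D grid, treating occupied cells as collision objects.
def plan(start, goal, obstacles, size=5):
    frontier = deque([(start, [start])])
    seen = {start}
    while frontier:
        (x, y), path = frontier.popleft()
        if (x, y) == goal:
            return path
        for cell in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nx, ny = cell
            if (0 <= nx < size and 0 <= ny < size
                    and cell not in obstacles and cell not in seen):
                seen.add(cell)
                frontier.append((cell, path + [cell]))
    return None  # no collision-free path exists

obstacles = {(1, 0), (1, 1), (1, 2), (1, 3)}  # a wall with a gap at y = 4
path = plan((0, 0), (4, 0), obstacles)
print(path is not None and (1, 4) in path)  # planner detours via the gap
```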

Figure 3: Planning to a desired goal taking into account collisions with scene.

One of our biggest motion requirements is that the robot arm must keep the boxes parallel to the ground while manipulating them, as they are filled with objects that need to be carried between working stations. MoveIt handles this easily, since it allows you to plan using constraints. Various constraints can be applied; the ones we found most useful for our application were joint constraints and orientation constraints. Orientation constraints restrict the desired orientation of a robot link; they are very useful for keeping the robot’s end effector parallel to the ground, which is needed to manipulate the boxes properly. Joint constraints limit the position of a joint to a certain bound; they are very useful for shaping the way you want your robot to move. In our application, they allowed us to move the arm while maintaining a relative position between the elbow and shoulder, producing more natural movements and avoiding potentially dangerous motions.
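What an orientation constraint enforces can be sketched independently of MoveIt: every waypoint’s end-effector orientation, given as a quaternion (x, y, z, w), must stay within a tolerance of the level orientation. The tilt is the angle between the rotated z-axis and the world z-axis. The 5-degree tolerance below is an assumed value, not taken from the HR-Recycler configuration.

```python
import math

# Tilt of a quaternion-rotated frame: angle between the rotated
# z-axis and the world z-axis. The rotated z-axis' z-component is
# the (3,3) entry of the rotation matrix: 1 - 2(x^2 + y^2).
def tilt(q):
    x, y, z, w = q
    zz = 1.0 - 2.0 * (x * x + y * y)
    return math.acos(max(-1.0, min(1.0, zz)))

# An orientation constraint is satisfied when every waypoint stays
# within the tolerance (assumed value) of the level orientation.
def satisfies_constraint(waypoints, tol=math.radians(5)):
    return all(tilt(q) <= tol for q in waypoints)

level = (0.0, 0.0, 0.0, 1.0)                       # identity: no tilt
tilted = (math.sin(0.2), 0.0, 0.0, math.cos(0.2))  # 0.4 rad about x
print(satisfies_constraint([level]))               # → True
print(satisfies_constraint([level, tilted]))       # → False
```

In MoveIt, the equivalent is an `OrientationConstraint` message with per-axis angular tolerances, checked against every state along the planned path.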

Figure 4: Motion Planning with joint and orientation constraints vs without.

Another useful MoveIt motion planning tool is the ability to plan movements to a goal position in either Cartesian or joint space, switching between the two for different desired trajectory outcomes. Cartesian-space planning is used whenever the end-effector link must follow a very precise motion. In our application, we made use of this functionality when moving down from the box approach position to the grab position and back again. Our robot has to carry the boxes with it, and due to limited space on its base area, all of the boxes sit quite close together; using Cartesian planning, we could ensure verticality while raising a box from its holder, avoiding latching between boxes and unnecessary stops. Joint-space planning, however, is useful for obtaining more natural trajectories when the arm moves between different grabbing positions, making the movement smoother.
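The contrast between the two planning spaces can be shown on a 2-link planar arm with hypothetical link lengths: interpolating the pose and solving inverse kinematics per waypoint (Cartesian-space planning) keeps a vertical lift exactly on the vertical line, while interpolating the joint angles directly (joint-space planning) generally sweeps the end effector off it.

```python
import math

L1, L2 = 0.6, 0.4  # hypothetical link lengths (metres)

def fk(q1, q2):
    # Forward kinematics of a 2-link planar arm.
    x = L1 * math.cos(q1) + L2 * math.cos(q1 + q2)
    y = L1 * math.sin(q1) + L2 * math.sin(q1 + q2)
    return x, y

def ik(x, y):
    # Analytic inverse kinematics (one of the two elbow solutions).
    c2 = (x * x + y * y - L1 * L1 - L2 * L2) / (2 * L1 * L2)
    q2 = math.acos(max(-1.0, min(1.0, c2)))
    q1 = math.atan2(y, x) - math.atan2(L2 * math.sin(q2),
                                       L1 + L2 * math.cos(q2))
    return q1, q2

start, goal = (0.7, 0.1), (0.7, 0.5)  # vertical lift along x = 0.7

# Cartesian-space plan: interpolate the pose, solve IK per waypoint.
cart = [ik(0.7, 0.1 + 0.1 * i) for i in range(5)]
# Joint-space plan: interpolate the joint angles directly.
qa, qb = ik(*start), ik(*goal)
joint = [(qa[0] + (qb[0] - qa[0]) * t, qa[1] + (qb[1] - qa[1]) * t)
         for t in (i / 4 for i in range(5))]

# Maximum horizontal deviation from the vertical line x = 0.7.
cart_dev = max(abs(fk(*q)[0] - 0.7) for q in cart)
joint_dev = max(abs(fk(*q)[0] - 0.7) for q in joint)
print(cart_dev < 1e-9 < joint_dev)  # Cartesian plan stays on the line
```

The joint-space path deviates by a few centimetres at the midpoint on this toy arm, which is exactly why Cartesian planning was chosen for lifting boxes out of their closely spaced holders.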

Figure 5: Motion Planning in Cartesian Space vs Joint Space.

This is just a brief summary of how we used MoveIt to develop a preliminary robotic pick-and-place application; MoveIt still has many more tools to offer. Some of its most advanced applications include integrating 3D sensors to build a perception layer for object recognition in pick-and-place tasks, or using deep learning algorithms for grasp pose generation, areas that will be explored in the next steps. Stay tuned for future updates on the development of a robotic manipulation application using MoveIt’s latest implementations.

Below you will find a short demonstration of the application developed so far, running on Robotnik’s RB-KAIROS mobile manipulator.

Component Sorting manipulation application DEMO