Sadako’s update on IBEC integration tests


In the last week of October 2020, Sadako and IBEC hit an important milestone in the HR-Recycler project as the first integration tests were organized between the two partners. The aim of these tests was to assess the joint functioning of the computer vision modules developed by Sadako with the human-robot interaction modules developed by IBEC, in the simulated setting of a worker dismantling an electronic waste object on a disassembly workbench. More specifically, the functionalities tested were how well the worker can interact with the workbench robot through a predetermined set of gestures, and how well the robot changes its behavior as a function of the human-robot distance. These tests were the subject of a post by IBEC on this blog last month; we at Sadako thought we could add a little insight from the computer vision perspective:

Software integration:

During the tests, production-ready versions of the software were employed, using ROS as an interface both to acquire the images from the RealSense cameras and to output the inferred information later processed by IBEC. A high-level architecture diagram of the software is shown in the image below:

Both the pose and action recognition modules receive images from the camera through ROS, preprocess them and send them to the detectors. The detections are then handed back to the pose and action recognition software and sent to IBEC via a ROS topic.
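In practice this follows the standard ROS publish/subscribe pattern. Below is a minimal, hypothetical sketch of such a node; the topic names, the message types and the detector object are illustrative assumptions, not Sadako's actual interface:

```python
# Minimal sketch of the detection-node pattern described above (ROS 1).
import rospy
from sensor_msgs.msg import Image
from std_msgs.msg import String
from cv_bridge import CvBridge

class DetectionNode:
    def __init__(self, detector):
        self.detector = detector   # a pose or action model supplied elsewhere
        self.bridge = CvBridge()   # converts ROS image messages to OpenCV arrays
        self.pub = rospy.Publisher("/sadako/detections", String, queue_size=1)
        rospy.Subscriber("/camera/color/image_raw", Image, self.on_image)

    def on_image(self, msg):
        frame = self.bridge.imgmsg_to_cv2(msg, "bgr8")  # ROS image -> numpy array
        result = self.detector.infer(frame)             # run the detector
        self.pub.publish(String(data=str(result)))      # hand the result downstream

if __name__ == "__main__":
    rospy.init_node("detection_node")
    DetectionNode(detector=...)    # plug in the actual model here
    rospy.spin()
```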


Action recognition software:

Detecting specific actions in a video in real time required a different type of neural network from the one usually used at Sadako Technologies. Indeed, the standard neural networks employed in computer vision extract spatial features from images to detect some specific target object. To infer information from a video feed, temporal features need to be examined in addition to the spatial ones (i.e. how the features evolve through time). In 2018, Facebook research published VMZ, a neural network architecture that has the particularity of factorizing its convolutions on video data into spatial convolutions (examining one area of a frame) and temporal convolutions (examining how a single pixel location evolves over time). This allows the detection of time-dependent features, such as the gestures in this particular case. This network uses state-of-the-art deep learning techniques (residual learning, as in ResNets), with some changes and adjustments to the neural network architecture to make it able to detect temporal features.
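The core building block can be sketched in PyTorch as follows; this is a minimal illustration of such a factorized spatio-temporal convolution, with invented layer sizes rather than the network actually deployed:

```python
# A "(2+1)D" convolution: a 2D spatial convolution followed by a 1D temporal one.
import torch
import torch.nn as nn

class SpatioTemporalConv(nn.Module):
    def __init__(self, in_ch, out_ch, mid_ch=64):
        super().__init__()
        # Kernel (1, 3, 3): convolves within each frame (spatial features).
        self.spatial = nn.Conv3d(in_ch, mid_ch, kernel_size=(1, 3, 3), padding=(0, 1, 1))
        # Kernel (3, 1, 1): convolves across frames (temporal features).
        self.temporal = nn.Conv3d(mid_ch, out_ch, kernel_size=(3, 1, 1), padding=(1, 0, 0))
        self.relu = nn.ReLU()

    def forward(self, x):                    # x: (batch, channels, time, H, W)
        return self.relu(self.temporal(self.relu(self.spatial(x))))

clip = torch.randn(1, 3, 16, 112, 112)       # a 16-frame RGB clip
features = SpatioTemporalConv(3, 64)(clip)   # spatio-temporal feature volume
```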

Pose detection:

The other tested functionality was the ability of the robot to change its behavior (i.e. movement speed) when the worker gets closer to the robot than some specific distance. To develop this functionality, the OpenPose software was used to retrieve the worker's skeleton joint coordinates on the RGB image. Those joint coordinates were then matched with the depth image given by the camera to locate the worker in space, relative to the camera. However, knowing where a worker is relative to the camera is not enough information to measure the worker-robot distance. To output the worker-robot coordinates, a calibration procedure was developed by Sadako in which the camera's position is measured, using the RGB and depth information, with respect to a marker visible in the scene. The robot's position with respect to this marker is also measured; combining both measurements allows the code to return an accurate measurement of the human-robot distance in real time:
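As an illustration, the back-projection and change of frame can be sketched as follows. The camera intrinsics, the calibration transform and the helper names are invented for the example, and the depth image is assumed to be in metres:

```python
# Sketch: project a skeleton joint (pixel + depth) into camera coordinates,
# move it into the robot frame, and measure the human-robot distance.
import numpy as np

fx, fy, cx, cy = 615.0, 615.0, 320.0, 240.0   # illustrative camera intrinsics
T_robot_camera = np.eye(4)                    # from the marker-based calibration

def joint_to_robot_frame(u, v, depth_m):
    """Back-project pixel (u, v) with depth into 3D, then move to the robot frame."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    p_cam = np.array([x, y, depth_m, 1.0])    # homogeneous camera-frame point
    return (T_robot_camera @ p_cam)[:3]

def human_robot_distance(joints_px, depth_image):
    """Smallest distance between any detected joint and the robot base (frame origin)."""
    points = [joint_to_robot_frame(u, v, depth_image[v, u]) for u, v in joints_px]
    return min(np.linalg.norm(p) for p in points)
```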

The integration tests with IBEC were successful despite being organized against the ticking clock of rising Covid-19 cases, which threatened to limit access to the facilities and the possibility of bringing together teams from different companies. Further tests are currently being organized between Sadako and IBEC to evaluate improved versions of both parties' software, taking into account the feedback gathered during the first integration test session.

CS GROUP’s HR Building Editor tool for the creation of 3D Factory models


The HR-Recycler project deals with activities related to ‘hybrid human-robot recycling activities for electrical and electronic equipment’ operating in an indoor environment. One of the tasks of the project focuses on the development of a “hybrid recycling factory for electrical and electronic equipment”. To this end, CS GROUP is developing a 3D Building Editor that allows people to rapidly design a 3D model of their factory. The main objective of this building editor is to provide an easy-to-use 3D modelling tool. The building editor can produce multi-level floor 3D models following BIM practices and uses the widely adopted open Industry Foundation Classes (.ifc) file format for interoperability.
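Because the exported models are standard IFC files, they can be read by any IFC-aware tool. As a hypothetical illustration using the open-source IfcOpenShell library (the file name is invented):

```python
# Sketch: inspect a model exported by the building editor with IfcOpenShell.
import ifcopenshell

model = ifcopenshell.open("factory.ifc")

# List the building storeys and walls contained in the exported model.
for storey in model.by_type("IfcBuildingStorey"):
    print("Storey:", storey.Name)
for wall in model.by_type("IfcWall"):
    print("Wall:", wall.GlobalId, wall.Name)
```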

During the project we have worked to provide a tool that allows users to design, in a few clicks, the 3D plan of a factory containing the essential elements such as walls, windows, floors, stairs…

For robot navigation purposes, the tool was designed to enable the creation of a “navigation mesh” used to assign a specific state to an area, for example to help AGV navigation. This option makes it possible to distinguish between different areas in order to avoid human-robot collisions.

As shown in Figure 1 and Figure 2, the tool allows users to quickly draw a 3D plan and to place windows, walls, doors and even specific areas that represent the robot navigation zones, while offering two views to the users: one in 2D and a second in 3D.

For example (Figure 3), we have implemented a very simple process to draw walls: the user first selects the “Wall” tool in the top toolbar (Figure 2), then positions the first point of the wall and adds the following ones. Once created, a wall can be edited unless its position has been secured and locked.

In the same way, windows and doors can easily be placed on the 2D view by selecting the corresponding icon in the toolbar. An element is highlighted in red if it cannot be placed at the selected location.

Following the same method, storeys can easily be added and managed within the building editor by using the “Buildings” menu, as shown in Figure 1.

Furthermore, as described above, we have worked to allow the user to create navigation meshes (Figure 4) as soon as the factory floor model is created. Through the tool, the user can draw areas, shown in red in the 2D view and in green in the 3D view in Figure 4, that represent the navigation mesh. The robot will thus be able to know the zones where it is authorized to circulate. These zones, once created with the building editor, can be extracted to be interpreted by the FFM and the factory floor orchestrator during the factory setup export.
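To give an idea of how such zones could be consumed downstream, here is a minimal, hypothetical sketch of a zone check; the polygon coordinates and the function are invented for illustration and do not reflect the actual FFM/orchestrator interface:

```python
# Sketch: check whether an AGV position lies inside an authorized zone.
from shapely.geometry import Point, Polygon

# A zone drawn in the building editor, exported as a list of floor coordinates.
allowed_zone = Polygon([(0, 0), (12, 0), (12, 8), (0, 8)])

def may_circulate(x, y):
    """True if the robot position lies inside the authorized navigation zone."""
    return allowed_zone.contains(Point(x, y))

print(may_circulate(5.0, 4.0))     # True: inside the zone
print(may_circulate(15.0, 4.0))    # False: outside, the robot must not enter
```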

Once the 3D model is completed, the user can export it to the factory floor configurator and the factory floor orchestrator to allow the positioning of the different POIs and to serve other orchestration purposes.

Comau explains the development of a novel ROS interface


COMAU's ROS driver for robot operations

Due to the need for a common platform based on the ROS environment for all the partners of the consortium (and subsequently for customers or end users who want to use the same solution), Comau worked on a novel ROS interface for the robots involved in WEEE management (classification and disassembling scenarios) that could be extended to the entire range of its robotics portfolio. In particular, the new ROS interface is able to expose both joint and Cartesian positions from the robot to the external world by publishing the appropriate topics over a TCP/IP channel.
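The pattern can be sketched as follows in ROS 1; the topic names, the rate and the controller-read helpers are hypothetical placeholders, not Comau's actual driver:

```python
# Sketch: publish the robot's joint and Cartesian state on ROS topics.
import rospy
from sensor_msgs.msg import JointState
from geometry_msgs.msg import PoseStamped

def read_joint_positions():
    # Stub standing in for the real controller read (hypothetical helper).
    return [0.0] * 6

def read_cartesian_pose():
    # Stub standing in for the real controller read (hypothetical helper).
    return PoseStamped().pose

rospy.init_node("robot_state_sketch")
joint_pub = rospy.Publisher("/robot/joint_states", JointState, queue_size=1)
pose_pub = rospy.Publisher("/robot/cartesian_pose", PoseStamped, queue_size=1)

rate = rospy.Rate(100)                        # illustrative publication rate
while not rospy.is_shutdown():
    js = JointState()
    js.header.stamp = rospy.Time.now()
    js.name = ["joint_%d" % i for i in range(1, 7)]
    js.position = read_joint_positions()
    joint_pub.publish(js)

    ps = PoseStamped()
    ps.header.stamp = rospy.Time.now()
    ps.pose = read_cartesian_pose()
    pose_pub.publish(ps)
    rate.sleep()
```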

Two different modalities have been developed for the two specific cases related to classification and disassembling scenarios:

  1. For the classification phase, the vision system able to detect the parts in the box has been integrated through a customized ROS driver that sends the data from the vision system (Cartesian positions) to the controller, as shown in picture 2.
  2. In the disassembling case, the need is for a high update rate in order to correct the robot motion with a feedforward control using the data of a force/torque sensor mounted on the flange of the robot. Here, a frequency of 500 Hz makes it possible to smoothly correct the path of the robot for delicate tasks of the disassembling phase such as continuous, contact-based processes (e.g. grinding or cutting). A specific option called SENSOR_TRACKING has been integrated in the ROS architecture; it is able to read and write the sensor data in real time (a 2 ms cycle) and send it to the controller for the motion execution, as sketched below.
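
As a rough illustration of this second modality, the sketch below shows a 500 Hz loop that reads a force/torque topic and publishes a small path correction every 2 ms cycle. The topic names, the message types and the gain are invented; this is not the SENSOR_TRACKING implementation:

```python
# Sketch: 500 Hz correction loop driven by a force/torque sensor (ROS 1).
import rospy
from geometry_msgs.msg import WrenchStamped, TwistStamped

latest_wrench = WrenchStamped()

def on_wrench(msg):
    global latest_wrench
    latest_wrench = msg                       # cache the newest sensor reading

rospy.init_node("sensor_tracking_sketch")
rospy.Subscriber("/ft_sensor/wrench", WrenchStamped, on_wrench)
corr_pub = rospy.Publisher("/robot/path_correction", TwistStamped, queue_size=1)

K = 0.001                                     # compliance gain: m/s per newton
rate = rospy.Rate(500)                        # 500 Hz -> one cycle every 2 ms
while not rospy.is_shutdown():
    corr = TwistStamped()
    corr.header.stamp = rospy.Time.now()
    # Nudge the tool away from excessive contact force along each axis.
    corr.twist.linear.x = -K * latest_wrench.wrench.force.x
    corr.twist.linear.y = -K * latest_wrench.wrench.force.y
    corr.twist.linear.z = -K * latest_wrench.wrench.force.z
    corr_pub.publish(corr)
    rate.sleep()
```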