May 2021

CVinHRC workshop will be held virtually, as will ICCV 2021


CVinHRC | ICCV 2021 – International Conference on Computer Vision

Topics Covered

The technological breakthrough in robotics and the needs of the factories of the future (Industry 4.0) bring robots out of their cages to work in close collaboration with humans, aiming to increase productivity, flexibility and autonomy in production. To enable true and effective human-robot collaboration, the perception systems of such collaborative robots should be endowed with advanced computer vision methods that will transform them into active and effective co-workers.

Recent advances in the field of computer vision are anticipated to resolve several complex tasks that require human-robot collaboration in the manufacturing and logistics domains. However, the applicability of existing computer vision techniques in such factories of the future is hindered by the challenges that real, unconstrained industrial environments with cobots impose, such as variability in the position and orientation of manipulated objects, deformation and articulation, occlusions, motion, dynamic environments, human presence and more.

In particular, the variability of manufactured parts and of the lighting conditions in realistic environments renders robust object recognition and pose estimation challenging, especially when collaborative tasks demand dexterous and delicate grasping of objects. Deep learning can further advance existing methods to cope with occlusions and other such challenges, while the combination of learning with visual attention models could reduce the need for data redundancy by selecting the most prominent and context-rich viewpoints to be memorized, boosting the overall performance of the vision systems. Moreover, close-distance collaboration with humans requires accurate SLAM and real-time monitoring and modelling of the human body, applied to robot manipulation and AGV navigation tasks in unconstrained environments, to ensure safety and human trust in the new automation solutions. In addition, more advanced semantic SLAM methods are needed to endow cobots with robust long-term autonomy requiring no or minimal human intervention. What is more, the fusion of deep learning with multimodal perception can offer solutions to complex manufacturing tasks that require powerful vision systems to deal with challenges such as articulated objects and deformable materials handled by the robots. This can be achieved not only by using vision systems as passive observers of the scene, but also through the active involvement of collaborative robots endowed with visual search and view planning capabilities that drastically increase their knowledge of their surroundings.

The goal of this workshop is to bring together researchers from academia and industry in the field of computer vision and enable them to present novel methods and approaches that set the basis for further advanced robotic perception, dealing with the significant challenges of human-robot collaboration in the factories of the future.

We encourage submissions of original and unpublished work that addresses computer vision for robotic applications in the manufacturing and logistics domains, including but not limited to the following:

  • Deep learning for object recognition and pose estimation in manufacturing and logistics
  • 6-DoF object pose estimation for grasping
  • Real time object tracking and visual servoing
  • Vision-based object affordances learning
  • Vision-based manipulation skills modelling and knowledge transfer
  • View planning with robot active vision
  • Human presence modelling, detection and tracking in real factory environments
  • Human-robot workspace modelling for safe manipulation
  • Semantic SLAM and lifelong environment learning
  • Safe AGV navigation based on visual input
  • Multi-AGV perception and coordination for multiple tasks
  • Visual search for AGVs and manipulators in industrial environments
  • Sensor fusion (Camera, Lidar, Haptic, etc.) for enhanced scene understanding
  • Vision-based attention modeling for collaborative tasks

Invited Speakers:

INTERECYCLING on the future of WEEE recycling


For almost a year now, COVID-19 has been changing every country and the way we move.

Above all, many businesses will need to be reinforced, and vaccines are giving some hope.

Globally, many economies and world GDP (gross domestic product) have decreased in a way not seen in 50 years.

Despite that, and looking at our recycling industry, global figures point to record sales of IT (Information Technology) equipment as economies pursue digitalization. The two biggest manufacturers have already published sales growth of around 30% for these items.

Facing this, plus better collection of waste thanks to restrictions that reduce illegal disposal, we see a stable future for recycling with growth expectations, where further developments and more technical solutions will be needed to perform better and maximize the added value for industry and the environment.

 

Ricardo Vidal

TECNALIA combines Neuroscience and Mixed Reality


“Workmates from Indumetal, Gaiker and Tecnalia interacted with a Mixed Reality environment, working with a collaborative robot in the WEEE recycling domain”

The romantic idea that emotions are born from the heart has been a curious way of expressing that thoughts and emotions are elements that coexist separately, that is, that the brain and the heart are unconnected organs.

Today, thanks to Neuroscience (a scientific specialty dedicated to the comprehensive study of the nervous system, its functions, structure and other aspects that help explain various characteristics of behaviour and cognitive processes), theories such as “emotions do not belong exclusively to the heart, nor does reason alone reside in the brain” take on special relevance. What Neuroscience shows us is that cognition and emotion are closely linked: they are two sides of the same coin, and both reside in our nervous system.

A group of researchers at Tecnalia have created a laboratory dedicated to studying User Experience (UX) and Human Factors (HF), in order to take emotions into account as an innovative aspect not considered until now. With this, they intend to bring objectivity to the data in the analysis of interactions and of abstract emotional and cognitive processes.

Tecnalia’s Human Factors & User Experience laboratory integrates different devices (dry and wet EEG systems, sensors to measure galvanic skin response and heart rate, eye-tracking glasses, an indoor GPS system, etc.) that allow the accurate and non-invasive measurement of a person’s psycho-physiological signals when exposed to diverse external stimuli. Thus, the UX & HF laboratory goes one step further than the usual user experience study (for example, surveys, focus groups, interviews, thinking aloud, etc.). Instead of asking users or observing them while they use a product or service to learn how the experience has been, with our laboratory technology we measure what the nervous system tells us while presenting different stimuli. This yields objective data and avoids the biases derived from the subjective observation of the interviewer, the subjective assessment of the respondent, and even their own emotions.

Specifically, Tecnalia researchers have reproduced in a Virtual (Mixed) Reality environment the process of disassembling a PC tower (collaboratively with a robot that helps in certain tasks), in such a way that the participant in the experiment has actually been able to “feel” that they were physically touching the PC components in the disassembly process. About 50 people from the consortium companies (Indumetal, Gaiker and Tecnalia) took part in the experiment over two weeks. All of them were monitored and asked to repeat the process with slight variations in the behaviour of the collaborative robot (for example, robot malfunction and robot speed).

This Mixed Reality experiment replicated the layout of a human-robot collaborative workstation in a WEEE (Waste Electrical and Electronic Equipment) recycling plant. The participants were asked to perform the same activities as the workers in the real plant, and the virtual robots interacted with them on the same basis as in reality. However, during the experiments the robots sometimes behaved differently, and the psycho-physiological response of the participant was registered.

In order to make the participants feel the experience as real as possible, a realistic 3D virtual scenario of a factory was created for viewing through immersive virtual reality glasses, and this virtual scenario was mixed with real tracked elements with which the participants interacted directly in a task of disassembling electronic components.

The real objects that have been tracked (with small “bubbles”) and aligned with the virtual reality in real time are the worktable, the PC that is manipulated (specifically its top cover) and a fan inside the PC.

Once we had achieved this mixed experience (the VR scenario plus real tracked objects), the next challenge was to include interaction by tracking the participants’ hands. For this purpose, different technological approaches were tested and compared.

In any case, after several tests, the experiment with the participants was carried out with vision-based tracking of the hands and fingers, supported by a wrist tracker.

With this experiment, the worlds of Virtual/Mixed Reality and Neuroscience have come together to ensure that, in the future, our “intelligent” robot workmates will be able to adapt to our emotional and cognitive state, achieving a more fluid interaction between humans and their fellow machines.

GAIKER on assessing the environmental and social benefits of the pilot studies


GAIKER will assess the environmental and social benefits of the pilot studies carried out in the HR-Recycler project, applying the life cycle perspective

GAIKER continues defining the scenarios of the pilot demonstrations in which the performance of the solutions developed within the HR-Recycler project will be tested and validated in real operating environments. The implementation of technical changes will introduce modifications to the existing recycling operations that will need to be evaluated. The study will focus on assessing the environmental and social impacts of the changes that the HR-Recycler project introduces to the WEEE recycling process, compared to current practice. Therefore, the project team is gathering information on the current processes and will compare it with that of the developed human-robot collaborative processes.

The evaluation will be carried out using the life cycle assessment (LCA) methodology, to broaden the scope of the assessment and conduct a more holistic analysis. This holistic approach (“from cradle to grave”) is what allows answering the question of how a certain product has interacted with the environment as a whole. The environmental LCA methodology focuses primarily on assessing the environmental impacts associated with a product or service throughout its whole life, while the social LCA (s-LCA) is a method that can be used to assess the social and sociological aspects of products and services, and their actual and potential positive as well as negative impacts along their life cycle.

Within the HR-Recycler project, to perform a fair comparative evaluation, it is necessary to include not only the processes directly involved in the pilots and the current processes, but also the relevant upstream and downstream processes. This is particularly important for the downstream processes, as the new processes developed in the project should improve productivity and efficiency while increasing the recycling ratios and reducing the amount of waste generated in WEEE treatment.

ISO 14040 and ISO 14044 will be the reference methodology to follow in the LCAs. In the case of the social LCA, the assessment team will also take into account the new Guidelines for Social Life Cycle Assessment of Products and Organizations, developed by UNEP/SETAC in 2020, which are based on the abovementioned standards.

On the other hand, the function of the system to be assessed is to recover and recycle as many materials, components and products as possible, fulfilling the mandatory requirements of the legislation. Accordingly, the functional unit (FU) has been defined as 1.0 t of products recovered from WEEE and available to be used as secondary raw material or reused as a component or product. Regarding the limits of the system, the operations of classification, dismantling and sorting of WEEE in each Use Case (UC), as well as the end of life (EoL) of the waste generated in the abovementioned operations, have been included.

INDUMETAL on Human-Robot collaboration improving WEEE handling


Human-Robot collaboration improving microwave oven and PC tower handling for their dismantling

Microwave ovens and computer towers are the two cases studied at Indumetal Recycling within the HR-Recycler project, which seeks both the best processing technique and the prevention of potential injuries in the disassembly and decontamination processes of PC towers, microwave ovens, emergency lamps, and LED and LCD screens.

Image 1. Dismantled microwave oven

 

Removing the condenser from a microwave oven before carrying out any device management operation is a legal obligation, given that condensers contain highly toxic substances. Due to the variety of microwave oven designs on the market, this extraction is currently done exclusively by hand. To perform this disassembly operation quickly and effectively, the operator needs extensive experience.

In order to remove the housing that covers the microwave, it is usually sufficient to unscrew a limited number of elements. However, once the housing is unscrewed, accessing the interior of the oven is not an easy task, and the help of a pick or a lever is needed to remove it.

Image 2. Dismantled PC tower

 

In the case of PC towers, it is mandatory to remove the battery that can be found inside, as well as any PCB larger than ten square centimetres. To access these components, it is necessary to remove the external metallic housing. As with microwave ovens, there is a wide variety of PC tower designs, so this manipulation is also currently done manually.

Additionally, although it is not compulsory, Indumetal’s operators remove the hard drive and the disk from PC towers, since these components contain high-value materials that can be recovered if they are treated separately.

In these types of processes, the operator must move the equipment repeatedly to be able to access the inner components. Robots can collaborate in this heavier and more dangerous task, reducing repetitive strain and accidental injuries in humans. However, these manoeuvres demonstrate that the experience and knowledge of the operator are key to detecting the different components of WEEE. The dismantling processes currently studied in HR-Recycler are clear examples where hybridization between human work and the mechanical work of a robot would offer numerous benefits.

ROBOTNIK explains aspects involved in the safety signals


Industrial robots have come front and center on the international stage as they’ve become widespread in the industrial sector. As they’ve become more powerful, more advanced and more productive, the need for robot safety has increased.

There are a number of ways manufacturers can introduce safety measures into their automated operations, and the type and complexity of these safety measures will vary with the robotic application. With the aim of making AGVs safer, there are certain safety rules and standards that these collaborative robots must comply with; in Europe they are found in EN ISO 3691-4:2020 “Industrial trucks — Safety requirements and verification — Part 4: Driverless industrial trucks and their systems” and in ISO 12100:2010, clause 6.4.3.

For Robotnik, as an experienced robot manufacturer and within the collaborative environment of the HR-Recycler project, this aspect is especially important, since humans and robots will be working side by side. The solution proposed for routing materials inside a factory has to operate in a safe manner; in this case, the robots designed are the RB-Kairos (mobile robotic manipulator) and the RB-Ares (pallet truck), and as AGVs the main aspects are to show the intention to move, to elevate or to manipulate. To ensure correct operation within the complex framework of this project, Robotnik has equipped its robots with sensors and signalers that allow the robot to proceed safely and show its intentions in advance.

This post aims to give the reader a brief description of what the standards require, why, and how all their premises will be met.

First of all, what do the standards include? Regarding warning systems, they say:

  1. When any movement begins after a stop condition of more than 10 seconds, a visible or acoustic warning signal will be activated for at least 2 seconds before the start of the movement.
  2. A visible or acoustic warning signal will be activated during any movement.
  3. If the human detection means are active, the signal will be different.
  4. When robots change their direction from a straight path, a visible indication of the direction to be taken will be given before the direction changes, in the case that the robot is driving autonomously.
  5. When the lift is active, there must be special signage.

The proposed solution is a two-step software layer that manages the signals of the robot, explained after the diagram and shown in its yellow cells:

The robot_local_control is a manager node: it has information about the status of the whole robot, that is, the status of the elevator, the active goal, mission ended, etc. On the right side, a group of nodes manages the movement of the robot with a level of priority (a minimal sketch of such a priority scheme is given after the list):

  • robotnik_pad_node: the worker uses a PS4 pad to control the robot, and this node transmits the orders (non-autonomous mode).
  • Path planning nodes: such as move_base, which controls the robot; we speak of this as autonomous mode.
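
To make the priority idea concrete, here is a minimal, illustrative sketch in Python (ROS) of how pad commands could override the planner's commands. The topic names and the timeout are assumptions made for illustration, not Robotnik's actual interfaces (ROS also offers the ready-made twist_mux package for exactly this purpose):

    #!/usr/bin/env python
    # Illustrative priority multiplexer: pad commands override planner commands.
    # Topic names and the timeout are assumptions, not Robotnik's real interfaces.
    import rospy
    from geometry_msgs.msg import Twist

    PAD_TIMEOUT = 0.5  # seconds during which a pad command keeps priority

    class CmdVelMux(object):
        def __init__(self):
            self.last_pad_stamp = rospy.Time(0)
            self.pub = rospy.Publisher('cmd_vel', Twist, queue_size=1)
            rospy.Subscriber('pad_cmd_vel', Twist, self.pad_cb)  # manual, high priority
            rospy.Subscriber('nav_cmd_vel', Twist, self.nav_cb)  # autonomous, low priority

        def pad_cb(self, msg):
            # Manual commands always pass through and take priority.
            self.last_pad_stamp = rospy.Time.now()
            self.pub.publish(msg)

        def nav_cb(self, msg):
            # Forward autonomous commands only if the pad has been silent long enough.
            if (rospy.Time.now() - self.last_pad_stamp).to_sec() > PAD_TIMEOUT:
                self.pub.publish(msg)

    if __name__ == '__main__':
        rospy.init_node('cmd_vel_mux')
        CmdVelMux()
        rospy.spin()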

Robotnik has installed on its AGVs two ways to alert facility users: acoustic devices and light indicators, driven through the acoustic_safety_driver and leds_driver.

As you can see, two steps link the top and bottom parts: a node that transforms the movement into signals showing the robot's intention, and another that orchestrates both signal types and manages the requirements of the standard.

The turn_signaling_controller aims to fulfil the first and fourth requirements of the standard, depending on the robot mode (autonomous or non-autonomous).

In non-autonomous mode, as the standard says, the motion depends on appropriately authorised and trained personnel, so it is enough to show that the robot is moving by reading the movement command and checking the velocity applied.

In autonomous mode, the robot navigates to a goal point through a path calculated by the planner, which also manages the AGV to avoid obstacles dynamically; for this reason it is important to alert workers at every moment. The pipeline goes as follows:

This is a very brief description of the function: it keeps the plan in mind and recalculates at the same time as the planner does, so as to be able to show the most up-to-date prediction of motion.
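
As a rough illustration of this idea, the following Python (ROS) sketch derives a turn indication from the planner's path by comparing the current heading with the heading a few poses ahead. The topic names, thresholds and signal values are assumptions for illustration, not the actual turn_signaling_controller code:

    #!/usr/bin/env python
    # Illustrative sketch: derive a turn indication from the planned path.
    # Topic names, thresholds and signal values are assumptions.
    import math
    import rospy
    from nav_msgs.msg import Path
    from std_msgs.msg import String

    LOOKAHEAD = 10        # number of poses ahead to inspect
    TURN_THRESHOLD = 0.3  # radians of heading change that counts as a turn

    def path_cb(path):
        if len(path.poses) <= LOOKAHEAD:
            return
        p0 = path.poses[0].pose.position
        p1 = path.poses[1].pose.position
        pn = path.poses[LOOKAHEAD].pose.position
        current = math.atan2(p1.y - p0.y, p1.x - p0.x)  # heading now
        ahead = math.atan2(pn.y - p0.y, pn.x - p0.x)    # heading towards lookahead pose
        # Normalized signed difference between the two headings.
        delta = math.atan2(math.sin(ahead - current), math.cos(ahead - current))
        if delta > TURN_THRESHOLD:
            signal_pub.publish('turn_left')
        elif delta < -TURN_THRESHOLD:
            signal_pub.publish('turn_right')
        else:
            signal_pub.publish('straight')

    rospy.init_node('turn_signal_sketch')
    signal_pub = rospy.Publisher('turn_signal', String, queue_size=1)
    rospy.Subscriber('planned_path', Path, path_cb)
    rospy.spin()

Because the callback fires every time the planner republishes the path, the indication is recalculated together with the plan, which is exactly the behaviour described above.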

Last but not least, the robot_signal_manager aims to fulfil the remaining requirements, since it has access to the robot status: it shows a light or an acoustic signal 2 seconds before motion, it gives priority to the emergency signals (consistent with the behaviour of the robot: red signals mean that the robot will be stopped), and the signals that are not exclusive are shown using beacons or acoustic signals.
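
A minimal sketch of the pre-motion warning logic (requirements 1 and 2 of the list above) could look as follows in Python (ROS); the topics and the gating of the velocity command are simplifying assumptions, not the real robot_signal_manager:

    #!/usr/bin/env python
    # Illustrative sketch of the pre-motion warning: after a stop longer than
    # 10 s, warn for 2 s before letting motion start, and keep the beacon
    # active during any movement. Interfaces are assumptions.
    import rospy
    from geometry_msgs.msg import Twist
    from std_msgs.msg import Bool

    STOP_BEFORE_WARNING = 10.0  # s stopped before a new warning is required
    WARNING_TIME = 2.0          # s of warning before motion starts

    class SignalManagerSketch(object):
        def __init__(self):
            self.last_motion = rospy.Time.now()
            self.warning_until = rospy.Time(0)
            self.beacon_pub = rospy.Publisher('warning_beacon', Bool, queue_size=1)
            self.out_pub = rospy.Publisher('cmd_vel_safe', Twist, queue_size=1)
            rospy.Subscriber('cmd_vel', Twist, self.cmd_cb)

        def is_moving(self, msg):
            return abs(msg.linear.x) > 1e-3 or abs(msg.angular.z) > 1e-3

        def cmd_cb(self, msg):
            now = rospy.Time.now()
            if self.is_moving(msg):
                stopped_for = (now - self.last_motion).to_sec()
                if stopped_for > STOP_BEFORE_WARNING and self.warning_until < now:
                    # Start the 2 s warning; hold the motion command meanwhile.
                    self.warning_until = now + rospy.Duration(WARNING_TIME)
                if now < self.warning_until:
                    self.beacon_pub.publish(Bool(True))  # warn, do not move yet
                    return
                self.last_motion = now
                self.beacon_pub.publish(Bool(True))      # keep signalling while moving
            else:
                self.beacon_pub.publish(Bool(False))
            self.out_pub.publish(msg)

    if __name__ == '__main__':
        rospy.init_node('signal_manager_sketch')
        SignalManagerSketch()
        rospy.spin()

In the same spirit, the emergency and occupied-zone signals would be extra inputs to this node, with the emergency (red) signal always winning over the others.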

The occupied zone is one of the non-exclusive signals: robots have some extra beacons that blink red when there is something in the protective zone (close to the robot's intended motion, inside the critical zone) and yellow when there is something in the warning zone (near the protective zone).

Summarizing, safety is not only about stopping the robot or avoiding a crash when human-robot collaboration takes place. With the development of these nodes, Robotnik aims not only to decrease the probability of accidents and comply with the safety ISO premises, but also to help workers feel more comfortable with the AGV's decisions and to bring human-robot collaboration closer by showing clear signals about how the robot will behave.

CERTH explains the visual affordances concept


“ The affordances of the environment are what it offers the animal, what it provides or furnishes… It implies the complementarity of the animal and the environment.” – James J. Gibson, 1979

Every object in our world has its own discrete characteristics, derived from its visual properties and physical attributes. These features are effective for recognizing objects and classifying them into different categories, and thus they are widely used by vision recognition systems in the research community. However, such properties cannot indicate how an object can be used by a human: visual and physical properties alone provide no clue about the set of potential actions that can be performed in a human-object interaction.

Affordances describe the set of possible actions that an environment allows an actor [2]. Thus, unlike the aforementioned object attributes, which paint a picture of the object as an independent entity, affordances are able to capture the functional interactions of object parts with humans. In the context of robotic vision, taking object affordances into account is vitally important in order to create efficient autonomous robots that are able to interact with objects and assist humans in daily tasks.

Affordances can be used in a wide variety of applications. Among others, affordances are useful for anticipating and predicting future actions, because they represent the set of possible actions that may be performed by a human being. Additionally, by defining a set of possible and potential actions, affordances provide beneficial clues not only for efficient activity recognition but also for validating the functionality of objects. Last but not least, affordances can be characterized as an intuition about objects' value and significance, enhancing scene understanding and traditional object recognition approaches.

In conclusion, affordances are powerful. They provide the details needed to make computer vision systems able to imitate the human object recognition system. Moreover, affordances offer a very effective and unique combination of features that seems able to enhance almost every computer vision system.

Illustration 1: Affordances Examples [1]

 

[1] Zhao, X., Cao, Y., & Kang, Y. (2020). Object affordance detection with relationship-aware network. Neural Computing and Applications.

[2] Hassanin, M., Khan, S., & Tahtali, M. (2021). Visual Affordance and Function Understanding: A Survey.

COMAU presents a new paradigm in collaborative robotics


A new paradigm in collaborative robotics

Comau introduced to the market in March 2021 its Racer-5 COBOT, a new paradigm in collaborative robotics which meets the growing demand for fast, cost-effective cobots that can be used in restricted spaces and in different application areas. Countering the belief that collaborative robots are slow, Racer-5 COBOT is a 6-axis articulated robot that can work at industrial speeds of up to 6 m/s. With a 5 kg payload and 809 mm reach, it ensures optimal industrial efficiency while granting the added benefit of safe, barrier-free operation. Furthermore, the cobot can instantly switch from collaborative mode to full speed when operators are not around, letting its 0.03 mm repeatability and advanced movement fluidity deliver unmatched production rates.

Racer-5 COBOT enables system integrators and end users to automate even the most sophisticated manufacturing processes without sacrificing speed, precision or collaborative intelligence. With this powerful industrial robot operating in dual modes, customers are able to install a single, high-performance solution rather than having to deploy two distinct robots. With advanced safety features fully certified by TÜV Süd, an independent and globally recognized certification company, the cobot can be used within any high-performance line without the need for protective barriers, which effectively reduces safety costs and floorspace requirements.

Racer-5 COBOT also features integrated LED lighting to provide real-time confirmation of the workcell status. Finally, electrical and air connectors are located on the forearm to grant greater agility and minimize the risk of damage. All this enables Racer-5 COBOT to ensure higher production quality, better performance, faster cycle times and reduced capital expenditures.

The new Racer-5 COBOT delivers the speed and precision the small payload collaborative robotics market was missing, adding advanced safety features to the standard Racer-5 industrial robot and obtaining a fast, reliable and user-friendly cobot that can be used in any situation where cycle times and accuracy are paramount.

Made entirely by Comau (Turin, Italy), Racer-5 COBOT has a rigid construction that facilitates higher precision and repeatability year after year, making it particularly suitable for assembly, material handling, machine tending, dispensing and pick-and-place applications within the automotive, electrification and general industry sectors. In addition, the compact cobot can be easily transported and installed almost anywhere, helping users optimize their processes and protect their investment.

The HR-Recycler project contributed to the development of this new product, which can be considered an exploitable result of the project itself. It will be applied, in particular, in the WEEE disassembly scenario, testing and validating the collaborative features with the integration of specific disassembly tools such as grippers, industrial screwdrivers and grinders.