Bayesian view on Robot Motion Planning

Planning as Inference

High-dimensional motion planning algorithms are crucial for planning robot trajectories in complex environments. In addition to collision avoidance and constraint-handling capabilities, a key performance criterion is computation time. Although a range of approaches exists for motion planning, recent work [1,2] has sparked interest in a potentially transformative approach, according to which robot motion planning can be accomplished through probabilistic inference [3,4].

The planning-as-inference (PAI) view originated in artificial intelligence and machine learning research, where it has been adopted to solve planning and sequential decision-making problems for artificial agents and robots. The key idea is that PAI methods compute the posterior distribution over random variables subject to conditional dependencies in a joint distribution. In other words, in the PAI formulation the planning objectives are represented as probabilistic models, and probabilistic inference is used to compute the posterior distribution over trajectories, given constraints and goals. All motion objectives, such as motion priors, goals and task constraints, are fused together to find a posterior distribution over trajectories, in a way similar to Bayesian sensor fusion. This problem formulation makes the whole toolbox of approximate inference techniques available for a range of planning problems, and it provides further benefits such as uncertainty quantification, structured representation and faster convergence.
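To make the fusion idea concrete, here is a minimal one-dimensional sketch (our own illustration, not taken from [1-4]): a Gaussian motion prior over a single trajectory waypoint is combined with a Gaussian goal factor, and the posterior follows from the usual information-form product of Gaussians, exactly as in Bayesian sensor fusion.

```python
# Toy planning-as-inference example: fuse a Gaussian trajectory prior
# with a Gaussian goal factor. All numbers are illustrative assumptions.

def fuse_gaussians(mu_prior, var_prior, mu_goal, var_goal):
    """Product of two Gaussian factors (precisions add in information form)."""
    precision = 1.0 / var_prior + 1.0 / var_goal
    mean = (mu_prior / var_prior + mu_goal / var_goal) / precision
    return mean, 1.0 / precision

# The prior keeps the waypoint near the start (x = 0.0, loose belief);
# the goal factor pulls it towards x = 1.0 (tight belief).
mu_post, var_post = fuse_gaussians(0.0, 1.0, 1.0, 0.1)
print(mu_post, var_post)  # ~0.909, ~0.091: the tighter goal factor dominates
```

The posterior mean lands between prior and goal, weighted by their precisions; with many waypoints and many factors (priors, goals, obstacle likelihoods), the same product rule yields the posterior over whole trajectories.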

The PAI framework is also closely related to the perception-action generative models originating in cognitive science research. This cognitive generative model provides a unified framework for perception, learning and planning, and is known as active inference (AIF) [5]. The PAI framework also relates to stochastic optimal control and reinforcement learning.

Min-Sum Message Passing for Planning as Inference

The Gaussian process formulation of continuous-time trajectories [1] offers a fast solution to the motion planning problem via probabilistic inference on a factor graph. It fuses all planning objectives, represented as factors, into a non-linear least-squares optimization problem that is solved with numerical (Gauss-Newton or Levenberg-Marquardt) methods. Although this way of solving the factor graph is fast, the batch non-linear least-squares optimization makes it vulnerable to converging to infeasible local minima, so the planned trajectory is often not collision-free. Combining all factors of the graph makes the approach faster than state-of-the-art motion planning algorithms, but at the cost of being more prone to getting stuck in local minima. Naively re-optimizing the graph can help escape infeasible local minima, but at the expense of additional computation time.
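The following toy example (our own, with an invented scalar residual rather than the trajectory factors of [1]) shows why a batch Gauss-Newton solver can land in different solutions: the iteration simply converges to whichever basin the initialization falls into.

```python
# 1-D Gauss-Newton on a nonlinear residual r(x) with several roots.
# For a single scalar residual the Gauss-Newton step reduces to
# Newton's method on r(x) = 0: x <- x - r(x) / r'(x).

def gauss_newton(r, dr, x, iters=30):
    for _ in range(iters):
        x = x - r(x) / dr(x)
    return x

r  = lambda x: (x * x - 1.0) * (x - 3.0)  # roots at x = -1, 1, 3
dr = lambda x: 3 * x * x - 6 * x - 1      # derivative of x^3 - 3x^2 - x + 3

print(gauss_newton(r, dr, x=0.5))   # -> 1.0: basin of the nearest root
print(gauss_newton(r, dr, x=-2.0))  # -> -1.0: another initialization,
                                    #    another (possibly infeasible) solution
```

In motion planning, the analogue of a "wrong root" is a trajectory that satisfies the optimality conditions but still collides with an obstacle.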

TUM has proposed a message passing algorithm [6] that is more sensitive to obstacles while retaining fast convergence. We leverage the min-sum message passing algorithm, which performs local computations at each node to solve the inference problem on the factor graph. We first introduce the notion of a compound factor node to transform the factor graph into a linearly structured graph. The decentralized solution of each compound node increases sensitivity towards avoiding obstacles in complex planning problems.
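To illustrate the min-sum idea, the sketch below runs a discrete min-sum (Viterbi-style) pass on a chain of waypoints with a unary obstacle cost and a pairwise smoothness cost. This is only a toy analogue of [6]: MS2MP operates on continuous Gaussian process trajectories and compound factor nodes, whereas here the states and costs are invented and discretized for brevity.

```python
import numpy as np

T, K = 10, 21
positions = np.linspace(-1.0, 1.0, K)     # candidate positions per waypoint
start, goal = -1.0, 1.0

unary = 2.0 * np.exp(-positions**2 / 0.05)                 # obstacle at x = 0
pairwise = (positions[:, None] - positions[None, :]) ** 2  # smoothness cost

# Forward min-sum pass: msg[t, j] = minimum cost to reach state j at time t.
msg = np.full((T, K), np.inf)
msg[0] = unary + 1e6 * (positions != start)   # clamp the start waypoint
best_prev = np.zeros((T, K), dtype=int)
for t in range(1, T):
    cost = msg[t - 1][:, None] + 10.0 * pairwise  # cost[i, j]: state i -> j
    best_prev[t] = np.argmin(cost, axis=0)
    msg[t] = cost[best_prev[t], np.arange(K)] + unary
msg[-1] += 1e6 * (positions != goal)          # clamp the goal waypoint

# Backtrack the minimizing waypoint sequence.
path = [int(np.argmin(msg[-1]))]
for t in range(T - 1, 0, -1):
    path.append(int(best_prev[t][path[-1]]))
print(np.round(positions[path[::-1]], 2))     # trajectory from -1 to 1
```

Because every update is a local minimization over one node's neighbours, the computation decomposes over the graph, which is what the compound-node formulation exploits.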

PAI offers an interesting view on robot motion planning and on how message passing algorithms can be adapted to solve the approximate inference problem [4,6]. However, a major drawback of this approach is its inability to handle hard constraints: the fusion of motion objectives only allows soft constraints in the present problem formulation. For the WEEE disassembly setup in HR-Recycler, this can cause safety issues, as the PAI algorithm can generate trajectories that are not collision-free. In order to handle hard constraints, TUM is working on PAI-based risk-aware algorithms that can generate safe trajectories for the WEEE disassembly setup.

Fig. 1: Trajectory generated by min-sum message passing algorithm.

References

[1] M. Mukadam, J. Dong, X. Yan, F. Dellaert, and B. Boots, "Continuous-time Gaussian process motion planning via probabilistic inference," Int. J. Robotics Res., vol. 37, no. 11, pp. 1319–1340, 2018.

[2] M. Xie and F. Dellaert, "Batch and incremental kinodynamic motion planning using dynamic factor graphs," CoRR, vol. abs/2005.12514, 2020.

[3] H. Attias, "Planning by probabilistic inference," in Proceedings of the Ninth International Workshop on Artificial Intelligence and Statistics, AISTATS 2003, Key West, Florida, USA, January 3-6, 2003.

[4] M. Toussaint and C. Goerick, "A Bayesian view on motor control and planning," in From Motor Learning to Interaction Learning in Robots, ser. Studies in Computational Intelligence, O. Sigaud and J. Peters, Eds. Springer, 2010, vol. 264, pp. 227–252.

[5] K. J. Friston, J. Mattout, and J. Kilner, "Action understanding and active inference," Biol. Cybern., vol. 104, no. 1-2, pp. 137–160, 2011.

[6] S. Bari, V. Gabler, and D. Wollherr, "MS2MP: A min-sum message passing algorithm for motion planning," ICRA, 2021 [accepted].

SADAKO on the importance of recordings and labelling for AI-infused vision development


Data is gold: the importance of recordings and labelling for AI-infused vision development

A recent publication by Andrew Ng* (one of the most renowned machine learning and education pioneers) highlights the importance of data for progress in AI. As he explains: "Unlike traditional software, which is powered by code, AI systems are built using both code (including models and algorithms) and data:

AI systems = Code (model/algorithm) + Data”

While historical approaches typically tried to improve the code (either the model architecture or the algorithm), we now know that "for many practical applications, it's more effective instead to focus on improving the Data". Generating bigger and better datasets is often the most straightforward way to boost AI results, and the so-called "data-centric AI development" is gaining ground. For those who, like us at Sadako Technologies, are devoted to building neural networks for vision applications, constructing high-quality datasets in a repeatable and systematic way, so as to ensure an excellent, consistent flow of data throughout all stages of a project, is a key activity.

Our data generation process has two main steps: image acquisition and image labelling. We have taken careful care of both for the development of the vision systems in the HR-Recycler project, which need to recognize WEEE objects and their components as well as human motion and gestures. For image acquisition, we have prepared and performed the following recording campaigns (the last one is still ongoing):

– Campaign 1 (organized with CERTH at Ecoreset's premises)

Figure 1: Images from the July 2019 classification recordings in Ecoreset (left and centre camera)


– Campaign 2 (organized with CERTH at Ecoreset's premises)

Figure 2: Images from the September 2019 classification recordings in Ecoreset


– Campaign 3

Figure 3: Images from the December 2019 classification recordings in Indumetal

Figure 4: Sample images from the December 2019 Indumetal recordings. Time increases in the right-hand direction.

– Campaign 4

Figure 5: Sample images from the March 2020 recordings at Sadako’s premises.


– Campaign 5

Figure 6: Sample images from the June 2021 recordings at Indumetal’s premises.


Special attention was paid to the choice of hardware, as well as to replicating environmental conditions (background, lighting) as closely as possible to those found in operation. For the human motion detection datasets, special attention has also been given to possible gender or race bias in the data collection that could harm the neural network's operational performance.

On the labelling side, our internal labelling team, one of the most skilled and experienced image labelling teams in the waste domain, with the help of our own proprietary labelling tools, has fulfilled the task of generating multiple homogeneous, high-quality annotations for the different categories established for WEEE objects and for human motion and gestures.

Accurate recordings and excellent labelling guarantee smooth algorithm production and are critical for the system to work properly.

* https://www.deeplearning.ai/the-batch/issue-84/

ECORESET SA on embracing new challenges for resources recovery


Embracing new challenges for resources recovery


On 31 July 2020, BIANATT SA merged through absorption with ECORESET SA. The new organization provides integrated processes for the recovery of raw materials, such as ferrous and non-ferrous metals, from electronic waste and end-of-life cables, and produces alternative fuel from bulky, commercial and municipal waste. The increase in the resources available to the scheme provides more possibilities for implementing cooperative robotic solutions in the production processes. ECORESET SA houses a strong research team of three industry-experienced engineers with doctoral degrees. The team remains focused on the execution of HR-Recycler and is ready to host the initial hardware and software integration tests from June to August 2021.

INTERECYCLING on the future of WEEE recycling


For almost a year now, COVID-19 has been changing every country and the way we move.

Above all, many businesses will need to be reinforced, and vaccines are giving some hope.

Globally, many economies and global GDP (gross domestic product) have decreased in a way not seen in 50 years.

Despite that, and looking at our recycling industry, global figures point to record sales of IT (Information Technology) equipment as economies push for digitalization. The two biggest manufacturers have already published sales growth of around 30% in these items.

Facing this, plus better collection of waste thanks to restrictions that reduce illegal disposal, we see a stable future for recycling with growth expectations, where further developments and more technical solutions will be needed to perform better and maximize the added value for industry and the environment.


Ricardo Vidal

TECNALIA combines Neuroscience and Mixed Reality


"Workmates from Indumetal, Gaiker and Tecnalia interacted with a Mixed Reality environment, working with a collaborative robot in the WEEE recycling domain"

The romantic idea that emotions are born from the heart has been a curious way of expressing that thoughts and emotions are elements that coexist separately, that is, the brain and the heart seen as unconnected organs.

Today, thanks to Neuroscience (the scientific specialty dedicated to the comprehensive study of the nervous system, its functions, structure and other aspects that help explain various characteristics of behaviour and cognitive processes), theories such as "emotions are not exclusively of the heart, nor does reason reside solely in the brain" take on special relevance. What Neuroscience shows us is that cognition and emotion are closely linked: they are two sides of the same coin, and both reside in our nervous system.

A group of researchers at Tecnalia have created a laboratory dedicated to studying User Experience (UX) and Human Factors (HF), in order to take emotions into account as an innovative aspect not considered until now. With this, they intend to bring objectivity to the data in the analysis of interactions and of abstract emotional and cognitive processes.

Tecnalia's Human Factors & User Experience laboratory integrates different devices (dry and wet EEG systems, sensors to measure the galvanic skin response and heart rate, eye-tracking glasses, an indoor GPS system…) that allow accurate and non-invasive measurement of a person's psycho-physiological signals when exposed to diverse external stimuli. Thus, the UX & HF laboratory goes one step further than the usual user experience study (for example, surveys, focus groups, interviews, thinking aloud, etc.). Instead of asking users or observing them while they use a product or service to find out how the experience has been, with our laboratory technology we measure what the nervous system tells us when presenting them with different stimuli, obtaining objective data and avoiding biases derived from the subjective observation of the interviewer, the subjective assessment of the respondent, and even their own emotions.

Specifically, Tecnalia researchers have reproduced in a Virtual (Mixed) Reality environment the process of disassembling a PC tower (collaboratively with a robot that helps in certain tasks), in such a way that participants in the experiment have actually been able to "feel" that they were physically touching the PC components during the disassembly process. About 50 people from the consortium companies (Indumetal, Gaiker and Tecnalia) have gone through the experiment over two weeks. All of them have been monitored and asked to repeat the process with slight variations in the behaviour of the collaborative robot (for example, robot malfunction and robot speed).

This Mixed Reality experiment has replicated the layout of a human-robot collaborative workstation of a WEEE (Waste Electrical and Electronic Equipment) recycling plant. The participants were asked to perform the same activities as the workers in the real plant, and the virtual robots interacted with them on the same basis as in reality. During the experiments, however, the robots sometimes behaved differently, and the psychophysiological response of the participants was registered.

In order to make the experience feel as real as possible for the participants, a realistic 3D virtual scenario of a factory was created for viewing through immersive virtual reality glasses, and this virtual scenario was mixed with real tracked elements with which the participants interacted directly in a task of disassembling electronic components.

The real objects that have been tracked (with small "bubbles") and aligned with the virtual reality in real time are the worktable, the PC (specifically its top cover) that is manipulated, and a fan inside the PC.

Once we had achieved this mixed experience (VR scenario plus real tracked objects), the next challenge was to include interactions by tracking the participants' hands. For this purpose, different technological approaches have been tested and compared.

In any case, and after several tests, the experiment with the participants was carried out with vision-based tracking of the hands and fingers, supported by a wrist tracker.

With this experiment, the worlds of Virtual/Mixed Reality and Neuroscience have come together to ensure that, in the future, our "intelligent" robot workmates will be able to adapt to our emotional and cognitive state, thus achieving a more fluid interaction between humans and their fellow machines.

GAIKER on assessing the environmental and social benefits of the pilot studies


GAIKER will assess the environmental and social benefits of the pilot studies carried out in the HR-Recycler project, applying the life cycle perspective

GAIKER continues defining the scenarios of the pilot demonstrations in which the performance of the solutions developed within the HR-Recycler project will be tested and validated in real operation environments. The implementation of technical changes will introduce modifications in the existing recycling operations that will need to be evaluated. The study will focus on assessing the environmental and social impacts of the changes that the HR-Recycler project introduces in the WEEE recycling process, compared to current practice. Therefore, the project team is gathering information on the current processes and will compare it with the information associated with the newly developed human-robot collaborative processes.

The evaluation will be carried out using the life cycle assessment (LCA) methodology, to broaden the scope of the assessment and conduct a more holistic analysis. This holistic approach ("from cradle to grave") is what allows answering the question of how a certain product has integrally interacted with the environment. The environmental LCA methodology focuses primarily on assessing the environmental impacts associated with a product or service throughout its whole life, while social LCA (s-LCA) is a method that can be used to assess the social and sociological aspects of products and services, and their actual and potential positive as well as negative impacts along their life cycle.

Within the HR-Recycler project, to perform a fair comparative evaluation, it is necessary to include the processes directly involved in the pilots, the current processes, and the relevant upstream and downstream processes. This is particularly important for downstream processes, as the new processes developed in the project should improve productivity and efficiency while increasing recycling ratios and reducing the amount of waste generated in WEEE treatment.

ISO 14040 and ISO 14044 will be the reference methodology to follow in the LCAs to be performed. In the case of the social LCA, the assessment team will also take into account the new Guidelines for Social Life Cycle Assessment of Products and Organizations, developed by UNEP/SETAC in 2020, which are based on the abovementioned standards.

On the other hand, the function of the system to be assessed is to recover and recycle as many materials, components and products as possible, fulfilling the mandatory requirements of the legislation. Accordingly, the functional unit (FU) has been defined as 1.0 t of products recovered from WEEE and available to be used as secondary raw material or reused as a component or product.

Regarding the limits of the system, the operations of classification, dismantling and sorting of WEEE in each Use Case (UC), as well as the end of life (EoL) of the waste generated in the abovementioned operations, have been included.

INDUMETAL on Human-Robot collaboration improving WEEE handling


Human-Robot collaboration improving MW and PC tower handling for their dismantling

Microwave ovens and computer towers are the two cases studied at Indumetal Recycling within the HR-Recycler project, which seeks both the best process technique and the prevention of potential injuries in the disassembly and decontamination of PC towers, microwave ovens, emergency lamps, and LED and LCD screens.

Image 1. Dismantled microwave oven


Removing the capacitor from a microwave oven before carrying out any other management operation on the device is a legal obligation, given that capacitors contain highly toxic substances. Due to the variety of microwave oven designs on the market, this extraction is currently done exclusively by hand, and to perform the disassembly quickly and effectively the operator needs extensive experience.

In order to remove the housing that covers the microwave, it is usually sufficient to unscrew a limited number of elements. However, once the housing is unscrewed, it is not an easy task to access the interior of the oven, and therefore the help of a pick or a lever is needed to remove it.

Image 2. Dismantled PC tower


In the case of PC towers, it is mandatory to remove the battery that can be found inside, as well as any PCB larger than ten square centimetres. To access these components, it is necessary to remove the external metallic housing. Just like with microwave ovens, there is a wide variety of PC tower designs, so this manipulation is also currently done manually.

Additionally, although it is not compulsory, Indumetal's operators remove the hard drive and the disk drive from PC towers, since these components contain high-value materials that can be recovered if they are treated separately.

In these types of processes, the operator must move the equipment repeatedly to be able to access the inner components. Robots can collaborate in this heavier and more dangerous task, reducing repetitive strain and accidental injuries in humans. However, these manoeuvres demonstrate that the experience and knowledge of the operator are key to detecting the different components of WEEE. The dismantling processes currently studied in HR-Recycler are clear examples where hybridization between human work and the mechanical work of a robot would offer numerous benefits.

ROBOTNIK explains aspects involved in the safety signals


Industrial robots have come front and center on the international stage as they’ve become widespread in the industrial sector. As they’ve become more powerful, more advanced and more productive, the need for robot safety has increased.

There are a number of ways manufacturers can introduce safety measures in their automated operations, and the type and complexity of these measures will vary with the robotic application. With the aim of making AGVs safer, there are certain safety rules and standards that these collaborative robots must comply with; in Europe they are found in EN ISO 3691-4:2020 "Industrial trucks — Safety requirements and verification — Part 4: Driverless industrial trucks and their systems" and in ISO 12100:2010, clause 6.4.3.

For Robotnik, as an experienced robot manufacturer, and within the collaborative environment of the HR-Recycler project, this aspect is especially important, since humans and robots will be working side by side. The proposed solution for routing materials inside a factory has to operate in a safe manner. In this case, the robots designed are the RB-Kairos (mobile robotic manipulator) and the RB-Ares (pallet truck), and as AGVs their main obligations are to show the intention to move, to elevate or to manipulate. To ensure correct operation within the complex framework of this project, Robotnik has equipped its robots with sensors and signallers that allow the robot to proceed safely and show its intentions in advance.

This post aims to give the reader a brief description of what the standard requires, why it matters, and how all of its premises will be met.

First of all, what does the normative include? The standards on warning systems say the following (a minimal sketch of how the timing rules might be encoded follows the list):

  1. When any movement begins after a stop condition of more than 10 seconds, a visible or acoustic warning signal will be activated for at least 2 seconds before the start of the movement.
  2. A visible or acoustic warning signal will be activated during any movement.
  3. If the human detection means are active, the signal will be different.
  4. When robots deviate from a straight path, a visible indication of the direction to be taken will be given before the direction changes, when the robot is driving autonomously.
  5. When the lift is active, there must be special signage.
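As a rough illustration of how the timing rules above could be encoded, the sketch below implements them in plain Python. The class and method names are our own invention and the real Robotnik nodes certainly differ; only the thresholds come straight from the rules.

```python
import time

STOP_BEFORE_WARNING_S = 10.0  # rule 1: stops longer than 10 s ...
PRE_MOTION_WARNING_S = 2.0    # ... require >= 2 s of warning before moving

class MotionSignalRules:
    """Hypothetical gatekeeper that a signal manager could consult."""

    def __init__(self):
        self.stopped_since = time.monotonic()
        self.warning_since = None

    def may_start_motion(self, human_detected: bool) -> bool:
        """Poll before moving; returns True once motion is allowed."""
        now = time.monotonic()
        if now - self.stopped_since <= STOP_BEFORE_WARNING_S:
            return True               # short stop: no pre-motion warning needed
        if self.warning_since is None:
            self.warning_since = now
            # Rule 3: use a distinct signal while human detection is active;
            # the signal then stays on during the motion itself (rule 2).
            self.activate_signal(distinct=human_detected)
            return False
        return now - self.warning_since >= PRE_MOTION_WARNING_S  # rule 1

    def notify_stopped(self):
        self.stopped_since = time.monotonic()
        self.warning_since = None

    def activate_signal(self, distinct: bool):
        # Placeholder: here the real node would drive the leds_driver or
        # acoustic_safety_driver mentioned below.
        print("warning on", "(human detected)" if distinct else "")
```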

The proposed solution is a two-step piece of software that manages the signals of the robot, explained after the diagram (the yellow cells):

The robot_local_control is a manager node: it holds information about the status of the whole robot, that is, the status of the elevator, the active goal, whether the mission has ended, etc. On the right side sits a group of nodes that manage the movement of the robot with a level of priority:

  • robotnik_pad_node: the worker uses a PS4 pad to control the robot, and this node transmits the orders (non-autonomous mode).
  • Path planning nodes: such as move_base, which control the robot in what we call autonomous mode.

Robotnik has installed on its AGVs two ways to alert facility users, acoustic devices and light indicators, through the acoustic_safety_driver and leds_driver.

As you can see, two steps link the top and bottom parts: a node that transforms the movement into signals showing the robot's intention, and another that orchestrates both signal types and manages the requirements of the normative.

The turn_signaling_controller aims to satisfy the first and fourth requirements of the normative, depending on the robot mode (autonomous or non-autonomous).

In non-autonomous mode, as the norm says, the motion depends on appropriately authorised and trained personnel, so it is enough to show that the robot is moving by reading the movement command and checking the velocity applied.

In autonomous mode, the robot navigates to a goal point along a path calculated by the planner; furthermore, it steers the AGV to avoid obstacles dynamically, and for this reason it is important to alert workers at every moment. The pipeline goes as follows:

This is a very brief description of the function: the controller keeps the current plan in mind and recalculates whenever the planner does, so as to show the most up-to-date prediction of motion.
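A minimal sketch of this idea, under our own assumptions (plain (x, y) waypoints instead of ROS nav_msgs/Path messages, and invented thresholds): look a short distance ahead along the freshly recalculated plan, measure the heading change, and map it to a left/right indication.

```python
import math

TURN_THRESHOLD = math.radians(20)  # assumed: heading change worth signalling
LOOKAHEAD = 5                      # assumed: waypoints to look ahead

def turn_indication(path):
    """path: list of (x, y) waypoints with the robot at path[0]."""
    if len(path) < 3:
        return "none"
    ahead = min(LOOKAHEAD, len(path) - 1)
    h0 = math.atan2(path[1][1] - path[0][1], path[1][0] - path[0][0])
    h1 = math.atan2(path[ahead][1] - path[ahead - 1][1],
                    path[ahead][0] - path[ahead - 1][0])
    dh = math.atan2(math.sin(h1 - h0), math.cos(h1 - h0))  # wrap to [-pi, pi]
    if dh > TURN_THRESHOLD:
        return "left"
    if dh < -TURN_THRESHOLD:
        return "right"
    return "none"

# A path bending to the left: the controller would blink the left indicators.
print(turn_indication([(0, 0), (1, 0), (2, 0.2), (3, 1), (4, 2), (5, 3)]))
```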

Last but not least, the robot_signal_manager aims to satisfy the remaining requirements. Since it has access to the robot status, it shows a light or acoustic signal 2 seconds before motion and gives priority to the emergency signals (consistent with the behaviour of the robot: red signals mean that the robot will stop); the signals that are not mutually exclusive are shown using beacons or acoustic signals.

The occupied zone is one of the non-exclusive signals: the robots have extra beacons that blink red when something is in the protective zone (close to the robot's intended motion, inside the critical zone) and yellow when something is in the warning zone (near the protective zone).

Summarizing, safety is not only about stopping the robot or avoiding a crash when human-robot collaboration takes place. With the development of these nodes, Robotnik aims not only to decrease the probability of accidents and comply with the safety ISO premises, but also to help workers feel more comfortable with the AGV's decisions and bring human-robot collaboration closer, by showing clear signals about how the robot will behave.

CERTH explains the visual affordances concept


“ The affordances of the environment are what it offers the animal, what it provides or furnishes… It implies the complementarity of the animal and the environment.” – James J. Gibson, 1979

Every object in our world has its own discrete characteristics that derive from its visual properties and physical attributes. These features are effective for recognizing objects and classifying them into different categories, and thus they are widely used by vision recognition systems in the research community. However, these properties are unable to indicate how an object can be used by a human: visual and physical properties cannot provide any clue about the set of potential actions that can be performed in a human-object interaction.

Affordances describe the possible set of actions that an environment allows an actor to perform [2]. Thus, unlike the aforementioned object attributes, which paint a picture of the object as an independent entity, affordances are capable of implying functional interactions of object parts with humans. In the context of robotic vision, taking object affordances into account is vitally important in order to create efficient autonomous robots that are able to interact with objects and assist humans in daily tasks.

Affordances can be used in a wide variety of applications. Among others, affordances are useful to anticipate and predict future actions, because they represent the set of possible actions that may be performed by a human being. Additionally, by defining a set of possible and potential actions, affordances provide beneficial clues not only for efficient activity recognition but also for validating the functionality of objects. Last but not least, affordances can be seen as an intuition about objects' value and significance, enhancing scene understanding and traditional object recognition approaches.
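As a toy illustration (our own, not from [1] or [2]) of how affordances can support action anticipation: if each recognized object carries the set of actions it affords, an activity recognizer's hypotheses can be filtered against that set.

```python
# Hypothetical affordance table for a WEEE disassembly scene; the object
# names and action sets are invented for illustration.
AFFORDANCES = {
    "screwdriver": {"grasp", "unscrew"},
    "pc_cover":    {"grasp", "lift", "open"},
    "capacitor":   {"grasp", "extract"},
}

def plausible_actions(detected_object, predicted_actions):
    """Keep only the predicted actions that the detected object affords."""
    afforded = AFFORDANCES.get(detected_object, set())
    return [a for a in predicted_actions if a in afforded]

print(plausible_actions("pc_cover", ["open", "unscrew", "lift"]))
# -> ['open', 'lift']: 'unscrew' is not afforded by a PC cover
```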

In conclusion, affordances are powerful. They provide the details needed to make computer vision systems able to imitate the human object recognition system. Moreover, affordances provide a very effective and unique combination of features that seems able to enhance almost every computer vision system.

Illustration 1: Affordances Examples [1]


[1] X. Zhao, Y. Cao, and Y. Kang, "Object affordance detection with relationship-aware network," Neural Computing and Applications, 2020.

[2] M. Hassanin, S. Khan, and M. Tahtali, "Visual affordance and function understanding: a survey," 2021.

COMAU presents a new paradigm in collaborative robotics


A new paradigm in collaborative robotics

In March 2021, Comau introduced to the market its Racer-5 COBOT, a new paradigm in collaborative robotics which meets the growing demand for fast, cost-effective cobots that can be used in restricted spaces and in different application areas. Countering the belief that collaborative robots are slow, Racer-5 COBOT is a 6-axis articulated robot that can work at industrial speeds of up to 6 m/s. With a 5 kg payload and 809 mm reach, it ensures optimal industrial efficiency while granting the added benefit of safe, barrier-free operations. Furthermore, the cobot can instantly switch from collaborative mode to full speed when operators are not around, letting its 0.03 mm repeatability and advanced movement fluidity deliver unmatched production rates.

Racer-5 COBOT enables system integrators and end users to automate even the most sophisticated manufacturing processes without sacrificing speed, precision or collaborative intelligence. With this powerful industrial robot operating in dual modes, customers are able to install a single, high-performance solution rather than having to deploy two distinct robots. With advanced safety features fully certified by TÜV Süd, an independent and globally recognized certification company, the cobot can be used within any high-performance line without the need for protective barriers, which effectively reduces safety costs and floorspace requirements.

Racer-5 COBOT also features integrated LED lighting to provide real-time confirmation of the workcell status. Finally, electrical and air connectors are located on the forearm to grant greater agility and minimize the risk of damage. All this enables Racer-5 COBOT to ensure higher production quality, better performance, faster cycle times and reduced capital expenditures.

The new Racer-5 COBOT delivers the speed and precision the small payload collaborative robotics market was missing, adding advanced safety features to the standard Racer-5 industrial robot and obtaining a fast, reliable and user-friendly cobot that can be used in any situation where cycle times and accuracy are paramount.

Made entirely by Comau (Turin, Italy), Racer-5 COBOT has a rigid construction that facilitates higher precision and repeatability year after year, making it particularly suitable for assembly, material handling, machine tending, dispensing and pick-and-place applications within the automotive, electrification and general industry sectors. In addition, the compact cobot can be easily transported and installed almost anywhere, helping users optimize their processes and protect their investment.

The HR-Recycler project contributed to the development of this new product, which can therefore be considered an exploitable result of the project itself. It will be applied, in particular, in the WEEE disassembly scenario, testing and validating the collaborative features with the integration of specific disassembly tools, such as grippers, industrial screwdrivers and grinders.