
CS GROUP elaborate on the virtual training system they have developed for the WEEE recycling industry


The ambitious H2020 HR-Recycler project answers the need to create a new human-robot collaborative environment, where WEEE can be disassembled in a safe manner. To achieve efficient collaboration in this field, 12 European partners have joined forces and are working on the various elements required to create the hybrid human-robot recycling plant for electrical and electronic equipment of the future.

Within HR-Recycler, CS GROUP – France (formerly DIGINEXT) focuses on the development of an effective virtual training system for the WEEE recycling industry. The dedicated tool (Procedure Editor) enables the creation of WEEE disassembly training experiences, based on predefined procedures (see image below), as described in a previous blog post.

Once the procedures are created, they can either be applied to the relevant object or exported in a format usable for VR training of robots and human workers. Taking this into consideration, CS GROUP has implemented an S1000D export, described in more detail below.

S1000D is an international specification for the procurement and production of technical publications. It is an XML specification for preparing, managing, and publishing technical information for a product.

S1000D is not a one-size-fits-all solution – it is a many-sizes-fit-many solution; through a combination of business rules, selectable elements and customizable values the standard is tailored to meet the project requirements.

Source: Wikipedia

This type of export can be performed by anyone and results, within HR-Recycler, in two different files, as illustrated in the figure below. The content of each file is as follows:

  • File 1: S1000D XML – containing the required persons, required parts and the procedure.
  • File 2: 3D data XML – containing 3D file URLs, sub-model URLs for each part, and animation data.

A few specific examples of the level of information provided for each element are included below.
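To make the structure of the export more concrete, the following Python sketch builds a heavily simplified, illustrative version of the two files. The element and attribute names (e.g. reqPersons, proceduralStep, toolRef) are loosely inspired by S1000D terminology but are assumptions for illustration only; they do not reproduce the exact schema or business rules used by the HR-Recycler Procedure Editor.

```python
# Illustrative sketch only: element names approximate S1000D concepts
# (required persons, steps, tool references) and the companion 3D-data file;
# they are not the exact schema used by the HR-Recycler Procedure Editor.
import xml.etree.ElementTree as ET

def build_s1000d_like_procedure():
    dm = ET.Element("dmodule")                       # data module root
    persons = ET.SubElement(dm, "reqPersons")
    ET.SubElement(persons, "person", personCategoryCode="Operator-1")

    procedure = ET.SubElement(dm, "procedure")
    step = ET.SubElement(procedure, "proceduralStep")
    ET.SubElement(step, "action").text = "unscrew"   # action to perform
    ET.SubElement(step, "toolRef", toolId="screwdriver-01")
    ET.SubElement(step, "videoRef", sequenceId="anim-step-01")
    return ET.ElementTree(dm)

def build_3d_data():
    root = ET.Element("threeDData")
    part = ET.SubElement(root, "part", partId="cover-01")
    ET.SubElement(part, "modelUrl").text = "https://example.org/models/cover-01.glb"
    ET.SubElement(part, "animation").text = "anim-step-01"
    return ET.ElementTree(root)

if __name__ == "__main__":
    build_s1000d_like_procedure().write("procedure_s1000d.xml")
    build_3d_data().write("procedure_3d_data.xml")
```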

The first image shows the list of people required to perform a certain procedure, along with their person category code. Consequently, the exported file provides clear information on the type of personnel and the competences required to perform a certain task.

Furthermore, in the S1000D Export file it is possible to see the sequence of steps to be performed in order to complete a specific procedure. For example, it shows the:

  • type of personnel/operator required to perform the procedure (highlighted in blue),
  • action to be performed, e.g. “unscrew” (highlighted in red),
  • reference of the tool and the associated video sequence (highlighted in green).

For training purposes, it is useful to be able to visualise the different steps of the procedures, which might be complex. The figure below shows the list of animations to be played for each step of the procedure.

This approach has made it possible to easily share information on procedures with relevant partners and enable the VR training of robots and human workers.

CERTH elaborate on AR-Enabled Safety and Human-Robot Collaboration within HR-Recycler


AR-Enabled Safety and Human-Robot Collaboration in HR-Recycler

CERTH has developed an AR-enabled system to monitor and communicate with robots in the WEEE recycling environment, through the use of optical and projective AR techniques. Such a system is essential in the harsh and noisy industrial WEEE recycling environment, where noise makes vocal communication difficult, and where the scale and clutter of the environment make it hard for workers and supervising personnel to maintain an overview of the overall process.

The AR system introduced helps overcome the above challenges and introduces additional safety aspects to the overall process. At the most fundamental level, projective AR is used to indicate to workers the workspaces and intentions of the robots. For instance, the Autonomous Ground Vehicles (AGVs) in HR-Recycler include projectors that highlight on the ground the immediate trajectory of the robot, and luminous strips that indicate the intention of the robot to move towards the left or the right. Similarly, the workstations are equipped with projectors that indicate the workspace of the collaborative robots, as well as current targets for the robots to drop off components extracted from the devices. Projective AR is a ubiquitous technique that does not require the user to wear any equipment and provides an additional degree of safety on the shop floor.

In addition to projective AR, HR-Recycler has developed an optical AR system, where advanced robot supervision and communication is made possible through the use of an AR Head-Mounted Display (HMD). Through the HMD, the user is able to observe the current state and plan of the robot, or to send pick-and-place goals. The navigation plan and the planned arm trajectory are visualized in an intuitive manner, overlaid on top of the actual robot, allowing the user to adjust their actions in accordance with the current robot plan. In addition, through registration of the HMD with the robot reference frame, it is possible to detect whether the user has entered the workspace of the robot; in that case the robot is paused and a warning is presented to the user.
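As a rough illustration of the workspace-monitoring idea described above, the sketch below checks whether a user position (e.g. the tracked HMD pose expressed in the robot's reference frame) falls inside a simple cylindrical robot workspace and, if so, requests a pause. The names, thresholds and workspace shape are assumptions for illustration only; this is not the actual HR-Recycler safety implementation.

```python
# Minimal sketch, not the actual HR-Recycler safety logic: the HMD pose is
# assumed to already be registered into the robot base frame, and the robot
# workspace is approximated by a cylinder around the robot base.
import math
from dataclasses import dataclass

@dataclass
class Cylinder:
    cx: float      # workspace centre (robot base), metres
    cy: float
    radius: float  # horizontal reach, metres
    height: float  # vertical extent, metres

def user_in_workspace(user_xyz, ws: Cylinder) -> bool:
    x, y, z = user_xyz
    horizontal = math.hypot(x - ws.cx, y - ws.cy) <= ws.radius
    vertical = 0.0 <= z <= ws.height
    return horizontal and vertical

def safety_step(user_xyz, ws: Cylinder) -> str:
    if user_in_workspace(user_xyz, ws):
        # In the real system this would pause the robot and show an HMD warning.
        return "PAUSE_ROBOT_AND_WARN_USER"
    return "CONTINUE"

if __name__ == "__main__":
    workspace = Cylinder(cx=0.0, cy=0.0, radius=1.2, height=2.0)
    print(safety_step((0.5, 0.3, 1.1), workspace))   # inside -> pause
    print(safety_step((2.5, 0.0, 1.1), workspace))   # outside -> continue
```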

Overall, the developed AR system increases safety on the shop floor and improves the overview available to workers and supervisors, thereby improving overall process efficiency.

TECNALIA describe the challenge of measuring emotions


The challenge of measuring emotions

Action-reaction, but how to measure it if it is an emotion?

The central nervous system plays a fundamental role in detecting and understanding emotions, cognitive processes and a series of other psychosocial constructs such as trust. The responses of the nervous system are relatively specific and show different patterns of activation depending on the situation and the emotional state. Any psychological process entails an emotional experience of greater or lesser intensity and of different quality, which is why the emotional reaction is omnipresent in every psychological process. Likewise, every emotion involves a multidimensional experience with at least three response systems: cognitive (subjective), behavioural (expressive) and physiological (adaptive). Analysing, for example, the different dimensions of anger, it is possible to distinguish between the cognitive process (feeling angry), the behavioural (frowning) and the physiological (heart rate variation, among others).

Based on this premise, there are a whole series of psychophysiological signals that have been used to identify the various emotional processes:

  • Electrical activity of the brain: By means of an electroencephalogram (EEG) it is possible to capture the cortical activity of the brain using electrodes in contact with the scalp. These signals offer a lot of information, although their analysis can be very complex, since each component (electrode) contains temporal information (its amplitude over time), spectral information (the frequency of the wave) and topographic information (its location over the brain). Although the acquisition of this signal traditionally required precise and specialized instruments, today the necessary equipment is much more accessible.
  • Cerebral blood flow: Functional magnetic resonance imaging (fMRI) is a non-invasive technique that allows the measurement of brain activity. It is based on detecting areas of the brain with a higher concentration of blood flow, under the premise that these specific areas have greater activity compared to others with a lower flow. Although it offers great advantages at the research level, it requires bulky and very expensive equipment.
  • Variation in heart rate: It is one of the main physiological signals linked to the feeling of security or danger. At the analysis level, the variation of the period between beats (the inter-beat interval) is usually considered in the frequency domain, with oscillations below a threshold (classically 0.15 Hz) classed as low frequency and those above it as high frequency (a small computational sketch follows after this list). The main techniques to obtain this metric are the electrocardiogram (ECG), which records the electrical activity of the heart, and photoplethysmography (PPG), which uses a controlled light beam to estimate blood flow from the amount of reflected light.
  • Galvanic skin response (GSR): Also called electrodermal activity (EDA). It reflects the electrical conductance of the skin, measured as the potential difference between two electrodes, generally located on the phalanges of two adjacent fingers of the non-dominant hand. The arousal of the skin, normally produced in stressful situations, causes a dilation of the pores which, in turn, causes a decrease in the electrical resistance of the skin. It is necessary to distinguish its tonic or basal component from its phasic component. The former undergoes slow variations and often reflects unwanted experimental changes (for example, changes in the temperature conditions of the experiment), while the latter reflects rapid changes that are linked to the study stimulus.
  • Muscle activity: Muscle activity can be measured using a pair of electrodes aligned with the muscle's kinematic axis (the direction in which it expands and contracts). The technique can be used in a variety of situations. For example, electrodes can be positioned on both sides of the eyes (either vertically or horizontally) to detect eye movements. This specific test, called electrooculography (EOG), offers very useful information when cleaning the electroencephalographic signal of possible unwanted artifacts. Other generic applications include the detection of involuntary movements in response to certain stimuli using electromyography (EMG).
  • Ocular behaviour: Ocular behaviour offers very valuable information on certain aspects, both physiological (for example, dilation and contraction of the pupil, blinking frequency, etc.) and behavioural (for example, drift and gaze fixation). Depending on whether the experimental process is carried out on a fixed platform (such as a computer) or in free movement (with the participant moving around), this information can be extracted using fixed eye-tracking instruments (attached to the visualization system) or mobile ones (wearable systems in the form of glasses).
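As an illustration of the frequency-based heart rate analysis mentioned in the list above, the following sketch derives inter-beat (RR) intervals from R-peak times, resamples them evenly, and integrates spectral power in the conventional LF (0.04–0.15 Hz) and HF (0.15–0.4 Hz) bands. The signal values and band limits are textbook conventions, not data or code from the project.

```python
# Illustrative HRV sketch (not project code): LF/HF power from R-peak times.
import numpy as np
from scipy.signal import welch

def lf_hf_power(r_peak_times_s, fs_resample=4.0):
    """Return (LF, HF) spectral power from an array of R-peak times in seconds."""
    rr = np.diff(r_peak_times_s)                 # inter-beat (RR) intervals, s
    t_rr = r_peak_times_s[1:]                    # time stamp of each interval
    # The RR series is unevenly sampled; interpolate onto a uniform grid.
    t_uniform = np.arange(t_rr[0], t_rr[-1], 1.0 / fs_resample)
    rr_uniform = np.interp(t_uniform, t_rr, rr)
    freqs, psd = welch(rr_uniform - rr_uniform.mean(), fs=fs_resample)
    lf_band = (freqs >= 0.04) & (freqs < 0.15)
    hf_band = (freqs >= 0.15) & (freqs < 0.40)
    lf = np.trapz(psd[lf_band], freqs[lf_band])
    hf = np.trapz(psd[hf_band], freqs[hf_band])
    return lf, hf

if __name__ == "__main__":
    # Synthetic example: ~60 bpm with a slow ~0.1 Hz (LF) modulation of the interval.
    beats = np.cumsum(1.0 + 0.05 * np.sin(2 * np.pi * 0.1 * np.arange(300)))
    print(lf_hf_power(beats))
```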

In short, analysing and measuring our nervous system to study or even understand our reactions and emotions to certain stimuli or situations is much more than a technological challenge. Designing and correctly using devices that analyse our body and its signals is a challenge for science, for technology and for their possible multisectoral applications, where the basis is knowing the individual and their possible behaviours.

European Union (EU) challenges on artificial intelligence (AI) impact assessment



Nikolaos Ioannidis[1]

Vrije Universiteit Brussel (VUB)

Introduction

The European Commission’s (EC) Proposal for a Regulation laying down harmonized rules on artificial intelligence (AI Act) has drawn extensive attention as being the ‘first ever legal framework on AI’, resuming last year’s discussions on the AI White Paper. The aim of the Proposal and the mechanisms it encompasses is the development of an ecosystem of trust through the establishment of a human-centric legal framework for trustworthy AI.

The ecosystem aims to establish a framework for a legally and ethically trustworthy AI, promoting socially valuable AI development, ensuring and respecting fundamental rights, the rule of law, and democracy, allocating and distributing responsibility for wrongs and harms and ensuring meaningful transparency and accountability. Any AI application, including the HR-Recycler project and its associated technology, is expected to be subject to specific legal requirements set by this framework.

For individuals to trust that AI-based products are developed and used in a safe and compliant manner, and for businesses to embrace and invest in such technologies, a series of novelties has been introduced in the proposed Act. These novelties include, but are not limited to, i) the ranking of AI systems depending on the level of risk stemming from them (unacceptable, high, limited, and minimal), as well as ii) the legal requirements for high-risk AI systems.

Risk-based approach

AI applications are categorized based on the estimated risk that they may generate. Accordingly, there are various levels of risk – unacceptable, high, limited, minimal. AI applications of unacceptable risk – and thus prohibited – comprise those which, for instance, deploy subliminal techniques beyond a person’s consciousness in order to materially distort a person’s behaviour, causing physical or psychological harm, or systems whose function is the evaluation or classification of the trustworthiness of natural persons based on social behaviour or known or predicted personal or personality characteristics, leading to detrimental or unfavourable treatment, inter alia.

High-risk AI applications are the most problematic category because they are permitted to be deployed, but only under specific circumstances. In this category fall applications which pertain to biometric identification and categorization of natural persons, management and operation of critical infrastructure, education and vocational training, employment, workers management and access to self-employment, access to and enjoyment of essential private services and public services and benefits, law enforcement, migration, asylum and border control management, and administration of justice and democratic processes. The research of HR-Recycler could fall under the scope of the safety-critical domain, in which safety components and machinery are used. For these, the major subsequent obligation is the conformity assessment procedure.

Conformity assessment

The conformity assessment procedure for high-risk AI applications is tied with several legal requirements, which can be summarized as follows: i) introduction of a risk management system (a step forward compared to the data protection impact assessment process (DPIA)), ii) setting up a data governance framework (training, validation and testing data sets), iii) keeping technical documentation (ex-ante and continuous), iv) ensuring record keeping (automatic recording of events – ‘logs’), v) enabling transparency and provision of information (interpretation of the system’s output), vi) ensuring human oversight (effectively overseen by natural persons) and vii) guaranteeing accuracy, robustness and cybersecurity (consistent performance throughout the AI lifecycle).

Challenges and questionable areas

However, it is still not clear to what extent and how to carry out the ex-ante conformity assessment, which brings novel challenges to the fore; the proposal itself generates multiple points of debate regarding its applicability and the interpretation of its provisions, some of which are listed below:

  • Definition of AI (being incredibly broad, comprising virtually all computational techniques)
  • Complex accountability framework (introduction of new stakeholders and roles, cf. GDPR)
  • Legal requirements (clarification on data governance, transparency and human oversight)
  • Risk categorization (fuzzy and simplistic, lack of guidance on necessity and proportionality)
  • Types of harms protected (excluded financial, economic, cultural, societal harms etc.)
  • Manipulative or subliminal AI (being an evolutive notion)
  • Biometric categorization and emotion recognition systems (disputed and debatable impact)
  • Self-assessment regime (lack of legal certainty and effective enforcement)
  • Outsourcing the discourse on fundamental rights (large discretionary power for private actors)
  • Independence of private actors (in need of strengthened ex-ante controls)
  • Lack of established methodology (a new kind of impact assessment, cf. DPIA)
  • Checklist attitude (binary responses to questionnaires)
  • Technocratic approach towards fundamental rights (against their spirit)
  • Standards-setting (role of incumbent organizations such as CEN / CENELEC)
  • Relation with other laws (interplay with the GDPR and LED?)
  • Multidisciplinarity (miscommunication among internal stakeholders)
  • External stakeholder participation (insufficient engagement of them)
  • Societal acceptance of AI applications (scepticism, mistrust, disbelief)

Given that the AI Act proposal is expected to go through a long negotiation phase, in which the European Commission, the European Parliament and the Council will articulate their opinions and reservations, along with numerous feedback submissions by research and policy actors, the final text may differ from the current formulation, hopefully addressing and clarifying the majority of the legal gaps identified so far.

[1] E-mail: Nikolaos.Ioannidis@vub.be.

“Assembly versus Disassembly” explained by partner INT


Assembly versus Disassembly

The evolution of electrical and electronic equipment over the last decades has been exponential, making the recycling of this equipment a challenge for WEEE recycling companies such as Interecycling.

Developing machines and automation to support the recycling processes is itself challenging: by the time suitable machinery for recycling a given type of electronic equipment has been developed, that equipment is already reaching the end of its useful life. In other words, the development of “new” EEE moves faster than the automation capacity for recycling it, leaving WEEE recycling companies tied to automation processes and machinery that become obsolete very quickly.

For example, WEEE recyclers are required by legislation to remove certain hazardous components, such as capacitors, liquid crystals, mercury lamps, etc., from electronic equipment before it can enter the recycling process. This is imperative for all recycling companies operating in the E.U., so it is relevant to consider that part of the solution to this problem lies with EEE producers developing electronic equipment that allows an easier removal of those hazardous components at an early stage of the disassembly and dismantling process.

To date there is little awareness and application of so-called “ecodesign” in the sense of EEE producers placing hazardous components in a “standard” way that allows easy access to them at an early stage of the dismantling process. Changing this dynamic would give WEEE recyclers more efficiency and effectiveness in their processes by allowing the removal of these hazardous components at an early stage of the disassembly and/or dismantling process.

In the absence of such upstream changes by EEE companies, WEEE recycling companies face the challenge of removing hazardous components from the equipment at a downstream stage of their processes, in an almost empirical way, due to the plurality of makes and models they recycle, since there are no “rules” defining where these hazardous components are located.

Given this reality, reinforcing the logic of “ecodesign” for companies producing EEE, designing equipment for easier dismantling and placing hazardous components in a standard way, would allow those hazardous components to be removed from EEE in an initial phase rather than in a downstream phase of the recycling process. In this way, the mandatory removal of hazardous components and the valorisation of materials could be approached in a more effective, concrete and viable manner, with environmental and productivity improvements for the recycling companies and for society in general.

From the perspective of a recycling company, we launch this challenge because we believe the problem can be tackled with “upstream” help: E.U. legislation for the companies that produce EEE, leading to a solution based on partnership and commitment between the companies that produce EEE and those that recycle WEEE. Both sides would work simultaneously, since the “upstream” assembly of EEE can and should work in partnership with the “downstream” disassembly performed by WEEE recyclers.

This is our challenge to European legislators: to make a success of this commitment to a better world, in which Europe is a pioneer and leads by example.

 

References:

  • https://eur-lex.europa.eu/
  • DIRECTIVE 2006/66/EC OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL;
  • DIRECTIVE 2002/95/EC OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL;
  • DIRECTIVE (EU) 2018/851 OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL;
  • DIRECTIVE 2012/19/EU OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL;
  • COMMISSION IMPLEMENTING REGULATION (EU) 2017/699;
  • COMMISSION REGULATION (EU) 2019/2021.

Analysis of the WEEE management scheme by GAIKER


GAIKER continues the analysis of the WEEE management scheme to detect which human-robot collaboration actions can be most relevant for saving time and reducing workers’ effort

 

GAIKER is completing additional analysis of the WEEE management scheme by studying the details associated with the classification, dismantling and sorting steps involved. Every activity is being described based on its specific characteristics, such as the time necessary to pick waste equipment from piles and place it in classified batches, or the tools required to release joining systems and dismantle selected parts and components. The primary information provided by the end-users, related to aspects such as overall material flows inside the plants or the specific disassembly sequences of each equipment type, is being processed to detect which activities are most relevant to convert from manual to human-robot collaborative. The resulting data will be used later in the project to tune the HR-RECYCLER approach accordingly, in order to maximize the benefits achieved from the implementation of human-robot collaboration.

The material flow analysis confirmed that 65-75 % of the WEEE arriving at the plants needs selective treatment. It was also found that, depending on the characteristics of each unit, the layout of the plant and the technologies it incorporates, a person can, in one hour of work, classify 200-500 kg of appliances and generate 80-400 kg of decontaminated devices and separated parts.

The study of dismantling sequences, which mainly focus on the extraction of potentially hazardous components and the recovery of valuable parts, showed that, in all cases, removal of the external covers demands the unscrewing of joining systems and the grasping of the liberated elements to be sorted. The release of internal components or parts, on the other hand, can involve diverse actions such as gently detaching them from connectors (mercury lamps in flat panel displays and emergency lighting devices), cutting cables (capacitors in microwave ovens) or vigorously extracting batteries (PC towers).

After completing the analysis, the following have been identified as priorities: the use of the pallet truck and the robotic arm in the classification area, the development of the unscrewing action using the collaborative robot in the dismantling area, and the use of the AGV with a built-in robotic arm in the sorting area.

Additionally, the screening of several videos showing the currently applied procedures allowed the calculation of processing times for each of the dismantling steps, with the following results:

  • Grab a tool: 1.1 ± 0.3 s (n = 13)
  • Release a tool: 1.0 ± 0.2 s (n = 12)
  • Extract the target components: 14.2 ± 13.8 s (n = 9)
  • Loosen a screw with an automatic screwdriver: 2.9 ± 0.8 s (n = 21)
  • Cut connecting cables: 3.9 ± 2.1 s (n = 4)
  • Remove a housing with no screws: 4.6 ± 2.0 s (n = 9)

Figure 1. Selected sequences during the manual disassembling of WEEE

It has been found that the most time-demanding operation is the extraction of a component of interest (e.g., capacitor, battery, lamp), as it involves considerable handling work. It is also the operation whose duration varies the most, because some components require skilful and delicate handling, either because they contain hazardous substances or because they are fragile.

Further manual dismantling trials are planned at GAIKER in the following months, in parallel to the initial human-robot collaborative system tests taking place in real scenarios. These additional experiments will help to analyse in more depth the manual tasks that are most time-demanding and require most effort from workers, and they will be used to model a realistic collaborative WEEE recycling process with improved technical performance and better working conditions.
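As a rough illustration of how summary figures like those above can be derived and compared from raw video observations, the sketch below computes the mean, standard deviation and sample count per dismantling step and flags the step with the highest mean time. The sample values are invented for illustration and are not GAIKER's measurements.

```python
# Illustrative sketch: summarize per-step timings (values invented, not GAIKER data).
from statistics import mean, stdev

observations = {
    "grab a tool": [0.9, 1.2, 1.1, 1.3],
    "extract the target component": [4.0, 9.5, 31.0, 12.0],
    "cut connecting cables": [2.1, 5.8, 3.7],
}

def summarize(samples_by_step):
    """Return {step: (mean, std, n)} for each dismantling step."""
    return {step: (mean(s), stdev(s), len(s)) for step, s in samples_by_step.items()}

if __name__ == "__main__":
    stats = summarize(observations)
    for step, (m, s, n) in stats.items():
        print(f"{step}: {m:.1f} ± {s:.1f} s (n = {n})")
    slowest = max(stats, key=lambda k: stats[k][0])
    print("Most time-demanding step:", slowest)
```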

Summary of the Workshop “Working side by side with Robots”


Working side by side with robots

Human factors in industrial settings

A workshop organized by HR-Recycler

Friday, June 18, 2021, 10:30 – 15:30, ONLINE

The increased integration of autonomous robots in industrial settings requires not only that industrial robots display more “human-like” behaviors for efficient human-robot collaboration (HRC), but also regulations covering safety and ethical aspects in the working environment. On the one hand, we need to build robots that can act autonomously in a given and sometimes unpredictable environment. On the other hand, we want the same robots to cooperate with humans to reach a common goal in a safe environment. In this workshop, we want to address some of the issues that are relevant for HRC in industrial settings, including its “human factors” such as lack of trust and adaptability to autonomous machines in collaborative tasks, accountability (legal rights and obligations), and especially the sentiment that robots are “stealing jobs.” Whereas some hope that in this HRC endeavor (supposed to boost industrial and economic growth) humans will have a supervisory role, thus decreasing their physical and cognitive efforts, others are concerned that autonomous robots will have a negative impact on our economic growth and pose a negative challenge for our culture. In this context, this workshop aims to explore Human-Robot Collaboration (HRC) in the working environment, focusing on the analysis of the different aspects of HRC that affect its integration with human workers in the industrial sector.

Workshop Schedule and Talks

Session 1: Trust and acceptance in HRC

The increased integration of autonomous robots in industrial settings requires that industrial robots display more “human-like” behaviors for efficient human-robot collaboration (HRC).

Session 2: Human-robot mutual adaptation in industrial settings

We need to build robots that can act autonomously in a given and sometimes unpredictable environment. We want the same robots to cooperate with humans to reach a common goal in a safe environment.

Session 3: Ethics and Law applied in different fields of HRC

Regulations covering safety and ethical aspects in the working environment.

Discussion points

  • If humans have only a supervisory role, will this decrease their physical and cognitive efforts in the industry domain?
  • Will the HRC endeavour boost industrial and economic growth?
  • There is a concern that autonomous robots will have a negative impact on our economic growth and pose a negative challenge for our culture.

 

Workshop schedule

  • 10:30-10:40 – Login & Welcome – HR-Recycler organizers (Anna Mura, IBEC; Sara Sillaurren, TECNALIA; Apostolos Axenopoulos, CERTH; Alex Papadimitriou, CERTH) – Working side by side with robots: Human factors in industrial settings

Session 1: Trust and acceptance in HRC

  • 10:40 – Paul Verschure (ICREA IBEC, Barcelona, ES) – Empathy in humanoid robots
  • 11:00 – Ismael FT. Freire (IBEC, Barcelona, ES) – Towards Morally-driven Human-Robot Collaboration: the HR-Recycler case
  • 11:20 – Monica Malvezzi (University of Siena, IT) – Inclusive Robotics for a Better Society: the role of education
  • 11:40-12:00 – Coffee break

Session 2: HRC mutual adaptation in industrial settings

  • 12:00 – Valeria Villani (University of Modena, IT) – The INCLUSIVE System: A General Framework for Adaptive Industrial Automation
  • 12:20 – Fotis Dimeas (Aristotle University of Thessaloniki, GR) – New advances in human-robot collaboration for assembly applications
  • 12:40 – Néstor García (Eurecat, Barcelona, ES) – Cognitive Robotics and AI for an effective industrial HRI: the Sharework case
  • 13:00 – Daniel Camilleri (Cyberselves, UK) – Programming your Robot in more Human Terms
  • 13:20 – Karagiannis Panagiotis (Laboratory for Manufacturing Systems and Automation (LMS), University of Patras, GR) – Human Robot Collaborative cell: Key methods and technologies
  • 13:40-14:10 – Lunch break

Session 3: Ethics and Law in HRC

  • 14:10 – Matthias Pocs (Security Technology Law Research, DE) – Legal method to standardize technology design: the Rehyb case
  • 14:30 – Nikolaos Ioannidis (Vrije Universiteit Brussel, BE) – Ethics for Robots in industrial settings: the HR-Recycler case
  • 14:50-15:30 – Discussion and end of workshop

 

More than 30 participants attended the event, including representatives of the EU projects SHAREWORK, COLLABORATE, INBOTS, INCLUSIVE and REHYB.

 

Organizers

HR-RECYCLER

Anna Mura (IBEC), Sara Sillaurren (TECNALIA), Apostolos Axenopoulos, Alex Papadimitriou (CERTH)

Web: https://www.hr-recycler.eu/

Twitter: @hr-recycler

Contact

Anna Mura, Institute for Bioengineering of Catalonia (IBEC), Barcelona, ES – amura@ibecbarcelona.eu

Sara Sillaurren, TECNALIA, Research and Technological Development Centre, ES – sara.sillaurren@tecnalia.com

Bayesian view on Robot Motion Planning



Planning as Inference

High-dimensional motion planning algorithms are crucial for planning robot trajectories in complex environments. In addition to collision avoidance and constraint handling capabilities, a key performance criterion is computation time. Although a range of approaches exists for motion planning, recent work [1,2] has sparked interest in a potentially transformative approach, according to which robot motion planning can be accomplished through probabilistic inference [3,4].

The planning as inference (PAI) view originated in artificial intelligence and machine learning research and has been adopted to solve planning and sequential decision-making problems in artificial agents and robotics. The key idea is that PAI methods compute the posterior distribution over random variables subject to conditional dependencies in a joint distribution. In other words, in the PAI formulation the planning objectives are represented as probabilistic models, and probabilistic inference is used to compute the posterior distribution over trajectories, given constraints and goals. All the motion objectives, such as motion priors, goals and task constraints, are fused together to find a posterior distribution over trajectories, in a way similar to Bayesian sensor fusion. This problem formulation makes the whole toolbox of approximate inference techniques available for a range of planning problems, and it provides certain benefits, such as uncertainty quantification, structured representation and faster convergence.
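Schematically, and consistent with the formulation in [1,4] (notation varies between papers), the posterior over a trajectory can be written as the product of a motion prior and likelihood factors encoding the motion objectives:

```latex
% Schematic PAI factorization (notation varies across papers; see [1,4])
p(\tau \mid e) \;\propto\; p(\tau) \prod_{i} p(e_i \mid \tau_i),
\qquad
\tau^{*} = \arg\max_{\tau} \, p(\tau \mid e)
```

Here p(τ) is the trajectory prior (e.g. a Gaussian-process motion prior), each event variable e_i expresses that a motion objective (goal, task constraint, collision avoidance at a support state τ_i) is satisfied, and the maximum a posteriori trajectory τ* is taken as the plan.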

The PAI framework is also closely related to the perception-action generative models originating in cognitive science research. This cognitive generative model provides a unified framework for perception, learning and planning, and is known as active inference (AIF) [5]. The PAI framework also relates to stochastic optimal control and reinforcement learning.

Min-Sum Message Passing for Planning as Inference

The Gaussian process formulation of continuous-time trajectories [1] offers a fast solution to the motion planning problem via probabilistic inference on a factor graph. However, the solution often converges to infeasible local minima, and the planned trajectory is then not collision-free. The method fuses all the planning objectives, represented as factors, and solves the resulting non-linear least-squares optimization problem via numerical (Gauss-Newton or Levenberg-Marquardt) methods. Combining all the factors of the graph in a batch non-linear least-squares problem makes this approach faster than state-of-the-art motion planning algorithms, but it comes at the cost of being more prone to getting stuck in infeasible local minima. Naive graph re-optimization can help escape such local minima, at the expense of additional computation time.

TUM has proposed a message passing algorithm [6] that is more sensitive to obstacles while retaining fast convergence. We leverage the min-sum message passing algorithm, which performs local computations at each node to solve the inference problem on the factor graph. We first introduce the notion of a compound factor node to transform the factor graph into a linearly structured graph. The decentralized approach of solving each compound node increases sensitivity towards avoiding obstacles in complex planning problems.
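The following toy sketch illustrates the general min-sum (Viterbi-like) recursion on a chain-structured graph with discrete states. It is only meant to convey the principle of local message computations per node; it is not the MS2MP algorithm of [6], which operates on continuous Gaussian-process factor graphs with compound factor nodes.

```python
# Toy illustration of min-sum message passing on a chain-structured graph.
# Discrete sketch of the general principle only; MS2MP [6] works on continuous
# GP factor graphs with compound factor nodes.
import numpy as np

def min_sum_chain(unary, pairwise):
    """unary: (T, K) cost per time step and state; pairwise: (K, K) transition cost.
    Returns the minimum-cost state sequence of length T."""
    T, K = unary.shape
    msg = np.zeros(K)                      # message arriving at step t
    backptr = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        # cost of reaching state j at step t from the best state at step t-1
        total = msg[:, None] + unary[t - 1][:, None] + pairwise
        backptr[t] = np.argmin(total, axis=0)
        msg = np.min(total, axis=0)
    states = np.zeros(T, dtype=int)
    states[-1] = int(np.argmin(msg + unary[-1]))
    for t in range(T - 1, 0, -1):          # backtrack the optimal sequence
        states[t - 1] = backptr[t, states[t]]
    return states

if __name__ == "__main__":
    # 5 time steps, 3 candidate configurations; high unary cost = near an obstacle.
    unary = np.array([[0, 5, 5],
                      [5, 0, 5],
                      [5, 5, 0],
                      [5, 0, 5],
                      [0, 5, 5]], dtype=float)
    smooth = np.abs(np.subtract.outer(range(3), range(3))).astype(float)  # penalize jumps
    print(min_sum_chain(unary, smooth))    # a smooth low-cost path, e.g. [0 1 2 1 0]
```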

PAI offers an interesting view on robot motion planning and on how message passing algorithms can be adopted to solve the approximate inference problem [4,6]. However, a major drawback of this approach is its inability to handle hard constraints: the fusion of motion objectives only handles soft constraints in the present problem formulation. For the WEEE disassembly setup in HR-Recycler, this can cause safety issues, as the PAI algorithm can generate trajectories that are not collision-free. In order to handle hard constraints, TUM is working on PAI-based risk-aware algorithms that can generate safe plans for the WEEE disassembly setup.

Fig. 1: Trajectory generated by min-sum message passing algorithm.

References

[1] M. Mukadam, J. Dong, X. Yan, F. Dellaert, and B. Boots, “Continuous-time Gaussian process motion planning via probabilistic inference,” Int. J. Robotics Res., vol. 37, no. 11, pp. 1319–1340, 2018.

[2] M. Xie and F. Dellaert, “Batch and incremental kinodynamic motion planning using dynamic factor graphs,” CoRR, vol. abs/2005.12514, 2020.

[3] H. Attias, “Planning by probabilistic inference,” in Proceedings of the Ninth International Workshop on Artificial Intelligence and Statistics, AISTATS 2003, Key West, Florida, USA, January 3-6, 2003.

[4] M. Toussaint and C. Goerick, “A Bayesian view on motor control and planning,” in From Motor Learning to Interaction Learning in Robots, ser. Studies in Computational Intelligence, O. Sigaud and J. Peters, Eds. Springer, 2010, vol. 264, pp. 227–252.

[5] K. J. Friston, J. Mattout, and J. Kilner, “Action understanding and active inference,” Biol. Cybern., vol. 104, no. 1-2, pp. 137–160, 2011.

[6] S. Bari, V. Gabler, and D. Wollherr, “MS2MP: A min-sum message passing algorithm for motion planning,” ICRA, 2021 [Accepted].

SADAKO on the importance of recordings and labelling for AI-infused vision development


Data is gold: the importance of recordings and labelling for AI-infused vision development

A recent publication by Andrew Ng* (one of the most renowned machine learning and education pioneers) highlights the importance of data for progress in AI. As he explains: “Unlike traditional software, which is powered by code, AI systems are built using both code (including models and algorithms) and data:

AI systems = Code (model/algorithm) + Data”

While historical approaches typically tried to improve the Code (either the model architecture or the algorithm), we now know that “for many practical applications, it’s more effective instead to focus on improving the Data”. Generating bigger and better datasets is often the most straightforward way to boost AI results, and this so-called “data-centric AI development” is gaining ground. For those who, like us at Sadako Technologies, are devoted to building neural networks for vision applications, constructing high-quality datasets in a repeatable and systematic way, so as to ensure an excellent and consistent flow of data throughout all stages of a project, is a key activity.

Our data generation process has two main steps: image acquisition and image labelling. We have taken care of both for the development of the vision systems in the HR-Recycler project, which need to recognize WEEE objects and their components, as well as human motion and gestures. For image acquisition, we have prepared and performed the following recording campaigns (the last one is still ongoing):

-Campaign 1 (organized with CERTH in Ecoreset’s premises)

Figure 1: Images from the July 2019 classification recordings in Ecoreset (left and centre camera)

 

– Campaign 2 (organized with CERTH in Ecoreset’s premises)

Figure 2: Images from the September 2019 classification recordings in Ecoreset

 

– Campaign 3

Figure 3: Images from the December 2019 classification recordings in Indumetal

Figure 4: Sample images from the December 2019 Indumetal recordings. Time increases in the right-hand direction.

– Campaign 4

Figure 5: Sample images from the March 2020 recordings at Sadako’s premises.

 

– Campaign 5

Figure 6: Sample images from the June 2021 recordings at Indumetal’s premises.

 

Special attention was paid to the choice of hardware, as well as to replicating environmental conditions (background, lighting) as closely as possible to those found in operation. For the human motion detection datasets, particular attention has been given to possible gender or race bias in the data collection, which could harm the neural network’s operational performance.

On the labelling side, our internal labelling team, one of the most skilled and experienced image labelling teams in the waste domain, has used our own proprietary labelling tools to generate homogeneous, high-quality annotations for the different categories established for WEEE objects and for human motion and gestures.
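As a purely hypothetical illustration of what a single object annotation of this kind might look like, the sketch below defines a small record with an image reference, a category label and a bounding box. The field names and category values are assumptions for illustration; they do not reproduce Sadako's proprietary annotation format.

```python
# Hypothetical annotation record (field names and categories are illustrative only,
# not Sadako's proprietary labelling format).
import json
from dataclasses import dataclass, asdict

@dataclass
class Annotation:
    image_file: str
    category: str        # e.g. "capacitor", "pc_tower", "worker_hand"
    bbox_xywh: tuple     # pixel bounding box: (x, y, width, height)
    labeller: str        # who produced the label, for quality tracking

if __name__ == "__main__":
    ann = Annotation(
        image_file="campaign5/indumetal_000123.png",
        category="capacitor",
        bbox_xywh=(412, 230, 58, 74),
        labeller="annotator_01",
    )
    print(json.dumps(asdict(ann), indent=2))
```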

Accurate recordings and excellent labelling guarantee smooth algorithm production and are critical for the system to work properly.

* https://www.deeplearning.ai/the-batch/issue-84/