Highlights
- Researchers at Chalmers University of Technology, Sweden, report successful results from their latest tests of a robot that executes complex tasks the way humans do.
- The team took a unique approach: laser sensors tracked human movements as people carried out a specific task, and the robots were then programmed so they could replicate it.
Can robots adjust the way they work in order to solve intricate tasks?
A research team at Chalmers University of Technology, Sweden, has developed a new form of Artificial Intelligence (AI) that, by observing human behavior, can adapt how it executes its tasks in a changing environment. The hope is that robots that can work this flexibly will be able to work alongside humans to a far greater degree.
“Robots that work in human environments need to be adaptable to the fact that humans are unique and that we might all solve the same task in a different way. An important area in robot development, therefore, is to teach robots how to work alongside humans in dynamic environments,” said Maximilian Diehl, a doctoral student at the Department of Electrical Engineering at Chalmers University of Technology and the main researcher behind the project.
When we humans execute a simple task, like setting the table, we might tackle it in different ways depending on the conditions. If a chair is in the way, we may move it or walk around it. We may use either hand, left or right; we may pause, and we may perform any number of unplanned actions.
But robots do not work in the same way. They require exact programming and instructions to accomplish a goal, which makes them very efficient in environments where the same routine is repeated consistently, such as factory production lines. If robots are to interact successfully with people in sectors like healthcare or customer-facing roles, however, they need far more flexible ways of working.
The research team at Chalmers wanted to explore whether a robot could be trained to execute tasks in a way similar to how humans approach them: by designing an “explainable AI” that extracts general rather than task-specific information during a demonstration, so that it can then plan a flexible and adaptable path towards a long-term goal. Explainable AI (XAI) is a type of AI whose decisions and results humans can understand and trace.
Training a robot to stack objects under evolving conditions
The research team asked several people to perform the same task, stacking small cubes into a pile, 12 times in a VR environment. Each repetition was carried out in a slightly different way, and a set of laser sensors tracked the participants’ movements throughout.
Karinne Ramirez-Amaro added, “When we humans have a task, we divide it into a chain of smaller sub-goals along the way, and every action we perform aims to fulfill an intermediate goal. Instead of teaching the robot an exact imitation of human behavior, we focused on identifying what the goals were, looking at all the actions that the people in the study performed.”
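To make the sub-goal idea concrete, here is a minimal Python sketch of how intermediate goals might be pulled out of a recorded demonstration by watching for changes in object relations rather than copying motions. The frame format, object names, and the `extract_subgoals` function are hypothetical illustrations, not the researchers’ actual pipeline.

```python
# Hypothetical sketch: identify sub-goals from one recorded demonstration by
# detecting new object relations (e.g. a cube coming to rest on another cube),
# instead of storing the exact motions that produced them.
from dataclasses import dataclass

@dataclass
class Frame:
    time: float
    on_top_of: dict[str, str]  # e.g. {"cube_b": "cube_a"}: cube B rests on cube A

def extract_subgoals(frames: list[Frame]) -> list[tuple[str, str]]:
    """Return the ordered (object, support) relations that first appear during
    the demonstration -- the intermediate goals, not the trajectories."""
    subgoals, seen = [], set()
    for frame in frames:
        for obj, support in frame.on_top_of.items():
            if (obj, support) not in seen:
                seen.add((obj, support))
                subgoals.append((obj, support))
    return subgoals

# Toy demonstration: the person stacks B on A, then C on B.
demo = [
    Frame(0.0, {}),
    Frame(2.5, {"cube_b": "cube_a"}),
    Frame(5.1, {"cube_b": "cube_a", "cube_c": "cube_b"}),
]
print(extract_subgoals(demo))  # [('cube_b', 'cube_a'), ('cube_c', 'cube_b')]
```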
This different approach meant that the AI concentrated on extracting the intent behind the sub-goals and building libraries of actions for each one. The AI was then turned into a planning tool for a TIAGo robot, a mobile service robot designed to work in indoor environments. With the tool’s help, the robot could automatically generate a plan for a given task of stacking cubes on top of one another, even when the surrounding conditions changed.
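As an illustration of what such a planning step could look like, the minimal sketch below assembles an action sequence for a cube tower from whatever layout is currently observed, clearing obstructing cubes when needed. It is a toy example of the general idea only; the action names, state encoding, and the `plan_tower` function are invented here and are not the tool deployed on the TIAGo robot.

```python
# Hypothetical sketch: assemble a plan from a small set of pick-and-place
# actions so that the same goal is reached from different starting layouts.
def plan_tower(layout: dict[str, str], order: list[str]) -> list[str]:
    """layout maps each cube to what it currently rests on ('table' if nothing).
    order is the desired tower from bottom to top. Returns an action sequence
    adapted to the layout that is actually observed."""
    layout = dict(layout)          # work on a copy of the observed state
    actions: list[str] = []

    def cubes_on(base: str) -> list[str]:
        return [cube for cube, support in layout.items() if support == base]

    for lower, upper in zip(order, order[1:]):
        # Clear anything obstructing the destination cube or the cube to move.
        for blocker in cubes_on(lower) + cubes_on(upper):
            if blocker != upper:
                actions.append(f"move {blocker} to table")
                layout[blocker] = "table"
        actions.append(f"move {upper} onto {lower}")
        layout[upper] = lower
    return actions

# Same goal, two different starting conditions -> two different plans.
print(plan_tower({"A": "table", "B": "table", "C": "table"}, ["A", "B", "C"]))
print(plan_tower({"A": "table", "B": "table", "C": "B"}, ["A", "B", "C"]))
```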
The robot was given the task of stacking the cubes and then, based on the circumstances, which were changed slightly for every attempt, selected from a variety of possible actions to form a sequence that would accomplish the task. The results were highly successful.
The research team presented its results at IROS 2021, one of the most prestigious robotics conferences in the world. In the next phase of the project, the researchers will investigate how robots can interact with humans and explain what went wrong, and why, if they fail a task.
Industry and healthcare
The aim is to use robots in industry to assist technicians with tasks that could cause long-term health problems, for instance tightening bolts or nuts on truck wheels, and in the healthcare sector with tasks such as fetching and delivering medicines or food.
“We want to make the job of healthcare professionals easier so that they can focus on tasks which need more attention,” Karinne Ramirez-Amaro said.
“It might still take several years until we see genuinely autonomous and multi-purpose robots, mainly because many individual challenges still need to be addressed, like computer vision, control, and safe interaction with humans. However, we believe that our approach will contribute to speeding up the learning process of robots, allowing the robot to connect all of these aspects and apply them in new situations,” Maximilian Diehl said.
Experts’ view
“With our AI, the robot made plans with a 92% success rate after just a single human demonstration. When the information from all twelve demonstrations was used, the success rate reached up to 100%,” Maximilian Diehl said.
“In the future, we foresee robots accomplishing some basic household activities, such as setting and clearing the table, placing kitchen utensils in the sink, or helping to organize groceries,” Karinne Ramirez-Amaro, Assistant Professor at the Department of Electrical Engineering, said.