Manufacturers demand faster, more powerful collaborative robots every year, but engineers must keep in mind that these robots need to be safe for the employees working around them. Some may think simply adding fencing makes a collaborative robot safer, but there is an alternative that often works better.
AI vs. fencing
People often find workarounds for physical barriers they wish to cross. If a work cell is fenced in and designed so the robot must be shut down before a person can enter, a human worker may try to bypass the safety mechanism rather than interrupt the collaborative robot's work.
Many fenced-in collaborative robot systems still impose force and speed limitations as additional safety precautions. Artificial intelligence could remove the need for these limitations: speed and separation monitoring combined with advanced vision technology could instead expand a collaborative robot's capabilities.
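The core idea of speed and separation monitoring is that the robot may run at full capability only while the measured human-robot separation exceeds a protective distance that accounts for how far both parties can travel before the robot stops. The sketch below illustrates this with a simplified calculation and hypothetical parameter values; the normative formula and required values are defined in ISO/TS 15066, so treat this as an illustration, not an implementation.

```python
# Simplified sketch of speed-and-separation monitoring.
# All parameter values here are hypothetical illustrations; the real
# protective-distance calculation is specified in ISO/TS 15066.

def protective_distance(human_speed, robot_speed, reaction_time, stop_time,
                        stop_distance, intrusion_margin):
    """Minimum human-robot separation that still allows a safe stop (meters)."""
    # Distance the human can cover while the system reacts and the robot stops
    human_travel = human_speed * (reaction_time + stop_time)
    # Distance the robot covers during its own reaction time
    robot_travel = robot_speed * reaction_time
    return human_travel + robot_travel + stop_distance + intrusion_margin

def must_slow_down(measured_separation, **params):
    """Trigger a protective slowdown/stop when separation is too small."""
    return measured_separation < protective_distance(**params)
```

For example, with a walking speed of 1.6 m/s, a robot speed of 1.0 m/s, a 0.1 s system reaction time, a 0.5 s stopping time, a 0.2 m stopping distance, and a 0.1 m margin, the protective distance works out to 1.36 m; a measured separation of 1.0 m would trigger a slowdown, while 2.0 m would not.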
AI systems could make any robot collaborative. Humans could work even closer to collaborative robots than they do now without a threat to their safety. And since collaborative robots could work faster with more force, production cycles could be improved: heavier loads could be moved and manipulated more quickly. Most collaborative robot maximum payloads are limited to about 10 kg because of safety concerns, but that could increase with AI.
Vision systems for collaborative robots
To give AI the information it needs about the work environment, the collaborative robot needs a way to “see.” Machine vision and motion-sensing technology will need to be integrated into automated systems. Multiple vision cameras are needed to overlap and monitor the work cell.
One solution is to perform this scanning with cameras and computer vision software. Infrared flashes, fired 30 times per second, map every object near the collaborative robot. The system combines the cameras' data to look for occlusions (obstructions). When an occlusion is detected, the system assumes a human has entered the work cell, and custom procedures can be followed so the human is not harmed.
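The fusion step described above can be sketched as a set operation over the work cell: each camera reports the regions it can currently confirm as clear, and any monitored region that no camera can confirm is treated as occluded, hence as a potential person. The data model below (grid cells as coordinate tuples, per-camera "visible" sets) is a hypothetical simplification of what a real depth-camera system would produce.

```python
# Minimal sketch of multi-camera occlusion detection. The grid-cell data
# model and function names are hypothetical, not a vendor API: each camera
# reports the set of work-cell cells it can currently confirm as empty.

def occluded_cells(monitored_cells, per_camera_visible):
    """Cells of the work cell that no camera can verify as clear."""
    confirmed = set().union(*per_camera_visible)
    return monitored_cells - confirmed

def safety_action(monitored_cells, per_camera_visible):
    """Assume any occlusion is a person; return the protective response."""
    if occluded_cells(monitored_cells, per_camera_visible):
        return "protective_stop"
    return "continue"
```

With a 2x2 work cell where one camera confirms the left column and another confirms only part of the right column, the unconfirmed cell is flagged as occluded and the protective stop fires; once every cell is confirmed by some camera, the robot continues. Overlapping camera coverage matters precisely because it shrinks the set of unconfirmable cells.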
Artificial intelligence, not machine learning
Using AI rather than machine learning with these vision systems is an important distinction. Machine learning is probabilistic: its outputs are based on probability and subject to chance variation. Deterministic AI classification makes occlusion analysis more efficient and ensures human safety at all times.
Current safety standards make clear that no statistical approach to triggering safety measures should be allowed. A statistical approach would have a robot assess, "There is a 78% chance this human will be injured if no action is taken." At what point does the system act? 50%? 75%? Humans should always be able to count on 100% safe working conditions, and AI is the route to that assurance.
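The argument above can be made concrete by contrasting the two trigger styles in code. The sketch below is illustrative only (the function names and threshold are invented): a statistical trigger forces someone to pick an arbitrary probability cutoff, while a deterministic trigger acts on any detected occlusion with nothing to tune.

```python
# Illustrative contrast between trigger styles (hypothetical functions,
# not a real safety API).

def statistical_trigger(injury_probability, threshold):
    """Acts only above a chosen cutoff -- the problem the article raises:
    any threshold below 1.0 tolerates some chance of injury."""
    return injury_probability > threshold

def deterministic_trigger(occlusion_detected):
    """Acts on any occlusion; there is no probability threshold to tune."""
    return occlusion_detected
```

At a 0.75 cutoff the statistical trigger fires on the article's 78% example, but raising the cutoff to 0.9 would let that same situation pass, which is exactly the ambiguity the standards rule out.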
This article featured in Control Engineering and originally appeared on the Robotics Online Blog. Robotic Industries Association (RIA) is a part of the Association for Advancing Automation (A3), a CFE Media content partner.