Self-driving cars are likely to be the future of transportation, but safety concerns remain hurdles that researchers have to overcome before fully autonomous vehicles can become a reality. To accelerate that timeline, U.S. researchers have developed the first set of "certifiable perception" algorithms, which could help protect the next generation of self-driving vehicles, as well as the vehicles they share the road with. When robots sense their surroundings, they must use algorithms to make estimations about the environment and their location.
These perception algorithms are designed to be fast, with little guarantee of whether the robot has succeeded in gaining a correct understanding of its surroundings. This is one of the biggest existing problems. Our lab is working to design certified algorithms that can tell you if these estimations are correct.
– Lead researcher
Robot perception begins with the robot capturing an image, such as a self-driving car taking a snapshot of an approaching car. The image goes through a machine-learning system called a neural network, which identifies key points in the image, such as the approaching car's mirrors, wheels, and doors.
From there, lines are drawn that match the detected key points on the 2D car image to the labelled key points of a 3D car model. The researchers must then solve an optimisation problem to rotate and translate the 3D model until it aligns with the key points on the image. The aligned 3D model helps the robot understand its real-world environment.
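The rotate-and-translate step described above can be sketched as a least-squares rigid alignment. This is a minimal illustration, not the researchers' actual algorithm: it assumes the correspondences are already known and that both point sets are in 3D, whereas the real problem matches 2D image points to a 3D model and must also handle wrong correspondences. The function name is mine; the solver is the standard SVD-based (Kabsch) construction.

```python
import numpy as np

def align_model_to_keypoints(model_pts, scene_pts):
    """Find rotation R and translation t minimising
    sum_i || R @ p_i + t - q_i ||^2 for corresponding point pairs.

    model_pts, scene_pts: (N, 3) arrays with row-wise correspondences.
    """
    # Centre both point clouds so rotation can be solved separately.
    mu_p = model_pts.mean(axis=0)
    mu_q = scene_pts.mean(axis=0)
    P = model_pts - mu_p
    Q = scene_pts - mu_q

    # Optimal rotation via SVD of the 3x3 cross-covariance matrix.
    H = P.T @ Q
    U, _, Vt = np.linalg.svd(H)
    # Correct a possible reflection so det(R) = +1.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T

    # Translation follows from aligning the centroids.
    t = mu_q - R @ mu_p
    return R, t
```

With noiseless, correctly matched points this recovers the exact pose; the hard part in practice, addressed by the certifiable approach, is that many of the matches are wrong.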
Each traced line must be analysed to see whether it has created a correct match, since many key points can be matched incorrectly. The team's algorithm relaxes the non-convex problem into a convex one and searches for successful matches. If a match is not correct, the algorithm knows to keep trying until it finds the best possible solution, known as the global minimum. A certificate is given when no better solution exists.
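The certificate logic rests on a simple fact: a convex relaxation can only lower-bound the original non-convex problem. If the cost of the solution recovered from the relaxation equals that lower bound, no better solution can exist. The sketch below shows only this duality-gap check, with names and tolerance of my choosing; the actual machinery for producing the relaxed optimum is far more involved.

```python
def certify_solution(relaxed_cost, solution_cost, tol=1e-6):
    """Certify global optimality via the duality gap.

    relaxed_cost: optimum of the convex relaxation (a lower bound on
                  the true non-convex problem's optimum).
    solution_cost: cost of the candidate solution recovered from the
                   relaxation, evaluated in the original problem.

    If the gap is (numerically) zero, the candidate attains the lower
    bound, so it must be the global minimum.
    """
    gap = solution_cost - relaxed_cost
    return gap <= tol
```

When the check fails, the system knows its estimate may be wrong, which is exactly the signal used to alert a human driver.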
These certifiable algorithms have a huge potential impact because tools like self-driving cars must be robust and trustworthy. The goal is to ensure that a driver receives an alert to take over the steering wheel if the perception system has failed.
The 3D model is morphed to match the 2D image by taking a linear combination of previously identified vehicle shapes. For example, the model could shift from being an Audi to a Hyundai as it registers the correct build of the actual car. Identifying the approaching car's dimensions is key to preventing collisions.
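The linear-combination idea can be sketched directly: hold a small library of known vehicle key-point sets and blend them with weights, so the morphed model interpolates between, say, an Audi-like and a Hyundai-like shape. This is an illustrative simplification with names of my choosing; in the real system the weights are themselves unknowns in the optimisation.

```python
import numpy as np

def morph_shape(basis_shapes, weights):
    """Blend a library of vehicle shapes into one morphed 3D model.

    basis_shapes: (K, N, 3) array — K library vehicles, each described
                  by the same N labelled 3D key points.
    weights:      length-K coefficients of the linear combination.

    Returns an (N, 3) array: sum_k weights[k] * basis_shapes[k].
    """
    weights = np.asarray(weights, dtype=float)
    return np.tensordot(weights, basis_shapes, axes=1)
```

With weights like `[0.7, 0.3]` the result sits closer to the first library car; as the optimiser adjusts the weights, the model "shifts" between vehicle builds until it matches the observed dimensions.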
The lead researcher stated that, to achieve trustworthy autonomy, it is time to embrace a diverse set of tools to design the next generation of safe perception algorithms. There must always be a failsafe, since no human-made system can be perfect. Making self-driving cars safe will take the power of both rigorous theory and computation before they can be successfully unveiled to the public.
U.S. researchers have been developing robotic technologies for various purposes, including to help people with disabilities. As reported by OpenGov Asia, U.S. researchers have now developed an alternative approach that they believe could offer much more precise control of prosthetic limbs. After inserting small magnetic beads into muscle tissue within the amputated residuum, they can precisely measure the length of a muscle as it contracts, and this feedback can be relayed to a bionic prosthesis within milliseconds.
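The measurement principle described above can be sketched very simply: with a pair of beads implanted along the same muscle, the muscle's length is tracked as the distance between the two magnetically localised bead positions. This is a conceptual sketch only; the function name is mine, and the hard part of the real system, localising the beads from external magnetic-field sensors, is not shown.

```python
import numpy as np

def muscle_length(bead_a, bead_b):
    """Length of the muscle segment between two implanted beads,
    given their magnetically localised 3D positions (in mm)."""
    a = np.asarray(bead_a, dtype=float)
    b = np.asarray(bead_b, dtype=float)
    return float(np.linalg.norm(a - b))
```

Streaming this distance over time gives the contraction signal that the prosthesis can respond to within milliseconds.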
In a new study appearing today in Science Robotics, the researchers tested their new strategy, called magnetomicrometry (MM), and showed that it can provide fast and accurate muscle measurements in animals. They hope to test the approach in people with amputation within the next few years.