[2503.08558] Can We Detect Failures Without Failure Data? Uncertainty-Aware Runtime Failure Detection for Imitation Learning Policies

PDF view of the paper titled Can We Detect Failures Without Failure Data? Uncertainty-Aware Runtime Failure Detection for Imitation Learning Policies, by Chen Xu and 9 other authors
Abstract: Recent years have witnessed impressive robotic manipulation systems driven by advances in imitation learning and generative modeling, such as diffusion- and flow-based approaches. As robot policy performance increases, so do the complexity and time horizon of achievable tasks, inducing unexpected and diverse failure modes that are difficult to predict in advance. To enable trustworthy policy deployment in safety-critical human environments, reliable runtime failure detection becomes important during policy inference. However, most existing failure detection methods rely on prior knowledge of failure modes and require failure data during training, which poses a significant challenge for practicality and scalability. In response to these limitations, we present FAIL-Detect, a modular two-stage approach for failure detection in imitation learning-based manipulation. To accurately identify failures from successful training data alone, we frame the problem as sequential out-of-distribution (OOD) detection. We first distill policy inputs and outputs into scalar signals that correlate with policy failure and capture epistemic uncertainty. We then employ conformal prediction (CP) as a versatile framework for uncertainty quantification with statistical guarantees. Empirically, we thoroughly investigate both learned and post-hoc scalar signal candidates on diverse robotic manipulation tasks. Our experiments show that learned signals are mostly consistently effective, particularly when using our novel flow-based density estimator. Furthermore, our method detects failures more accurately and faster than state-of-the-art (SOTA) failure detection baselines. These results highlight the potential of FAIL-Detect to enhance the safety and reliability of learning-based robotic systems as they advance toward real-world deployment.
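The abstract describes a two-stage recipe: compute a scalar uncertainty score per timestep from successful data only, then calibrate a detection threshold with conformal prediction. The sketch below is not the authors' implementation; it shows a generic split-conformal threshold on illustrative scores (the paper's method additionally learns the signals and may use time-dependent calibration). Names such as conformal_threshold, detect_failure, calib, and alpha are assumptions for illustration.

import numpy as np

def conformal_threshold(calibration_scores, alpha=0.05):
    # Split conformal prediction: with exchangeable data, a new in-distribution
    # score exceeds this threshold with probability at most alpha.
    n = len(calibration_scores)
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)  # finite-sample correction
    return np.quantile(calibration_scores, level, method="higher")

def detect_failure(runtime_scores, threshold):
    # Flag the first timestep whose uncertainty score crosses the threshold.
    for t, s in enumerate(runtime_scores):
        if s > threshold:
            return t
    return None  # no failure detected

# Toy usage: scores could be, e.g., negative log-density of policy inputs/outputs
# under a density model fit to successful demonstrations (values here are synthetic).
rng = np.random.default_rng(0)
calib = rng.normal(0.0, 1.0, size=200)                      # successful rollouts
rollout = np.concatenate([rng.normal(0.0, 1.0, 30),         # nominal behavior
                          rng.normal(4.0, 1.0, 10)])        # drifting out of distribution
tau = conformal_threshold(calib, alpha=0.05)
print("threshold:", tau, "| failure flagged at t =", detect_failure(rollout, tau))

The key design point the abstract emphasizes is that calibration uses only successful rollouts, so no failure examples are needed to set the detection threshold.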
Submission history
From: Chen Xu [view email]
[v1]
Tue, 11 Mar 2025 15:47:12 UTC (16,532 KB)
[v2]
Fri, 25 Apr 2025 07:12:28 UTC (23,704 KB)