Autonomous systems are subject to multiple regulatory requirements owing to their safety-critical nature. In general, it may not be feasible to guarantee that all of these requirements are satisfied under all conditions, and in such situations the system must decide how to prioritize among them. Two main factors complicate this decision. First, the priorities among the conflicting requirements may not be fully established. Second, the decision must be made under uncertainties arising both from the learning-based components within the system and from the unstructured, unpredictable, and non-cooperative nature of its operating environment. Establishing the correctness of autonomous systems therefore requires a specification language that captures the unequal importance of the requirements, quantifies the degree to which each requirement is violated, and accounts for the uncertainties the system faces. In this talk, I will discuss our early effort to partially address this problem and the challenges that remain.
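The abstract does not fix a concrete formalism, but as a rough illustration of the three ingredients named above (priority weights, quantified violation, and uncertainty), here is a minimal sketch. The requirements, weights, violation measures, and noise model are all invented for illustration and are not taken from the talk; the sketch simply estimates an expected priority-weighted violation by sampling from a hypothetical perception model.

```python
import random

# Hypothetical example: two requirements for a vehicle, each with a
# quantitative violation measure (0 = satisfied; larger = worse) and a
# priority weight encoding its relative importance. All numbers here
# are illustrative assumptions, not values from the talk.

def violation_clearance(state, min_clearance=2.0):
    """How far the vehicle falls short of the required obstacle clearance (m)."""
    return max(0.0, min_clearance - state["clearance"])

def violation_speed(state, speed_limit=15.0):
    """How far the vehicle exceeds the speed limit (m/s)."""
    return max(0.0, state["speed"] - speed_limit)

# Priority weights: the safety requirement (clearance) outranks the
# traffic rule (speed), capturing the unequal importance of requirements.
REQUIREMENTS = [
    (10.0, violation_clearance),
    (1.0, violation_speed),
]

def expected_weighted_violation(sample_state, n_samples=10_000):
    """Estimate the expected priority-weighted violation under uncertainty
    by Monte Carlo sampling from a (hypothetical) state-estimation model."""
    total = 0.0
    for _ in range(n_samples):
        state = sample_state()
        total += sum(w * v(state) for w, v in REQUIREMENTS)
    return total / n_samples

if __name__ == "__main__":
    # Toy uncertainty model: noisy estimates of clearance and speed,
    # standing in for learning-based perception components.
    def sample_state():
        return {
            "clearance": random.gauss(2.2, 0.5),  # perception noise (m)
            "speed": random.gauss(14.0, 1.0),     # estimation noise (m/s)
        }

    print(f"expected weighted violation: {expected_weighted_violation(sample_state):.3f}")
```

A decision procedure could then, for instance, compare candidate actions by this expected score, so that the higher-priority requirement dominates when the two conflict; this is one possible design choice among many, not the approach of the talk itself.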