Quantifying Faulty Assumptions in Heterogeneous Multi-Agent Systems


We study the problem of analyzing the effects of inconsistencies in perception, intent prediction, and decision making among interacting agents. When accounting for these effects, planning is akin to synthesizing policies in uncertain and potentially partially observable environments. We consider the case where each agent, in an effort to avoid a difficult planning problem, does not account for inconsistencies with other agents when computing its policy. In particular, each agent assumes that the other agents compute their policies in the same way it does, i.e., with the same objective and based on the same system model. While synthesizing policies on the composed system model, which accounts for the agent interactions, scales exponentially, we use probabilistic model checking to efficiently compute quantifiable performance metrics for the agents' policies. We showcase our approach on two realistic autonomous driving case studies and implement it in an autonomous vehicle simulator.

2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)