confidence-average.github.io - What If the Input is Expanded in OOD Detection?



Out-of-distribution (OOD) detection aims to identify inputs from unknown classes, which is important for the reliable deployment of machine learning models in the open world. Various scoring functions have been proposed to distinguish OOD data from in-distribution (ID) data. However, existing methods generally focus on extracting discriminative information from a single input, which implicitly limits the representation dimension.

In this work, we introduce a novel perspective: applying different common corruptions to the input space in order to expand this dimension. We reveal an interesting phenomenon, termed confidence mutation, in which the confidence of OOD data can decrease significantly under corruption, while ID data maintains a higher expected confidence because its semantic features resist corruption.
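To make this concrete, below is a minimal sketch (not the authors' code) of how confidence mutation could be probed. It assumes a pretrained PyTorch classifier `model` and an input batch `x`, uses maximum softmax probability (MSP) as the confidence score, and additive Gaussian noise as one example of a common corruption.

```python
import torch
import torch.nn.functional as F

def msp_confidence(model, x):
    # Maximum softmax probability (MSP) as the confidence score.
    with torch.no_grad():
        logits = model(x)
    return F.softmax(logits, dim=-1).max(dim=-1).values

def gaussian_noise(x, sigma=0.1):
    # One example of a common corruption: additive Gaussian noise.
    return (x + sigma * torch.randn_like(x)).clamp(0.0, 1.0)

# Hypothetical usage, given a pretrained classifier `model` and a batch `x`:
# conf_clean   = msp_confidence(model, x)                 # ID inputs tend to stay confident
# conf_corrupt = msp_confidence(model, gaussian_noise(x)) # OOD confidence tends to drop
```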

Based on this observation, we formalize a new scoring method, namely Confidence aVerage (CoVer), which captures these dynamic differences by simply averaging the scores obtained from the corrupted inputs and the original one, making the OOD and ID distributions more separable in detection tasks. Extensive experiments and analyses have been conducted to understand and verify the effectiveness of CoVer.
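A minimal sketch of the averaging idea follows, under the same assumptions as above (a pretrained classifier `model`, MSP as the score). The corruption set is illustrative, not the paper's exact configuration; conceptually, CoVer(x) = (1 / (N + 1)) * (S(x) + sum_i S(c_i(x))) over N corruptions.

```python
import torch
import torch.nn.functional as F

def msp(model, x):
    # Per-input confidence score: maximum softmax probability.
    with torch.no_grad():
        return F.softmax(model(x), dim=-1).max(dim=-1).values

def cover_score(model, x, corruptions):
    # Average the score of the original input and its corrupted variants:
    # CoVer(x) = (1 / (N + 1)) * (S(x) + sum_i S(c_i(x))).
    views = [x] + [c(x) for c in corruptions]
    scores = torch.stack([msp(model, v) for v in views], dim=0)
    return scores.mean(dim=0)  # higher => more ID-like

# Illustrative corruption set (placeholders, not the paper's exact list):
# corruptions = [lambda x: x + 0.05 * torch.randn_like(x),
#                lambda x: x + 0.10 * torch.randn_like(x)]
# score = cover_score(model, x, corruptions)
```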
