PE-OWOD: Parameter-Efficient Open-World Detection with Semantic Priors and Virtual Outlier Synthesis


Abstract

Open-world object detection requires recognizing known categories while discovering objects never seen during training. Full-model fine-tuning often fails in such dynamic environments: fully tuned models overfit the known classes and lose sensitivity to unknown objects, at high computational cost. To address this, we propose PE-OWOD, a simple and lightweight retraining approach. Instead of updating the whole network, we freeze the backbone and encoder to preserve stable visual priors and inject compact residual adapters only into the decoder for task adaptation. We also introduce Virtual Outlier Synthesis (VOS), which defines an explicit decision boundary for open space, with optional semantic initialization. Benchmarks on MS-COCO show substantial efficiency advantages: updating fewer than 27% of model parameters, PE-OWOD achieves 64.7% Unknown Recall (significantly outperforming fully fine-tuned baselines) while reducing GPU memory usage by 86%. These results indicate that parameter-efficient adaptation is not a constraint but a reliable and robust strategy for open-world detection.
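The residual adapters mentioned above can be illustrated with a minimal sketch. The paper's actual adapter architecture is not given here, so this assumes the common bottleneck design (down-projection, nonlinearity, up-projection, skip connection); the dimensions `d` and `r` are illustrative, not values from the paper:

```python
import numpy as np

def residual_adapter(x, W_down, W_up):
    """Bottleneck residual adapter: project down, ReLU, project up, add skip.
    Only W_down and W_up would be trained; the host decoder layer stays frozen."""
    h = np.maximum(x @ W_down, 0.0)   # down-project + ReLU
    return x + h @ W_up               # residual connection preserves frozen features

rng = np.random.default_rng(0)
d, r = 256, 16                        # hidden dim and bottleneck rank (assumed values)
W_down = rng.normal(scale=0.02, size=(d, r))
W_up = np.zeros((r, d))               # zero-init up-projection -> adapter starts as identity

x = rng.normal(size=(4, d))
y = residual_adapter(x, W_down, W_up)
print(np.allclose(x, y))              # True: frozen behavior is unchanged at init

extra = 2 * d * r                     # adapter parameters per layer
print(extra)                          # 8192, vs. d*d = 65536 for one full linear layer
```

Zero-initializing the up-projection makes the adapter an identity mapping at the start of retraining, so the frozen visual priors are only perturbed as far as the new task demands; this is one standard way parameter-efficient methods keep adaptation stable.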
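The virtual-outlier idea can likewise be sketched. This follows the generic VOS recipe (fit a class-conditional Gaussian to known-class features, oversample from it, and keep the least-likely samples as virtual outliers near the decision boundary); all dimensions, counts, and the synthetic feature distribution below are assumptions for illustration, not details from this paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for decoder feature embeddings of one known class.
feats = rng.normal(loc=2.0, scale=1.0, size=(500, 8))

# Fit a class-conditional Gaussian N(mu, Sigma) to the features.
mu = feats.mean(axis=0)
sigma = np.cov(feats, rowvar=False) + 1e-6 * np.eye(8)  # regularized covariance
sigma_inv = np.linalg.inv(sigma)

def log_density(z):
    """Gaussian log-density up to an additive constant (negative half Mahalanobis)."""
    d = z - mu
    return -0.5 * np.einsum("ni,ij,nj->n", d, sigma_inv, d)

# Oversample candidates from the fitted Gaussian, then keep the k least-likely
# ones as virtual outliers that trace out the boundary of the known class.
cand = rng.multivariate_normal(mu, sigma, size=2000)
k = 50                                # number of outliers to keep (assumed)
outliers = cand[np.argsort(log_density(cand))[:k]]

# Virtual outliers sit farther from the class mean than typical known features.
print(np.linalg.norm(outliers - mu, axis=1).mean() >
      np.linalg.norm(feats - mu, axis=1).mean())  # True
```

Training a detector to assign low objectness or an explicit "unknown" score to such synthesized outliers gives open space a concrete decision boundary, rather than leaving unknown rejection implicit in the known-class logits.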
