We are pleased to announce that our paper “Towards Real Unsupervised Anomaly Detection Via Confident Meta-Learning” has been accepted at the International Conference on Computer Vision (ICCV) 2025. 🎉 In addition, three of our papers have been accepted at ICCV 2025 workshops.
Main Conference Paper

This work addresses a critical challenge in real-world anomaly detection: the common assumption that training data contains only normal samples. In practice, industrial datasets are frequently contaminated with anomalies, which significantly degrades the performance of existing methods.
Our proposed framework, CoMet (Confident Meta-Learning), integrates meta-learning with confidence estimation to achieve robust anomaly detection even when training data is contaminated. By leveraging Model-Agnostic Meta-Learning (MAML) alongside uncertainty quantification, CoMet can identify and mitigate the impact of anomalies during training, bridging the gap between idealized research settings and practical industrial deployment.
Extensive experiments on multiple benchmark datasets demonstrate that CoMet achieves state-of-the-art performance under various contamination scenarios, paving the way for more reliable anomaly detection systems in real-world industrial inspection applications.
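To give an intuition for the core idea, here is a minimal, purely illustrative sketch of confidence-weighted training: samples whose error is unusually large (likely contaminants) receive low confidence and thus contribute less to the loss. The function names, the sigmoid-based weighting, and the `temperature` parameter are our own illustrative choices, not the implementation from the paper, which combines this idea with MAML-style meta-learning.

```python
import numpy as np

def confidence_weights(errors, temperature=1.0):
    """Assign a confidence weight to each training sample.

    Hypothetical scheme: standardize the per-sample errors, then pass the
    negated z-scores through a sigmoid so that samples with unusually large
    error (suspected anomalies) get weights close to 0, while typical
    samples get weights close to 1. Weights are normalized to sum to 1.
    """
    z = -(errors - errors.mean()) / (errors.std() + 1e-8) / temperature
    w = 1.0 / (1.0 + np.exp(-z))  # sigmoid: large error -> low confidence
    return w / w.sum()

def weighted_loss(errors, weights):
    """Confidence-weighted training loss (contaminants are down-weighted)."""
    return float(np.sum(weights * errors))

# Three plausibly normal samples plus one large-error contaminant.
errors = np.array([0.1, 0.2, 0.15, 5.0])
weights = confidence_weights(errors)
loss = weighted_loss(errors, weights)
```

In this toy example the contaminated sample receives a much smaller weight than the normal ones, so the weighted loss is far below the unweighted mean; this is the mechanism that lets training remain stable when the "normal" set is not actually clean.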
Workshop Papers
Robust Anomaly Detection in Industrial Environments via Meta-Learning
Building upon the meta-learning framework, this work specifically addresses robustness challenges encountered in industrial anomaly detection scenarios.
- 📄 Paper
A Contrastive Learning-Guided Confident Meta-Learning for Zero-Shot Anomaly Detection
We introduce contrastive learning principles into our confident meta-learning framework, enabling zero-shot anomaly detection capabilities for previously unseen defect categories.
- 📄 Paper
Diffusion-Based Data Augmentation for Medical Image Segmentation
In collaboration with our research partners, we explore the application of diffusion models for data augmentation in medical imaging, demonstrating the versatility of generative approaches across different computer vision domains.
- 📄 Paper

