🚀 The IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2025 🚀
We introduce the Extended Cambridge Landmarks (ECL) dataset, which builds upon the original Cambridge Landmarks dataset. ECL augments the existing test scenes with diverse appearance conditions: for each scene we provide three distinct flavors, Evening, Winter, and Summer. As part of our commitment to the research community, we also share the data generation process and the code behind this resource.
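Below is a minimal, hypothetical sketch of how one might iterate over the per-scene appearance flavors. The dataset root, folder layout, and file extension are assumptions for illustration only; please refer to the released dataset for the actual structure.

```python
# Hypothetical example of walking an ECL-style tree of scenes and flavors.
# The root path, per-flavor subfolders, and ".png" extension are assumed.
from pathlib import Path

ECL_ROOT = Path("ECL")                      # hypothetical dataset root
FLAVORS = ["Evening", "Winter", "Summer"]   # the three appearance conditions


def list_flavor_images(root: Path):
    """Yield (scene, flavor, image_path) triples for every image in the tree."""
    for scene_dir in sorted(p for p in root.iterdir() if p.is_dir()):
        for flavor in FLAVORS:
            flavor_dir = scene_dir / flavor  # hypothetical per-flavor subfolder
            if not flavor_dir.is_dir():
                continue
            for img_path in sorted(flavor_dir.glob("*.png")):
                yield scene_dir.name, flavor, img_path


if __name__ == "__main__":
    for scene, flavor, path in list_flavor_images(ECL_ROOT):
        print(scene, flavor, path.name)
```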
This dataset was introduced in our work HyperPose: Hypernetwork-Infused Camera Pose Localization and an Extended Cambridge Landmarks Dataset, which targets improving pose estimation accuracy across varying domains.
In this work we propose "HyperPose", an approach for using hypernetworks in absolute camera pose regressors. The inherent appearance variations in natural scenes, due to environmental conditions, perspective, and lighting, induce a notable domain disparity between the training and test datasets, degrading the precision of contemporary localization networks. To mitigate this, we advocate incorporating hypernetworks into both single-scene and multi-scene camera pose regression models. During inference, the hypernetwork dynamically computes adaptive weights for the localization regression heads based on the input image, effectively narrowing the domain gap. We evaluate the HyperPose methodology across multiple established absolute pose regression architectures using indoor and outdoor datasets. In particular, we introduce and share the Extended Cambridge Landmarks (ECL), a novel localization dataset based on the Cambridge Landmarks dataset that depicts its scenes in multiple seasons with significantly varying appearance conditions. Our empirical experiments demonstrate that HyperPose yields notable performance enhancements for both single- and multi-scene architectures. We have made our source code, pre-trained models, and the ECL dataset openly available.
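To make the core idea concrete, here is a minimal sketch (not the authors' implementation) of a regression head whose weights are produced per-image by a hypernetwork, as described in the abstract. The module name, feature dimension, hidden size, and 7-D pose output (3-D position plus 4-D orientation quaternion) are illustrative assumptions.

```python
# Illustrative sketch of a hypernetwork-driven pose regression head.
# All names and sizes are hypothetical; see the released code for the real model.
import torch
import torch.nn as nn


class HyperPoseHeadSketch(nn.Module):
    """Linear pose regressor whose weights are predicted per-image by a hypernetwork."""

    def __init__(self, feat_dim: int = 512, hidden_dim: int = 128, out_dim: int = 7):
        super().__init__()
        self.feat_dim = feat_dim
        self.out_dim = out_dim
        # Hypernetwork: maps a global image descriptor to the parameters
        # (weights + bias) of a linear pose regressor.
        n_params = feat_dim * out_dim + out_dim
        self.hypernet = nn.Sequential(
            nn.Linear(feat_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, n_params),
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (B, feat_dim) global descriptor from any backbone.
        B = feats.shape[0]
        params = self.hypernet(feats)                      # (B, n_params)
        w = params[:, : self.feat_dim * self.out_dim]      # adaptive weights
        b = params[:, self.feat_dim * self.out_dim:]       # adaptive bias
        w = w.view(B, self.out_dim, self.feat_dim)
        # Apply the per-image regressor: one (out_dim x feat_dim) matrix per input,
        # yielding a 7-D pose (position + orientation quaternion) per image.
        pose = torch.bmm(w, feats.unsqueeze(-1)).squeeze(-1) + b
        return pose


if __name__ == "__main__":
    feats = torch.randn(4, 512)   # stand-in backbone features
    head = HyperPoseHeadSketch()
    print(head(feats).shape)      # torch.Size([4, 7])
```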