Image-Based Relocalization and Alignment for Long-Term Monitoring of Dynamic Underwater Environments
Beverley Gorry · Tobias Fischer · Michael Milford · Alejandro Fontan
UnderLoc is an integrated pipeline that combines Visual Place Recognition (VPR), feature matching, and image segmentation on video-derived images. This method enables robust identification of revisited areas, estimation of rigid transformations, and downstream analysis of ecosystem changes. Furthermore, we introduce the SQUIDLE+ VPR Benchmark, the first large-scale underwater VPR benchmark designed to leverage an extensive collection of unstructured data from multiple robotic platforms, spanning time intervals from days to years. The dataset encompasses diverse trajectories, arbitrary amounts of overlap, and multiple seafloor types captured under varying environmental conditions, including differences in depth, lighting, and turbidity.
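As a rough illustration of the relocalization-and-alignment idea (not the actual UnderLoc implementation, which uses learned VPR models, LightGlue matching, and SAM2 segmentation), a minimal OpenCV-only sketch of the retrieve-then-align stages might look like:

```python
# Minimal sketch of image retrieval + rigid alignment, assuming only OpenCV and NumPy.
# The two functions are illustrative stand-ins for UnderLoc's learned VPR and matching.
import cv2
import numpy as np

def retrieve_best_match(query, references):
    """Toy place recognition: rank reference images by the distance between
    tiny grayscale thumbnails (real VPR uses learned global descriptors)."""
    def thumb(img):
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        return cv2.resize(gray, (32, 32)).astype(np.float32).ravel()
    q = thumb(query)
    return int(np.argmin([np.linalg.norm(q - thumb(r)) for r in references]))

def estimate_rigid_transform(img_a, img_b):
    """Match ORB features and fit a rotation+translation+uniform-scale model
    with RANSAC (a stand-in for learned local feature matching)."""
    orb = cv2.ORB_create(2000)
    kp_a, des_a = orb.detectAndCompute(cv2.cvtColor(img_a, cv2.COLOR_BGR2GRAY), None)
    kp_b, des_b = orb.detectAndCompute(cv2.cvtColor(img_b, cv2.COLOR_BGR2GRAY), None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des_a, des_b)
    pts_a = np.float32([kp_a[m.queryIdx].pt for m in matches])
    pts_b = np.float32([kp_b[m.trainIdx].pt for m in matches])
    transform, inliers = cv2.estimateAffinePartial2D(pts_a, pts_b, method=cv2.RANSAC)
    return transform, inliers
```

Here `query`, `references`, `img_a`, and `img_b` are BGR arrays as returned by `cv2.imread`; aligned image pairs can then be handed to a segmentation model for change analysis.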
This repository contains code for the paper "Image-Based Relocalization and Alignment for Long-Term Monitoring of Dynamic Underwater Environments."
We use the package management tool pixi. If you haven't installed pixi yet, run the following command in your terminal:
curl -fsSL https://pixi.sh/install.sh | bash
After installation, restart your terminal or source your shell configuration for the changes to take effect. For more details, refer to the pixi documentation.
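You can check that pixi is on your PATH with:

```bash
pixi --version
```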
Clone the repository and navigate to the project directory:
git clone https://github.com/bev-gorry/underloc.git && cd underloc
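Full usage instructions are still being finalized (see below), but assuming the repository root contains a pixi manifest, the environment would typically be created and activated with:

```bash
pixi install   # resolve and install the project's dependencies
pixi shell     # open a shell with the environment activated
```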
COMING SOON
Datasets in the benchmark are stored in a folder named SQUIDLE-VPR-BENCHMARK, which is created by default in the same parent directory as underloc.
- To add a new dataset, visit SQUIDLE+ and add your sequences to a new collection. Export the collection as a CSV file, ensuring that the required export options are selected.
- Place the CSV file in the folder ~/underloc/datasets/squidle_csv and adjust the corresponding YAML configuration file ~/underloc/arguments/args_dataset.yaml (a hypothetical example entry is sketched after the directory tree below).
- Run the bash script ./run_dataset.sh to create the dataset in the benchmark. Your dataset will be created with the following structure:
~/SQUIDLE-VPR-BENCHMARK
└── YOUR_DATASET
    ├── sequence_01
    │   ├── rgb
    │   │   ├── img_01
    │   │   ├── img_02
    │   │   └── ...
    │   ├── calibration.yaml
    │   ├── groundtruth.csv
    │   ├── groundtruth.txt
    │   └── rgb.txt
    ├── sequence_02
    │   └── ...
    └── ...
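The schema of args_dataset.yaml is defined in the repository; purely as a hypothetical illustration of the second step above, an entry might look like:

```yaml
# Hypothetical keys for illustration only; check
# ~/underloc/arguments/args_dataset.yaml for the actual schema.
dataset_name: YOUR_DATASET                        # folder created in the benchmark
csv_file: datasets/squidle_csv/my_collection.csv  # CSV exported from SQUIDLE+
```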
The directory structure shown above replicates that of VSLAM-LAB, so the same sequences can be used to evaluate the extensive range of SLAM baselines it supports.
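Concretely, VSLAM-LAB sequences follow a TUM-RGB-D-style text format, so rgb.txt and groundtruth.txt presumably pair timestamps with image paths and poses respectively (the values below are illustrative only):

```
# rgb.txt: timestamp [s] and image path
1677649200.00 rgb/img_01
1677649200.50 rgb/img_02

# groundtruth.txt: timestamp tx ty tz qx qy qz qw
1677649200.00 0.120 -0.050 1.300 0.000 0.000 0.000 1.000
```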
The following forked repositories are included in UnderLoc to enable evaluation of VPR methods, local feature matching, and semantic segmentation.
Thanks for using our work. You can cite it as:
@misc{gorry2025imagebasedrelocalizationalignmentlongterm,
title={Image-Based Relocalization and Alignment for Long-Term Monitoring of Dynamic Underwater Environments},
author={Beverley Gorry and Tobias Fischer and Michael Milford and Alejandro Fontan},
year={2025},
eprint={2503.04096},
archivePrefix={arXiv},
primaryClass={cs.RO},
url={https://arxiv.org/abs/2503.04096},
}
We also encourage citing VPR-methods-evaluation, MegaLoc, LightGlue, and SAM2.
This research was partially supported by funding from ARC Laureate Fellowship FL210100156 to MM and ARC DECRA Fellowship DE240100149 to TF. The authors acknowledge continued support from the Queensland University of Technology (QUT) through the Centre for Robotics.
We would particularly like to acknowledge the authors of VPR-methods-evaluation, MegaLoc, LightGlue, SAM2, and VSLAM-LAB.