EgoXtreme Dataset

A Dataset for Robust Object Pose Estimation in Egocentric Views
under Extreme Conditions
Taegyoon Yoon1 Yegyu Han1 Seojin Ji1 Jaewoo Park1 Sojeong Kim1 Taein Kwon2* Hyung-Sin Kim1*
1Seoul National University    2VGG, University of Oxford
*Joint supervision; corresponding authors.
CVPR 2026 (Highlight)
Paper · Dataset (Train/Val) · Dataset (Test) · Code
(Figure: EgoXtreme teaser)

Dataset Overview

EgoXtreme is a novel large-scale dataset designed for robust egocentric 6D object pose estimation under extreme conditions. It covers three scenarios with eight illumination conditions, and smoke is added in selected scenes. Combined with severe motion blur, these conditions make accurate 6D object pose estimation extremely challenging.

Scenarios

EgoXtreme is specifically designed to tackle extreme environmental conditions in egocentric views, introducing highly challenging factors such as fast motion, diverse illumination changes, and smoke. Below are sample sequences captured under these practical scenarios.

(Figure: sample sequences from each scenario)

Maintenance

Maintenance (Smoke)

Sports

Emergency

Emergency (Smoke)

Objects

The EgoXtreme dataset features 13 objects grouped into three scenarios. Below are the 3D models used for 6D pose annotation and evaluation.

Maintenance

Drill
Hammer
Wrench
Saw
Brick

Sports

Pingpong
Tennis
Bat
Hockey
Golf

Emergency

Kit
Fire extinguisher
Flashlight

Results

We evaluated recent state-of-the-art 6D object pose estimation models on the EgoXtreme benchmark, focusing on RGB-only zero-shot approaches. As shown below, while existing models perform reasonably well under standard conditions, their performance drops significantly under extreme factors such as motion blur, low light, and smoke. This highlights the challenging nature of our dataset and leaves significant room for future research.

(Figure: baseline evaluation results)
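For readers unfamiliar with 6D pose evaluation, a widely used metric is ADD (Average Distance of model points): the mean distance between the object's model points transformed by the ground-truth pose and by the estimated pose. The sketch below illustrates the general metric only; it is not the confirmed EgoXtreme evaluation protocol.

```python
import numpy as np

def add_error(R_gt, t_gt, R_est, t_est, model_points):
    """ADD metric: mean Euclidean distance between model points
    transformed by the ground-truth pose (R_gt, t_gt) and by the
    estimated pose (R_est, t_est).

    model_points: (N, 3) array of 3D points on the object model.
    Rotations are (3, 3); translations are (3,) in the same units
    as the model points.
    """
    pts_gt = model_points @ R_gt.T + t_gt    # points under GT pose
    pts_est = model_points @ R_est.T + t_est  # points under estimated pose
    return np.linalg.norm(pts_gt - pts_est, axis=1).mean()
```

A prediction is then typically counted as correct when its ADD falls below a threshold tied to the object's diameter (e.g., 10%).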

Dataset Download

The training and validation data can be downloaded here.

The test data (without ground-truth annotations) can be downloaded here.


For detailed information on the data format and structure, please see our GitHub repository.
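As a rough illustration of working with per-scene pose annotations, the sketch below parses a BOP-style `scene_gt.json` file into per-frame 4x4 object poses. The file name, field names (`obj_id`, `cam_R_m2c`, `cam_t_m2c`), and millimeter units are assumptions borrowed from the common BOP convention, not the confirmed EgoXtreme format; please consult the GitHub repository for the actual layout.

```python
import json
import numpy as np

def load_scene_gt(path):
    """Parse a BOP-style scene_gt.json into {frame_id: [(obj_id, T)]},
    where T is a 4x4 model-to-camera pose matrix in meters.

    NOTE: the file name and annotation fields used here are assumptions
    (standard BOP convention); check the EgoXtreme repository for the
    dataset's actual format.
    """
    with open(path) as f:
        raw = json.load(f)  # {"frame_id": [annotation, ...], ...}

    poses = {}
    for frame_id, annotations in raw.items():
        entries = []
        for ann in annotations:
            T = np.eye(4)
            # Row-major 3x3 rotation, model-to-camera.
            T[:3, :3] = np.asarray(ann["cam_R_m2c"], dtype=float).reshape(3, 3)
            # Translation in millimeters -> meters.
            T[:3, 3] = np.asarray(ann["cam_t_m2c"], dtype=float) / 1000.0
            entries.append((ann["obj_id"], T))
        poses[int(frame_id)] = entries
    return poses
```

With this, `load_scene_gt("scene_gt.json")[0]` would give the list of (object id, pose) pairs for the first frame of a scene, assuming the hypothetical layout above.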

Citation

@inproceedings{egoxtreme2026,
  title={EgoXtreme: A Dataset for Robust Object Pose Estimation in Egocentric Views under Extreme Conditions},
  author={Yoon, Taegyoon and Han, Yegyu and Ji, Seojin and Park, Jaewoo and Kim, Sojeong and Kwon, Taein and Kim, Hyung-Sin},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2026}
}