# DA-2K Evaluation Benchmark
DA-2K is proposed in [Depth Anything V2](https://depth-anything-v2.github.io) to evaluate relative depth estimation capability. It encompasses eight representative scenarios: `indoor`, `outdoor`, `non_real`, `transparent_reflective`, `adverse_style`, `aerial`, `underwater`, and `object`. It consists of 1K diverse, high-quality images and 2K precise pairwise relative depth annotations.

Please refer to our [paper](https://arxiv.org/abs/2406.09414) for details on how this benchmark was constructed.
## Usage

Please first [download the benchmark]().

All annotations are stored in `annotations.json`. The annotation file is a JSON object where each key is the path to an image file, and the value is a list of annotations associated with that image. Each annotation describes two points and identifies which point is closer to the camera. The structure is detailed below:

```
{
  "path/to/image.jpg": [
    {
      "point1": [h1, w1],        // pixel coordinates of the first point
      "point2": [h2, w2],        // pixel coordinates of the second point
      "closer_point": "point1"   // the point closer to the camera: "point1" or "point2"
    },
    ...
  ],
  ...
}
```
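
For illustration, here is a minimal Python sketch of how the annotations could be consumed to score a model by pairwise accuracy, i.e. the fraction of point pairs whose predicted depth ordering matches the label. It assumes the field names sketched above, a `(row, column)` coordinate order, and a hypothetical `predict_depth` function standing in for your model.

```python
import json

def pairwise_accuracy(annotation_path, predict_depth):
    """Score a depth predictor on the pairwise annotations.

    `predict_depth(image_path)` is a hypothetical user-supplied function
    returning a 2D array indexed as depth[row][col], where larger values
    are assumed to mean farther from the camera.
    """
    with open(annotation_path) as f:
        annotations = json.load(f)

    correct = total = 0
    for image_path, pairs in annotations.items():
        depth = predict_depth(image_path)
        for ann in pairs:
            (h1, w1), (h2, w2) = ann["point1"], ann["point2"]  # assumed (row, col)
            # The predicted closer point is the one with the smaller depth value.
            predicted = "point1" if depth[h1][w1] < depth[h2][w2] else "point2"
            correct += predicted == ann["closer_point"]
            total += 1
    return correct / total
```

If your model outputs disparity or inverse depth (larger means closer), flip the comparison accordingly.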
## Citation

If you find this benchmark useful, please consider citing:

```
@article{depth_anything_v2,
  title={Depth Anything V2},
  author={Yang, Lihe and Kang, Bingyi and Huang, Zilong and Zhao, Zhen and Xu, Xiaogang and Feng, Jiashi and Zhao, Hengshuang},
  journal={arXiv:2406.09414},
  year={2024}
}
```