Enhance InFlux dataset card: Add metadata, links, and sample usage
This PR significantly enhances the InFlux dataset card by:
- Adding `task_categories: ['depth-estimation']` to the metadata, reflecting the dataset's role in 3D video understanding.
- Including relevant `tags: ['camera-calibration', 'intrinsics', 'video']` to improve discoverability for researchers interested in camera parameters and video analysis.
- Integrating explicit links to the paper ([Hugging Face Papers page](https://huggingface.co/papers/2510.23589)), the [project page](https://influx.cs.princeton.edu/), and the [code repository](https://github.com/princeton-vl/InFlux) directly at the top of the dataset description for easy access.
- Removing the redundant introductory sentence that previously linked to the website and official repository.
- Incorporating a comprehensive "Sample Usage" section, extracted from the GitHub README, which provides clear instructions for installation, generating submission files, submitting predictions to the evaluation server, and optionally making submissions public. This empowers users to quickly engage with the InFlux benchmark.
---
license: cc-by-4.0
task_categories:
- depth-estimation
tags:
- camera-calibration
- intrinsics
- video
---

# InFlux

We present Intrinsics in Flux (InFlux), a real-world benchmark that provides per-frame ground truth intrinsics annotations for videos with dynamic intrinsics. Compared to prior benchmarks, InFlux captures a wider range of intrinsic variations and scene diversity, featuring 143K+ annotated frames from 386 high-resolution indoor and outdoor videos with dynamic camera intrinsics.

This dataset is presented in the paper [InFlux: A Benchmark for Self-Calibration of Dynamic Intrinsics of Video Cameras](https://huggingface.co/papers/2510.23589).

Project page: https://influx.cs.princeton.edu/
Code: https://github.com/princeton-vl/InFlux

### Viewing `.mp4` Files

The `.mp4` files in this dataset require VLC media player to play locally.

The dataset includes 386 `.mp4` videos and 2 `.json` files in the root directory:

- **Dynamic Intrinsics Videos (`.mp4`)**
  Videos with dynamic intrinsics, moving objects, and camera motion.
- **Frame Counts and Split (`video_frame_count_and_split.json`)**
  Maps each video to its number of frames and whether it belongs to the validation or test split.
- **Validation Ground Truth (`gt_validation_dict.json`)**
  Maps validation video frames to ground truth intrinsics values.

---

#### `video_frame_count_and_split.json`

- `frame_count` – Number of frames in the video.
- `split` – Denotes which split the video belongs to. The value is either `"val"` or `"test"`.

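As a quick illustration of how this file might be used, the sketch below filters validation videos and totals their frames. The entries shown are hypothetical, and only the two documented keys (`frame_count`, `split`) are assumed:

```python
import json

# Illustrative entries following the documented per-video keys
# (`frame_count`, `split`); the video names here are hypothetical.
split_info = {
    "video_001": {"frame_count": 350, "split": "val"},
    "video_002": {"frame_count": 410, "split": "test"},
}
# In practice, load the real file instead:
# split_info = json.load(open("video_frame_count_and_split.json"))

val_videos = [name for name, info in split_info.items() if info["split"] == "val"]
total_val_frames = sum(split_info[name]["frame_count"] for name in val_videos)
print(val_videos, total_val_frames)
```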
#### `gt_validation_dict.json`
**Per-frame keys:**

- `intrinsics_gt` – Ground truth intrinsics from look-up table (LUT) interpolation.
  - `fx`, `fy`, `cx`, `cy`, `k1`, `k2`, `p1`, `p2` denote the intrinsics parameters as specified by the rad-tan (Brown-Conrady) distortion model.
- `intrinsics_gt_extrapolated` – The same as `intrinsics_gt`, but also provides extrapolated intrinsics when the lens metadata falls outside the LUT bounds. It contains the same fields as `intrinsics_gt`.
- `lens_metadata` – Raw physical lens parameters: `focal_length_mm`, `focus_distance_m`.

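For reference, the parameters above enter projection as in the following sketch of the rad-tan (Brown-Conrady) model. This is the standard textbook form of the model, not code taken from the benchmark:

```python
def project_rad_tan(x, y, fx, fy, cx, cy, k1, k2, p1, p2):
    """Project a normalized camera-space point (x, y) to pixel coordinates
    using the rad-tan (Brown-Conrady) distortion model."""
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2 * r2  # radial distortion factor
    # Tangential (p1, p2) terms added on top of the radial term
    x_d = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    y_d = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    # Apply focal lengths and principal point
    return fx * x_d + cx, fy * y_d + cy

# With zero distortion, projection reduces to the pinhole model:
u, v = project_rad_tan(0.1, -0.2, fx=1000, fy=1000, cx=640, cy=360,
                       k1=0, k2=0, p1=0, p2=0)
print(u, v)  # 740.0 160.0
```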
### Sample Usage

To evaluate your camera intrinsics prediction method on the InFlux benchmark:

#### Installation

For basic functionality (submitting results):

```bash
conda create --name influx python=3.10
conda activate influx
pip install .
```

#### Submission Format

First, generate a single submission JSON with the following format:

**`submission.json`**:

```json
{
  "submission_metadata": {
    "method_name": "your_method_name",
    "intrinsics_type": "rad-tan" // or "mei"
  },
  "test_video1": {
    "0": { // Frame index as a string
      "fx": 0.0,
      "fy": 0.0,
      "cx": 0.0,
      "cy": 0.0,
      "k1": 0.0,
      "k2": 0.0,
      "p1": 0.0,
      "p2": 0.0
    },
    "1": {
      "fx": 0.0,
      "fy": 0.0,
      "cx": 0.0,
      "cy": 0.0,
      "k1": 0.0,
      "k2": 0.0,
      "p1": 0.0,
      "p2": 0.0
    }
    // ... continue for all frames in the test video
  },
  "test_video2": {
    // same format for other test videos
  }
}
```

**Notes**:
- All frame indices must be strings (e.g., "0", "1", "2", …). Do not use leading zeros.
- `intrinsics_type` must be either "rad-tan" or "mei".
- If your method uses a different intrinsics type, please contact us at influxbenchmark@gmail.com.

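The submission file can also be assembled programmatically. The sketch below fills the required rad-tan fields with placeholder values; the video name, frame count, and predictions here are hypothetical, and a real submission must cover every test video and frame:

```python
import json

RAD_TAN_KEYS = ["fx", "fy", "cx", "cy", "k1", "k2", "p1", "p2"]

def make_submission(method_name, predictions):
    """predictions: {video_name: [per-frame dicts containing the rad-tan keys]}"""
    submission = {"submission_metadata": {"method_name": method_name,
                                          "intrinsics_type": "rad-tan"}}
    for video, frames in predictions.items():
        # Frame indices are strings without leading zeros, per the notes above.
        submission[video] = {str(i): {k: frame[k] for k in RAD_TAN_KEYS}
                             for i, frame in enumerate(frames)}
    return submission

# Hypothetical two-frame prediction for one test video
preds = {"test_video1": [dict.fromkeys(RAD_TAN_KEYS, 0.0)] * 2}
sub = make_submission("my_method", preds)
with open("submission.json", "w") as f:
    json.dump(sub, f, indent=2)
```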
To generate an example submission JSON that is formatted correctly but needs values filled in, run the following command:

```bash
influx-generate-sample \
    --intr-type <rad-tan|mei> \
    --output <path/to/output_file.json>
```

#### Submit Your Results

Submit your predictions to the evaluation server using the command below, replacing the placeholders:

```bash
influx-upload \
    --email your_email \
    --path path_to_your_submission_json \
    --method_name your_method_name
```

**Important**: the `--method_name` argument must exactly match the `method_name` specified in the `submission_metadata` section of your JSON file.

After submission, a validation function will check your JSON file. To ensure it passes:
- Avoid special characters or spaces in the **file name** and **method name**.
- Include **all test videos and frames** in your submission.
- Provide **all required intrinsics** for the specified `intrinsics_type` for every frame.
- Ensure that `fx`, `fy`, `cx`, and `cy` values are non-negative.

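Before uploading, a quick local check along these lines can catch the most common failures. This is an illustrative pre-check based on the rules listed above, not the server's actual validation code, and it only knows the rad-tan key set:

```python
REQUIRED = {"rad-tan": ["fx", "fy", "cx", "cy", "k1", "k2", "p1", "p2"]}

def precheck(submission):
    """Return a list of problems found in a submission dict (empty if none)."""
    problems = []
    meta = submission.get("submission_metadata", {})
    if meta.get("intrinsics_type") not in ("rad-tan", "mei"):
        problems.append("intrinsics_type must be 'rad-tan' or 'mei'")
    keys = REQUIRED.get(meta.get("intrinsics_type"), [])
    for video, frames in submission.items():
        if video == "submission_metadata":
            continue
        for idx, intr in frames.items():
            # Frame indices must be strings without leading zeros.
            if not isinstance(idx, str) or (idx != "0" and idx.startswith("0")):
                problems.append(f"{video}: bad frame index {idx!r}")
            for k in keys:
                if k not in intr:
                    problems.append(f"{video}[{idx}]: missing {k}")
            for k in ("fx", "fy", "cx", "cy"):
                if intr.get(k, 0) < 0:
                    problems.append(f"{video}[{idx}]: {k} must be non-negative")
    return problems

bad = {"submission_metadata": {"method_name": "m", "intrinsics_type": "rad-tan"},
       "test_video1": {"0": {"fx": -1.0, "fy": 1.0, "cx": 1.0, "cy": 1.0,
                             "k1": 0.0, "k2": 0.0, "p1": 0.0, "p2": 0.0}}}
print(precheck(bad))  # flags the negative fx
```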
#### Making Your Submission Public

To make your submission public, run:

```bash
influx-make-public \
    --id submission_id \
    --email your_email \
    --anonymous False \
    --method_name your_method_name \
    --publication "your publication name" \
    --url_publication "https://your_publication" \
    --url_code "https://your_code"
```

You may set `"Anonymous"` as the publication name if the work is under review. The `url_publication` and `url_code` fields are optional.