Update model card: add pipeline tag, paper and code links
#1 · opened by nielsr (HF Staff)

README.md CHANGED
@@ -1,12 +1,21 @@
 ---
-license: bsd-3-clause
 language:
 - en
+license: bsd-3-clause
 tags:
 - scene-flow
 - point-cloud
 - codebase
 - 3d-vision
+pipeline_tag: robotics
 ---
+
+# DeltaFlow: An Efficient Multi-frame Scene Flow Estimation Method
+
+This repository contains the model weights for **DeltaFlow**, presented in the paper [DeltaFlow: An Efficient Multi-frame Scene Flow Estimation Method](https://huggingface.co/papers/2508.17054).
+
+The code is open-sourced along with trained model weights at the official GitHub repository: [https://github.com/Kin-Zhang/DeltaFlow](https://github.com/Kin-Zhang/DeltaFlow).
+
+---
 
 <p align="center">
@@ -19,8 +28,8 @@ tags:
 
 If you find [*OpenSceneFlow*](https://github.com/KTH-RPL/OpenSceneFlow) useful to your research, please cite [**our works**](#cite-us) and give [a star](https://github.com/KTH-RPL/OpenSceneFlow) as encouragement.
 
-[*OpenSceneFlow*](https://github.com/KTH-RPL/OpenSceneFlow) is a codebase for point cloud scene flow estimation.
-Please check the usage on [KTH-RPL/OpenSceneFlow](https://github.com/KTH-RPL/OpenSceneFlow).
+[*OpenSceneFlow*](https://github.com/KTH-RPL/OpenSceneFlow) is a codebase for point cloud scene flow estimation.
+Please check the usage on [KTH-RPL/OpenSceneFlow](https://github.com/KTH-RPL/OpenSceneFlow).
 Here we upload our demo data and checkpoints for the community.
 
 ## One repository, All methods!
@@ -37,7 +46,7 @@ Officially:
 - [x] [DeFlow](https://arxiv.org/abs/2401.16122) (Ours): ICRA 2024
 
 <details> <summary>Reorganized into our codebase:</summary>
-
+
 - [x] [FastFlow3d](https://arxiv.org/abs/2103.01306): RA-L 2021
 - [x] [ZeroFlow](https://arxiv.org/abs/2305.10424): ICLR 2024; their pre-trained weights can be converted into our format easily through [the script](https://github.com/KTH-RPL/OpenSceneFlow/tools/zerof2ours.py).
 - [x] [NSFP](https://arxiv.org/abs/2111.01253): NeurIPS 2021, 3x faster than the original version thanks to [our CUDA speed-up](https://github.com/KTH-RPL/OpenSceneFlow/assets/cuda/README.md), with the same (slightly better) performance. Done coding, public after review.
@@ -49,13 +58,13 @@ Officially:
 
 ## Notes
 
-The tree of uploaded files:
+The tree of uploaded files:
 * [ModelName_best].ckpt: the model evaluated on the public leaderboard, either provided by the authors or retrained by us with the best parameters.
 * [demo-data-v2.zip](https://huggingface.co/kin-zhang/OpenSceneFlow/blob/main/demo_data.zip): 1.2 GB, a mini-dataset for users to quickly run the train/val code. Check usage in [this section](https://github.com/KTH-RPL/SeFlow?tab=readme-ov-file#1-run--train).
 * [waymo_map.tar.gz](https://huggingface.co/kin-zhang/OpenSceneFlow/blob/main/waymo_map.tar.gz): needed to process Waymo data, with ground segmentation included, into the unified h5 file. Check usage in [this README](https://github.com/KTH-RPL/SeFlow/blob/main/dataprocess/README.md#waymo-dataset).
 * [demo_data.zip](https://huggingface.co/kin-zhang/OpenSceneFlow/blob/main/demo_data.zip): 1st version (will be deprecated later), 613 MB, a mini-dataset for users to quickly run the train/val code. Check usage in [this section](https://github.com/KTH-RPL/OpenSceneFlow?tab=readme-ov-file#1-run--train).
 
-All test result reports can be found in the [v2 leaderboard](https://github.com/KTH-RPL/DeFlow/discussions/6)
+All test result reports can be found in the [v2 leaderboard](https://github.com/KTH-RPL/DeFlow/discussions/6)
 and [v1 leaderboard](https://github.com/KTH-RPL/DeFlow/discussions/2).
 
 ## Cite Us
@@ -74,8 +83,8 @@ and [v1 leaderboard](https://github.com/KTH-RPL/DeFlow/discussions/2).
 }
 @inproceedings{zhang2024deflow,
   author={Zhang, Qingwen and Yang, Yi and Fang, Heng and Geng, Ruoyu and Jensfelt, Patric},
-  booktitle={2024 IEEE International Conference on Robotics and Automation (ICRA)},
-  title={{DeFlow}: Decoder of Scene Flow Network in Autonomous Driving},
+  booktitle={2024 IEEE International Conference on Robotics and Automation (ICRA)},
+  title={{DeFlow}: Decoder of Scene Flow Network in Autonomous Driving},
   year={2024},
   pages={2105-2111},
   doi={10.1109/ICRA57147.2024.10610278}
@@ -105,8 +114,8 @@ And our excellent collaborators' works are as follows:
 }
 @article{kim2025flow4d,
   author={Kim, Jaeyeul and Woo, Jungwan and Shin, Ukcheol and Oh, Jean and Im, Sunghoon},
-  journal={IEEE Robotics and Automation Letters},
-  title={Flow4D: Leveraging 4D Voxel Network for LiDAR Scene Flow Estimation},
+  journal={IEEE Robotics and Automation Letters},
+  title={Flow4D: Leveraging 4D Voxel Network for LiDAR Scene Flow Estimation},
   year={2025},
   volume={10},
   number={4},