# Objectron Videos Mirror

This repository is a videos-only mirror of the official Objectron dataset, prepared for hosting on Hugging Face.

Official source repository:
- https://github.com/google-research-datasets/Objectron

## Purpose

- Provide a clean and upload-friendly copy of Objectron video files.
- Keep the directory layout aligned with official dataset conventions.
- Simplify distribution for downstream training and research workflows.

## What Is Included

This mirror currently contains only video files.

Included:
- `videos/<class>/batch-<i>/<j>/<video>.MOV`

Not included in this mirror:
- annotation protobufs (for example `geometry.pbdata`)
- AR metadata protobufs
- tf.records / sequence examples
- index files and train/test split files
- parsing/evaluation scripts

For full dataset assets and tooling, use the official repository and storage paths.

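Because this mirror ships only the `videos/` tree, a subset can be fetched without downloading everything. A minimal sketch using `huggingface_hub`; the repository id passed to the helper is a placeholder assumption, not the actual id of this mirror:

```python
def download_class_videos(repo_id: str, class_name: str,
                          local_dir: str = "objectron_videos") -> str:
    """Download only the .MOV files for one object class from this mirror.

    `repo_id` must be the Hugging Face dataset id of this mirror,
    e.g. "your-namespace/objectron-videos" (placeholder, not the real id).
    """
    # Imported lazily so defining the helper does not require the hub client.
    from huggingface_hub import snapshot_download

    return snapshot_download(
        repo_id=repo_id,
        repo_type="dataset",
        allow_patterns=[f"videos/{class_name}/*"],  # e.g. "chair"
        local_dir=local_dir,
    )
```

`allow_patterns` restricts the transfer to matching paths, so fetching a single class avoids pulling the full video set.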
## Directory Layout

The video files follow the official Objectron layout pattern:

- `videos/<class>/batch-<i>/<j>/<video>.MOV`

Current class folders may include:
- `bike`
- `book`
- `bottle`
- `camera`
- `cereal_box`
- `chair`
- `cup`
- `laptop`
- `shoe`

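Because the layout is uniform, each file path fully identifies its sample. A small sketch that parses `<class>`, `<i>`, and `<j>` out of a mirror path (the helper and its names are ours for illustration, not part of the official Objectron tooling):

```python
import re
from typing import NamedTuple


class VideoKey(NamedTuple):
    class_name: str
    batch: int
    sequence: int


# Matches the mirror layout: videos/<class>/batch-<i>/<j>/<video>.MOV
_PATH_RE = re.compile(
    r"videos/(?P<cls>[^/]+)/batch-(?P<i>\d+)/(?P<j>\d+)/[^/]+\.MOV$"
)


def parse_video_path(path: str) -> VideoKey:
    """Extract (class, batch, sequence) from a video path in this mirror."""
    m = _PATH_RE.search(path)
    if m is None:
        raise ValueError(f"not an Objectron video path: {path!r}")
    return VideoKey(m.group("cls"), int(m.group("i")), int(m.group("j")))


# Example: parse_video_path("videos/chair/batch-7/3/video.MOV")
#   -> VideoKey(class_name="chair", batch=7, sequence=3)
```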
## License

This repository follows the official Objectron licensing terms.

Objectron is released under:
- Computational Use of Data Agreement 1.0 (C-UDA-1.0)
- https://github.com/microsoft/Computational-Use-of-Data-Agreement

A copy of the license is included in [LICENSE](LICENSE).

## Attribution

If you use Objectron data, please cite the official Objectron paper and follow attribution guidance from the official repository:
- https://github.com/google-research-datasets/Objectron

## Acknowledgment

We thank the Objectron team and the official maintainers for providing this dataset and related resources. These contributions were instrumental in the successful completion of our work: [ConsID-Gen](https://mingyang.me/ConsID-Gen/).

Objectron is a large-scale, object-centric video dataset with pose annotations and has made important contributions to 3D understanding and related vision research.

This repository is simply a videos-only mirror intended for easier access and distribution.

## Disclaimer

- This repository is not an official Google release.
- We cannot guarantee that the number of videos in this mirror exactly matches the counts reported in the original Objectron paper or official storage.
- The contents here include only the video files obtained through our local download process.

## Citation

If you find the original Objectron dataset useful, please cite the official paper.

```bibtex
@article{objectron2021,
  title={Objectron: A Large Scale Dataset of Object-Centric Videos in the Wild with Pose Annotations},
  author={Adel Ahmadyan and Liangkai Zhang and Artsiom Ablavatski and Jianing Wei and Matthias Grundmann},
  journal={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  year={2021}
}
```

Objectron is not an officially supported Google product. If you have any questions about the dataset itself, you can email the Objectron team at objectron@google.com or join their mailing list at objectron@googlegroups.com.

If this mirror is useful for your research, please also consider citing our work:

```bibtex
@misc{wu2026considgenviewconsistentidentitypreservingimagetovideo,
  title={ConsID-Gen: View-Consistent and Identity-Preserving Image-to-Video Generation},
  author={Mingyang Wu and Ashirbad Mishra and Soumik Dey and Shuo Xing and Naveen Ravipati and Hansi Wu and Binbin Li and Zhengzhong Tu},
  year={2026},
  eprint={2602.10113},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2602.10113},
}
```