Video-Text-to-Text
Transformers
TensorBoard
Safetensors
4DThinker
dynamic-spatial-reasoning
vision-language-model
latent-reasoning
Instructions to use jankin123/4DThinker-3B with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use jankin123/4DThinker-3B with Transformers:
```python
# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained("jankin123/4DThinker-3B", dtype="auto")
```

- Notebooks
- Google Colab
- Kaggle