{ "base_model": "HuggingFaceTB/SmolVLM-500M-Instruct", "tree": [ { "model_id": "HuggingFaceTB/SmolVLM-500M-Instruct", "gated": "False", "card": "---\nlibrary_name: transformers\nlicense: apache-2.0\ndatasets:\n- HuggingFaceM4/the_cauldron\n- HuggingFaceM4/Docmatix\npipeline_tag: image-text-to-text\nlanguage:\n- en\nbase_model:\n- HuggingFaceTB/SmolLM2-360M-Instruct\n- google/siglip-base-patch16-512\n---\n\n\"Image\n\n# SmolVLM-500M\n\nSmolVLM-500M is a tiny multimodal model, member of the SmolVLM family. It accepts arbitrary sequences of image and text inputs to produce text outputs. It's designed for efficiency. SmolVLM can answer questions about images, describe visual content, or transcribe text. Its lightweight architecture makes it suitable for on-device applications while maintaining strong performance on multimodal tasks. It can run inference on one image with 1.23GB of GPU RAM.\n\n## Model Summary\n\n- **Developed by:** Hugging Face \ud83e\udd17\n- **Model type:** Multi-modal model (image+text)\n- **Language(s) (NLP):** English\n- **License:** Apache 2.0\n- **Architecture:** Based on [Idefics3](https://huggingface.co/HuggingFaceM4/Idefics3-8B-Llama3) (see technical summary)\n\n## Resources\n\n- **Demo:** [SmolVLM-256 Demo](https://huggingface.co/spaces/HuggingFaceTB/SmolVLM-256M-Demo)\n- **Blog:** [Blog post](https://huggingface.co/blog/smolvlm)\n\n## Uses\n\nSmolVLM can be used for inference on multimodal (image + text) tasks where the input comprises text queries along with one or more images. Text and images can be interleaved arbitrarily, enabling tasks like image captioning, visual question answering, and storytelling based on visual content. 
The model does not support image generation.\n\nTo fine-tune SmolVLM on a specific task, you can follow [the fine-tuning tutorial](https://github.com/huggingface/smollm/blob/main/vision/finetuning/Smol_VLM_FT.ipynb).\n\n## Evaluation\n\n\n\"Benchmarks\"\n\n\n### Technical Summary\n\nSmolVLM leverages the lightweight SmolLM2 language model to provide a compact yet powerful multimodal experience. It introduces several changes compared to the larger SmolVLM 2.2B model:\n\n- **Image compression:** We apply more aggressive image compression than in Idefics3 and SmolVLM-2.2B, enabling faster inference and lower RAM usage.\n- **Visual Token Encoding:** SmolVLM-500M uses 64 visual tokens to encode image patches of size 512\u00d7512. Larger images are divided into patches, each encoded separately, enhancing efficiency without compromising performance.\n- **New special tokens:** We added new special tokens to divide the subimages. This allows for more efficient tokenization of the images.\n- **Smoller vision encoder:** We went from a 400M-parameter SigLIP vision encoder to a much smaller 93M encoder.\n- **Larger image patches:** We are now passing patches of 512\u00d7512 to the vision encoder, instead of 384\u00d7384 as in the larger SmolVLM. 
This allows the information to be encoded more efficiently.\n\nMore details about the training and architecture are available in our technical report.\n\n### How to get started\n\nYou can use transformers to load, infer and fine-tune SmolVLM.\n\n```python\nimport torch\nfrom PIL import Image\nfrom transformers import AutoProcessor, AutoModelForVision2Seq\nfrom transformers.image_utils import load_image\n\nDEVICE = \"cuda\" if torch.cuda.is_available() else \"cpu\"\n\n# Load images\nimage = load_image(\"https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg\")\n\n# Initialize processor and model\nprocessor = AutoProcessor.from_pretrained(\"HuggingFaceTB/SmolVLM-500M-Instruct\")\nmodel = AutoModelForVision2Seq.from_pretrained(\n \"HuggingFaceTB/SmolVLM-500M-Instruct\",\n torch_dtype=torch.bfloat16,\n _attn_implementation=\"flash_attention_2\" if DEVICE == \"cuda\" else \"eager\",\n).to(DEVICE)\n\n# Create input messages\nmessages = [\n {\n \"role\": \"user\",\n \"content\": [\n {\"type\": \"image\"},\n {\"type\": \"text\", \"text\": \"Can you describe this image?\"}\n ]\n },\n]\n\n# Prepare inputs\nprompt = processor.apply_chat_template(messages, add_generation_prompt=True)\ninputs = processor(text=prompt, images=[image], return_tensors=\"pt\")\ninputs = inputs.to(DEVICE)\n\n# Generate outputs\ngenerated_ids = model.generate(**inputs, max_new_tokens=500)\ngenerated_texts = processor.batch_decode(\n generated_ids,\n skip_special_tokens=True,\n)\n\nprint(generated_texts[0])\n\"\"\"\nAssistant: The image depicts a cityscape featuring a prominent landmark, the Statue of Liberty, prominently positioned on Liberty Island. The statue is a green, humanoid figure with a crown atop its head and is situated on a small island surrounded by water. The statue is characterized by its large, detailed structure, with a statue of a woman holding a torch above her head and a tablet in her left hand. 
The statue is surrounded by a small, rocky island, which is partially visible in the foreground.\nIn the background, the cityscape is dominated by numerous high-rise buildings, which are densely packed and vary in height. The buildings are primarily made of glass and steel, reflecting the sunlight and creating a bright, urban skyline. The skyline is filled with various architectural styles, including modern skyscrapers and older, more traditional buildings.\nThe water surrounding the island is calm, with a few small boats visible, indicating that the area is likely a popular tourist destination. The water is a deep blue, suggesting that it is a large body of water, possibly a river or a large lake.\nIn the foreground, there is a small strip of land with trees and grass, which adds a touch of natural beauty to the urban landscape. The trees are green, indicating that it is likely spring or summer.\nThe image captures a moment of tranquility and reflection, as the statue and the cityscape come together to create a harmonious and picturesque scene. The statue's presence in the foreground draws attention to the city's grandeur, while the calm water and natural elements in the background provide a sense of peace and serenity.\nIn summary, the image showcases the Statue of Liberty, a symbol of freedom and democracy, set against a backdrop of a bustling cityscape. The statue is a prominent and iconic representation of human achievement, while the cityscape is a testament to human ingenuity and progress. 
The image captures the beauty and complexity of urban life, with the statue serving as a symbol of hope and freedom, while the cityscape provides a glimpse into the modern world.\n\"\"\"\n```\n\n\n### Model optimizations\n\n**Precision**: For better performance, load and run the model in half-precision (`torch.bfloat16`) if your hardware supports it.\n\n```python\nfrom transformers import AutoModelForVision2Seq\nimport torch\n\nmodel = AutoModelForVision2Seq.from_pretrained(\n \"HuggingFaceTB/SmolVLM-Instruct\",\n torch_dtype=torch.bfloat16\n).to(\"cuda\")\n```\n\nYou can also load SmolVLM with 4/8-bit quantization using bitsandbytes, torchao or Quanto. Refer to [this page](https://huggingface.co/docs/transformers/en/main_classes/quantization) for other options.\n\n```python\nfrom transformers import AutoModelForVision2Seq, BitsAndBytesConfig\nimport torch\n\nquantization_config = BitsAndBytesConfig(load_in_8bit=True)\nmodel = AutoModelForVision2Seq.from_pretrained(\n \"HuggingFaceTB/SmolVLM-Instruct\",\n quantization_config=quantization_config,\n)\n```\n\n**Vision Encoder Efficiency**: Adjust the image resolution by setting `size={\"longest_edge\": N*512}` when initializing the processor, where N is your desired value. The default `N=4` works well, which results in input images of\nsize 2048\u00d72048. Decreasing N can save GPU memory and is appropriate for lower-resolution images. This is also useful if you want to fine-tune on videos.\n\n\n## Misuse and Out-of-scope Use\n\nSmolVLM is not intended for high-stakes scenarios or critical decision-making processes that affect an individual's well-being or livelihood. The model may produce content that appears factual but may not be accurate. 
Misuse includes, but is not limited to:\n\n- Prohibited Uses:\n - Evaluating or scoring individuals (e.g., in employment, education, credit)\n - Critical automated decision-making\n - Generating unreliable factual content\n- Malicious Activities:\n - Spam generation\n - Disinformation campaigns\n - Harassment or abuse\n - Unauthorized surveillance\n\n### License\n\nSmolVLM is built upon [SigLIP](https://huggingface.co/google/siglip-base-patch16-512) as the image encoder and [SmolLM2](https://huggingface.co/HuggingFaceTB/SmolLM2-360M-Instruct) as the text decoder.\n\nWe release the SmolVLM checkpoints under the Apache 2.0 license.\n\n## Training Details\n\n### Training Data\n\nThe training data comes from the [The Cauldron](https://huggingface.co/datasets/HuggingFaceM4/the_cauldron) and [Docmatix](https://huggingface.co/datasets/HuggingFaceM4/Docmatix) datasets, with emphasis on document understanding (25%) and image captioning (18%), while maintaining balanced coverage across other crucial capabilities like visual reasoning, chart comprehension, and general instruction following.\n\n# Citation information\nYou can cite us in the following way:\n```bibtex\n@article{marafioti2025smolvlm,\n title={SmolVLM: Redefining small and efficient multimodal models}, \n author={Andr\u00e9s Marafioti and Orr Zohar and Miquel Farr\u00e9 and Merve Noyan and Elie Bakouch and Pedro Cuenca and Cyril Zakka and Loubna Ben Allal and Anton Lozhkov and Nouamane Tazi and Vaibhav Srivastav and Joshua Lochner and Hugo Larcher and Mathieu Morlon and Lewis Tunstall and Leandro von Werra and Thomas Wolf},\n journal={arXiv preprint arXiv:2504.05299},\n year={2025}\n}\n```\n\n", "metadata": "\"N/A\"", "depth": 0, "children": [ "vidore/ColSmolVLM-Instruct-500M-base", "carles-mzms/tyrynzysmegalodon", "lvxiangyu11/smolvlm-instruct-trl-sft-ChartQA", "hasan-farooq/SmolVLM-500M-Instruct-vqav2", "hasan-farooq/SmolVLM-500M-Instruct-vqav3", "hasan-farooq/SmolVLM-500M-Instruct-med-vqav1", 
"aadhibest/smolvlm-500m-instruct-13-03-2025", "chiaky21/SmolVLM-500M-Instruct-vqav2", "racineai/Flantier-SmolVLM-500M-dse", "Soundappan123/smolvlm-dpo", "BIOMEDICA/BMC-smolvlm1-500M", "Pantelismak/smolvlm_cxr", "JoseferEins/SmolVLM-500M-Instruct-fer0" ], "children_count": 13, "adapters": [ "VishalD1234/SmolVLM-500M-Instruct-vqav2", "sasikaran04/SmolVLM-500M-Instruct-vqav2", "Hirai-Labs/FT-SmolVLM-500M-Instruct-ALPR", "revitotan/FT-SmolVLM-500M-Instruct-Helmet", "dkhanh/SmolVLM-500M-Instruct-earths", "dkhanh/SmolVLM-500M-Instruct-earth-v0", "dkhanh/SmolVLM-500M-Instruct-earth-v1", "dkhanh/SmolVLM-500M-Instruct-earths-v1", "samlucas/smolvlm_500m-parking_occupancy-PKLot-instruct-without-context-without-expert", "samlucas/smolvlm_500m-parking_occupancy-PKLot-instruct-with-context-without-expert", "samlucas/smolvlm_500m-parking_occupancy-PKLot-instruct-with-context-with-expert", "samlucas/smolvlm_500m-parking_occupancy-PKLot-instruct-without-context-with-expert", "samlucas/smolvlm_500m-parking_occupancy-CNRPark-instruct-with-context-with-expert", "samlucas/smolvlm_500m-parking_occupancy-CNRPark-instruct-with-context-without-expert", "samlucas/smolvlm_500m-parking_occupancy-CNRPark-instruct-without-context-with-expert", "samlucas/smolvlm_500m-parking_occupancy-CNRPark-instruct-without-context-without-expert", "bilal1998/SmolVLM-500M-Instruct-vqav2" ], "adapters_count": 17, "quantized": [ "HuggingFaceTB/SmolVLM2-500M-Video-Instruct", "moot20/SmolVLM-500M-Instruct-MLX-4bits", "moot20/SmolVLM-500M-Instruct-MLX-6bits", "moot20/SmolVLM-500M-Instruct-MLX-8bits", "moot20/SmolVLM-500M-Instruct-MLX", "ggml-org/SmolVLM-500M-Instruct-GGUF", "mradermacher/SmolVLM-500M-Instruct-GGUF", "mradermacher/SmolVLM-500M-Instruct-i1-GGUF", "VyoJ/SmolVLM-500M-Instruct-be-GGUF" ], "quantized_count": 9, "merges": [], "merges_count": 0, "total_derivatives": 39, "spaces": [], "spaces_count": 0, "parents": [], "base_model": "HuggingFaceTB/SmolVLM-500M-Instruct", "base_model_relation": "base" }, { 
"model_id": "vidore/ColSmolVLM-Instruct-500M-base", "gated": "False", "card": "---\nlicense: mit\nlibrary_name: colpali\nbase_model: HuggingFaceTB/SmolVLM-500M-Instruct\nlanguage:\n- en\ntags:\n- colsmolvlm\n- vidore-experimental\n- vidore\n---\n# ColSmolVLM-500M-Instruct: Visual Retriever based on SmolVLM-500M-Instruct with ColBERT strategy\n\nColSmolVLM is a model based on a novel model architecture and training strategy based on Vision Language Models (VLMs) to efficiently index documents from their visual features.\nIt is a SmolVLM extension that generates [ColBERT](https://arxiv.org/abs/2004.12832)- style multi-vector representations of text and images. \nIt was introduced in the paper [ColPali: Efficient Document Retrieval with Vision Language Models](https://arxiv.org/abs/2407.01449) and first released in [this repository](https://github.com/ManuelFay/colpali)\n\nThis version is the untrained base version to guarantee deterministic projection layer initialization.\n\n\n## License\n\nColSmol's vision language backbone model (ColSmolVLM) is under `apache2.0` license. 
The adapters attached to the model are under MIT license.\n\n## Contact\n\n- Manuel Faysse: manuel.faysse@illuin.tech\n- Hugues Sibille: hugues.sibille@illuin.tech\n- Tony Wu: tony.wu@illuin.tech\n\n## Citation\n\nIf you use any datasets or models from this organization in your research, please cite the original dataset as follows:\n\n```bibtex\n@misc{faysse2024colpaliefficientdocumentretrieval,\n title={ColPali: Efficient Document Retrieval with Vision Language Models}, \n author={Manuel Faysse and Hugues Sibille and Tony Wu and Bilel Omrani and Gautier Viaud and C\u00e9line Hudelot and Pierre Colombo},\n year={2024},\n eprint={2407.01449},\n archivePrefix={arXiv},\n primaryClass={cs.IR},\n url={https://arxiv.org/abs/2407.01449}, \n}\n```", "metadata": "\"N/A\"", "depth": 1, "children": [ "vidore/colSmol-500M", "thoddnn/colSmol-500M", "ingenio/IndoColSmol-500M" ], "children_count": 3, "adapters": [ "Oysiyl/colSmol-500M_ufo" ], "adapters_count": 1, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 4, "spaces": [], "spaces_count": 0, "parents": [ "HuggingFaceTB/SmolVLM-500M-Instruct" ], "base_model": "vidore/ColSmolVLM-Instruct-500M-base", "base_model_relation": "base" }, { "model_id": "carles-mzms/tyrynzysmegalodon", "gated": "False", "card": "---\nlicense: cc-by-3.0\nlanguage:\n- es\n- pa\n- en\n- ca\n- fr\n- it\nbase_model:\n- HuggingFaceTB/SmolVLM-500M-Instruct\n---", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "HuggingFaceTB/SmolVLM-500M-Instruct" ], "base_model": "carles-mzms/tyrynzysmegalodon", "base_model_relation": "base" }, { "model_id": "lvxiangyu11/smolvlm-instruct-trl-sft-ChartQA", "gated": "False", "card": "---\nbase_model: HuggingFaceTB/SmolVLM-500M-Instruct\nlibrary_name: transformers\nmodel_name: 
smolvlm-instruct-trl-sft-ChartQA\ntags:\n- generated_from_trainer\n- trl\n- sft\nlicence: license\n---\n\n# Model Card for smolvlm-instruct-trl-sft-ChartQA\n\nThis model is a fine-tuned version of [HuggingFaceTB/SmolVLM-500M-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM-500M-Instruct).\nIt has been trained using [TRL](https://github.com/huggingface/trl).\n\n## Quick start\n\n```python\nfrom transformers import pipeline\n\nquestion = \"If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?\"\ngenerator = pipeline(\"text-generation\", model=\"lvxiangyu11/smolvlm-instruct-trl-sft-ChartQA\", device=\"cuda\")\noutput = generator([{\"role\": \"user\", \"content\": question}], max_new_tokens=128, return_full_text=False)[0]\nprint(output[\"generated_text\"])\n```\n\n## Training procedure\n\n \n\n\nThis model was trained with SFT.\n\n### Framework versions\n\n- TRL: 0.14.0\n- Transformers: 4.48.2\n- Pytorch: 2.5.1+cu121\n- Datasets: 3.2.0\n- Tokenizers: 0.21.0\n\n## Citations\n\n\n\nCite TRL as:\n \n```bibtex\n@misc{vonwerra2022trl,\n\ttitle = {{TRL: Transformer Reinforcement Learning}},\n\tauthor = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou\u00e9dec},\n\tyear = 2020,\n\tjournal = {GitHub repository},\n\tpublisher = {GitHub},\n\thowpublished = {\\url{https://github.com/huggingface/trl}}\n}\n```", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "HuggingFaceTB/SmolVLM-500M-Instruct" ], "base_model": "lvxiangyu11/smolvlm-instruct-trl-sft-ChartQA", "base_model_relation": "base" }, { "model_id": "hasan-farooq/SmolVLM-500M-Instruct-vqav2", "gated": "False", "card": 
"---\nlibrary_name: transformers\nlicense: apache-2.0\nbase_model: HuggingFaceTB/SmolVLM-500M-Instruct\ntags:\n- generated_from_trainer\nmodel-index:\n- name: SmolVLM-500M-Instruct-vqav2\n results: []\n---\n\n\n\n# SmolVLM-500M-Instruct-vqav2\n\nThis model is a fine-tuned version of [HuggingFaceTB/SmolVLM-500M-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM-500M-Instruct) on an unknown dataset.\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 16\n- optimizer: Use OptimizerNames.ADAMW_HF with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 50\n- num_epochs: 3\n\n### Training results\n\n\n\n### Framework versions\n\n- Transformers 4.48.2\n- Pytorch 2.5.1+cu124\n- Datasets 3.2.0\n- Tokenizers 0.21.0\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "HuggingFaceTB/SmolVLM-500M-Instruct" ], "base_model": "hasan-farooq/SmolVLM-500M-Instruct-vqav2", "base_model_relation": "base" }, { "model_id": "hasan-farooq/SmolVLM-500M-Instruct-vqav3", "gated": "False", "card": "---\nlibrary_name: transformers\nlicense: apache-2.0\nbase_model: HuggingFaceTB/SmolVLM-500M-Instruct\ntags:\n- generated_from_trainer\nmodel-index:\n- name: SmolVLM-500M-Instruct-vqav3\n results: []\n---\n\n\n\n# SmolVLM-500M-Instruct-vqav3\n\nThis model is a fine-tuned version of 
[HuggingFaceTB/SmolVLM-500M-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM-500M-Instruct) on an unknown dataset.\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Use OptimizerNames.ADAMW_HF with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 50\n- num_epochs: 10\n\n### Training results\n\n\n\n### Framework versions\n\n- Transformers 4.48.2\n- Pytorch 2.5.1+cu124\n- Datasets 3.2.0\n- Tokenizers 0.21.0\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "HuggingFaceTB/SmolVLM-500M-Instruct" ], "base_model": "hasan-farooq/SmolVLM-500M-Instruct-vqav3", "base_model_relation": "base" }, { "model_id": "hasan-farooq/SmolVLM-500M-Instruct-med-vqav1", "gated": "False", "card": "---\nlibrary_name: transformers\nlicense: apache-2.0\nbase_model: HuggingFaceTB/SmolVLM-500M-Instruct\ntags:\n- generated_from_trainer\nmodel-index:\n- name: SmolVLM-500M-Instruct-med-vqav1\n results: []\n---\n\n\n\n# SmolVLM-500M-Instruct-med-vqav1\n\nThis model is a fine-tuned version of [HuggingFaceTB/SmolVLM-500M-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM-500M-Instruct) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.3924\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## 
Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 8\n- optimizer: Use OptimizerNames.ADAMW_HF with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 50\n- num_epochs: 2\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss |\n|:-------------:|:------:|:----:|:---------------:|\n| 1.0375 | 0.4454 | 100 | 0.4305 |\n| 0.4064 | 0.8909 | 200 | 0.4024 |\n| 0.3378 | 1.3341 | 300 | 0.3941 |\n| 0.3348 | 1.7795 | 400 | 0.3924 |\n\n\n### Framework versions\n\n- Transformers 4.48.2\n- Pytorch 2.5.1+cu124\n- Datasets 3.3.0\n- Tokenizers 0.21.0\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "HuggingFaceTB/SmolVLM-500M-Instruct" ], "base_model": "hasan-farooq/SmolVLM-500M-Instruct-med-vqav1", "base_model_relation": "base" }, { "model_id": "aadhibest/smolvlm-500m-instruct-13-03-2025", "gated": "False", "card": "---\nbase_model: HuggingFaceTB/SmolVLM-500M-Instruct\nlibrary_name: transformers\nmodel_name: smolvlm-500m-instruct-13-03-2025\ntags:\n- generated_from_trainer\n- trl\n- sft\nlicence: license\n---\n\n# Model Card for smolvlm-500m-instruct-13-03-2025\n\nThis model is a fine-tuned version of [HuggingFaceTB/SmolVLM-500M-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM-500M-Instruct).\nIt has been trained using [TRL](https://github.com/huggingface/trl).\n\n## Quick start\n\n```python\nfrom transformers import pipeline\n\nquestion = \"If you had a time machine, but could only go to the past or the future once and never return, which would you choose 
and why?\"\ngenerator = pipeline(\"text-generation\", model=\"aadhibest/smolvlm-500m-instruct-13-03-2025\", device=\"cuda\")\noutput = generator([{\"role\": \"user\", \"content\": question}], max_new_tokens=128, return_full_text=False)[0]\nprint(output[\"generated_text\"])\n```\n\n## Training procedure\n\n \n\n\nThis model was trained with SFT.\n\n### Framework versions\n\n- TRL: 0.15.0\n- Transformers: 4.49.0\n- Pytorch: 2.6.0+cu118\n- Datasets: 3.3.1\n- Tokenizers: 0.21.0\n\n## Citations\n\n\n\nCite TRL as:\n \n```bibtex\n@misc{vonwerra2022trl,\n\ttitle = {{TRL: Transformer Reinforcement Learning}},\n\tauthor = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou\u00e9dec},\n\tyear = 2020,\n\tjournal = {GitHub repository},\n\tpublisher = {GitHub},\n\thowpublished = {\\url{https://github.com/huggingface/trl}}\n}\n```", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "HuggingFaceTB/SmolVLM-500M-Instruct" ], "base_model": "aadhibest/smolvlm-500m-instruct-13-03", "base_model_relation": "finetune" }, { "model_id": "chiaky21/SmolVLM-500M-Instruct-vqav2", "gated": "False", "card": "---\nlibrary_name: transformers\nlicense: apache-2.0\nbase_model: HuggingFaceTB/SmolVLM-500M-Instruct\ntags:\n- generated_from_trainer\nmodel-index:\n- name: SmolVLM-500M-Instruct-vqav2\n results: []\n---\n\n\n\n# SmolVLM-500M-Instruct-vqav2\n\nThis model is a fine-tuned version of [HuggingFaceTB/SmolVLM-500M-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM-500M-Instruct) on an unknown dataset.\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## 
Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 16\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Use OptimizerNames.PAGED_ADAMW_8BIT with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 25\n- num_epochs: 5\n\n### Framework versions\n\n- Transformers 4.49.0\n- Pytorch 2.4.1+cu124\n- Datasets 3.4.1\n- Tokenizers 0.21.1\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "HuggingFaceTB/SmolVLM-500M-Instruct" ], "base_model": "chiaky21/SmolVLM-500M-Instruct-vqav2", "base_model_relation": "base" }, { "model_id": "racineai/Flantier-SmolVLM-500M-dse", "gated": "False", "card": "---\nlicense: apache-2.0\ndatasets:\n- racineai/OGC_2_vdr-visRAG-colpali\nlanguage:\n- fr\n- en\n- de\n- es\n- it\nbase_model:\n- HuggingFaceTB/SmolVLM-500M-Instruct\n---\n\n# Flantier-SmolVLM-500M-dse\n\nA lightweight multimodal vision-language model specialized for technical document retrieval.\n\n## Overview\n\nFlantier-SmolVLM-500M-dse (Document Screenshot Embedding) is a 500M parameter vision-language model designed for efficient retrieval of technical documentation. 
It directly encodes document screenshots into embeddings, preserving all information including text, images, and layout without requiring separate content extraction.\n\n## Key Features\n\n- **Efficient Retrieval**: Generates document and query embeddings for semantic similarity search\n- **Multimodal Understanding**: Processes text, diagrams, charts, and tables in their original layout\n- **Lightweight Architecture**: Only 500M parameters, runs on consumer GPUs\n- **No Preprocessing Required**: Directly works with document screenshots\n\n## Installation\n\n```bash\npip install transformers accelerate pillow\n```\n\n## Usage Example\n\n```python\nfrom PIL import Image\nimport torch\nfrom transformers import AutoProcessor, AutoModelForVision2Seq\n\n# Load model and processor\nprocessor = AutoProcessor.from_pretrained(\"racineai/Flantier-SmolVLM-500M-dse\")\nmodel = AutoModelForVision2Seq.from_pretrained(\n \"racineai/Flantier-SmolVLM-500M-dse\",\n torch_dtype=torch.bfloat16,\n device_map=\"auto\"\n)\n\n# Load document image\ndocument_image = Image.open(\"technical_document.jpg\")\n\n# Process for document embedding\ndoc_messages = [\n {\n \"role\": \"user\",\n \"content\": [\n {\"type\": \"image\"},\n {\"type\": \"text\", \"text\": \"What is shown in this image?\"}\n ]\n },\n]\ndoc_prompt = processor.apply_chat_template(doc_messages, add_generation_prompt=True)\ndoc_inputs = processor(text=doc_prompt, images=[document_image], return_tensors=\"pt\").to(model.device)\n\n# Generate document embedding\nwith torch.no_grad():\n doc_outputs = model(**doc_inputs, output_hidden_states=True, return_dict=True)\n doc_embedding = doc_outputs.hidden_states[-1][:, -1] # Last token embedding\n doc_embedding = torch.nn.functional.normalize(doc_embedding, p=2, dim=-1)\n\n# Process query embedding\nquery = \"What are the specifications of this component?\"\nquery_messages = [\n {\n \"role\": \"user\",\n \"content\": [\n {\"type\": \"text\", \"text\": query}\n ]\n },\n]\nquery_prompt = 
processor.apply_chat_template(query_messages, add_generation_prompt=True)\nquery_inputs = processor(text=query_prompt, return_tensors=\"pt\").to(model.device)\n\n# Generate query embedding\nwith torch.no_grad():\n query_outputs = model(**query_inputs, output_hidden_states=True, return_dict=True)\n query_embedding = query_outputs.hidden_states[-1][:, -1] # Last token embedding\n query_embedding = torch.nn.functional.normalize(query_embedding, p=2, dim=-1)\n\n# Calculate similarity\nsimilarity = torch.nn.functional.cosine_similarity(query_embedding, doc_embedding)\nprint(f\"Similarity score: {similarity.item():.4f}\")\n```\n\n## Applications\n\n- **Technical Document Retrieval**: Find relevant documents based on technical queries\n- **Technical Support Systems**: Match user questions to relevant documentation\n- **Engineering Knowledge Management**: Index and search technical specifications, diagrams, and reports\n\n## Training Methodology\n\nThis model was trained using the Document Screenshot Embedding (DSE) approach, which treats document screenshots as a unified input format. 
This eliminates the need for content extraction preprocessing while preserving all visual and textual information in documents.\n\n## Citation\n\n```\n@misc{flantier-smolvlm-dse,\n author = {racine.ai},\n title = {Flantier-SmolVLM-500M-dse: A Lightweight Document Screenshot Embedding Model},\n year = {2025},\n publisher = {Hugging Face},\n url = {https://huggingface.co/racineai/Flantier-SmolVLM-500M-dse}\n}\n```\n\n## License\n\nThis model is released under the Apache 2.0 license.", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "HuggingFaceTB/SmolVLM-500M-Instruct" ], "base_model": "racineai/Flantier-SmolVLM-500M-dse", "base_model_relation": "base" }, { "model_id": "Soundappan123/smolvlm-dpo", "gated": "False", "card": "---\nbase_model: HuggingFaceTB/SmolVLM-500M-Instruct\nlibrary_name: transformers\nmodel_name: smolvlm-dpo\ntags:\n- generated_from_trainer\n- trl\n- dpo\nlicence: license\n---\n\n# Model Card for smolvlm-dpo\n\nThis model is a fine-tuned version of [HuggingFaceTB/SmolVLM-500M-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM-500M-Instruct).\nIt has been trained using [TRL](https://github.com/huggingface/trl).\n\n## Quick start\n\n```python\nfrom transformers import pipeline\n\nquestion = \"If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?\"\ngenerator = pipeline(\"text-generation\", model=\"Soundappan123/smolvlm-dpo\", device=\"cuda\")\noutput = generator([{\"role\": \"user\", \"content\": question}], max_new_tokens=128, return_full_text=False)[0]\nprint(output[\"generated_text\"])\n```\n\n## Training procedure\n\n \n\n\nThis model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward 
Model](https://huggingface.co/papers/2305.18290).\n\n### Framework versions\n\n- TRL: 0.17.0\n- Transformers: 4.51.3\n- Pytorch: 2.7.0\n- Datasets: 3.5.1\n- Tokenizers: 0.21.1\n\n## Citations\n\nCite DPO as:\n\n```bibtex\n@inproceedings{rafailov2023direct,\n title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},\n author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},\n year = 2023,\n booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},\n url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},\n editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},\n}\n```\n\nCite TRL as:\n \n```bibtex\n@misc{vonwerra2022trl,\n\ttitle = {{TRL: Transformer Reinforcement Learning}},\n\tauthor = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\\'e}dec},\n\tyear = 2020,\n\tjournal = {GitHub repository},\n\tpublisher = {GitHub},\n\thowpublished = {\\url{https://github.com/huggingface/trl}}\n}\n```", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "HuggingFaceTB/SmolVLM-500M-Instruct" ], "base_model": "Soundappan123/smolvlm-dpo", "base_model_relation": "base" }, { "model_id": "BIOMEDICA/BMC-smolvlm1-500M", "gated": "False", "card": "---\ndatasets:\n- BIOMEDICA/biomedica_webdataset_24M\nlanguage:\n- en\nbase_model:\n- HuggingFaceTB/SmolVLM-Instruct-500M\n---\n\n\n
\n\n\n\nBMC-SmolVLM1 is a family of lightweight biomedical vision-language models (ranging from 256M to 2.2B parameters) based on SmolVLM. These models are designed for efficient multimodal understanding in the biomedical domain. A GPU runtime is recommended when running the Colab tutorial below.\n\n\nColab Tutorial: [![Colab Tutorial](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1Bg_pdLsXfHVX0U8AESL7TaiBQLDy2G7j?usp=sharing)\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "HuggingFaceTB/SmolVLM-500M-Instruct" ], "base_model": "BIOMEDICA/BMC-smolvlm1", "base_model_relation": "finetune" }, { "model_id": "Pantelismak/smolvlm_cxr", "gated": "False", "card": "---\nbase_model: HuggingFaceTB/SmolVLM-500M-Instruct\nlibrary_name: transformers\nmodel_name: smolvlm_cxr\ntags:\n- generated_from_trainer\n- trl\n- sft\nlicence: license\n---\n\n# Model Card for smolvlm_cxr\n\nThis model is a fine-tuned version of [HuggingFaceTB/SmolVLM-500M-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM-500M-Instruct).\nIt has been trained using [TRL](https://github.com/huggingface/trl).\n\n## Quick start\n\n```python\nfrom transformers import pipeline\n\nquestion = \"If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?\"\ngenerator = pipeline(\"text-generation\", model=\"Pantelismak/smolvlm_cxr\", device=\"cuda\")\noutput = generator([{\"role\": \"user\", \"content\": question}], max_new_tokens=128, return_full_text=False)[0]\nprint(output[\"generated_text\"])\n```\n\n## Training procedure\n\n \n\n\nThis model was trained with SFT.\n\n### Framework versions\n\n- TRL: 0.17.0\n- Transformers: 4.51.3\n- Pytorch: 2.6.0+cu124\n- Datasets: 3.6.0\n- Tokenizers: 
0.21.1\n\n## Citations\n\n\n\nCite TRL as:\n \n```bibtex\n@misc{vonwerra2022trl,\n\ttitle = {{TRL: Transformer Reinforcement Learning}},\n\tauthor = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},\n\tyear = 2020,\n\tjournal = {GitHub repository},\n\tpublisher = {GitHub},\n\thowpublished = {\url{https://github.com/huggingface/trl}}\n}\n```", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "HuggingFaceTB/SmolVLM-500M-Instruct" ], "base_model": "Pantelismak/smolvlm_cxr", "base_model_relation": "base" }, { "model_id": "JoseferEins/SmolVLM-500M-Instruct-fer0", "gated": "unknown", "card": "---\nlibrary_name: transformers\nlicense: apache-2.0\nbase_model: HuggingFaceTB/SmolVLM-500M-Instruct\ntags:\n- fine-tuned\n- vision-language\n- emotion-recognition\nmodel-index:\n- name: SmolVLM-500M-Instruct-fer0\n results: []\n---\n\n# SmolVLM-500M-Instruct-fer0\n\nFine-tuned version of [SmolVLM-500M-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM-500M-Instruct) on a subset of AffectNet (emotion recognition), with text labels transcribed via GPT-4o-mini.\n\n\nThis is preliminary; we'll update soon with a proper evaluation and more info.\n\n## Example\n\n**Image input** \n![image](https://cdn-uploads.huggingface.co/production/uploads/6433b05aea46c00990443927/1I9PtODn5Iv-ThvTAugOW.png)\n\n**Predictions:** \n- *Base model*: A woman with blonde hair is looking to the side with a hand on her chin.\n- *This model*: The expression conveys a sense of contemplation or concern. The furrowed brow and slightly parted lips suggest a deep thought or worry. 
The hand on the chin indicates a hint of introspection, hinting at a possible emotional state of unease or contemplation.\n\n\n## Training Summary\n\n- **Loss values**: \n\n| Step | Training Loss |\n|-------|----------------|\n| 25 | 2.80 |\n| 50 | 0.82 |\n| 75 | 0.48 |\n| 100 | 0.43 |\n\n- **Hyperparameters**: \n - Learning rate: 1e-4 \n - Batch size: 4 (grad. accum. \u00d74) \n - Epochs: 1 \n - Optimizer: 8-bit AdamW \n - Scheduler: linear (warmup 50 steps) \n - Seed: 42\n\n## Frameworks\n\n- Transformers 4.50.0 \n- PyTorch 2.3.1+cu121 \n- Datasets 3.6.0 \n- Tokenizers 0.21.1\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "HuggingFaceTB/SmolVLM-500M-Instruct" ], "base_model": null, "base_model_relation": null }, { "model_id": "VishalD1234/SmolVLM-500M-Instruct-vqav2", "gated": "False", "card": "---\nlibrary_name: peft\nlicense: apache-2.0\nbase_model: HuggingFaceTB/SmolVLM-500M-Instruct\ntags:\n- generated_from_trainer\nmodel-index:\n- name: SmolVLM-500M-Instruct-vqav2\n results: []\n---\n\n\n\n# SmolVLM-500M-Instruct-vqav2\n\nThis model is a fine-tuned version of [HuggingFaceTB/SmolVLM-500M-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM-500M-Instruct) on an unknown dataset.\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 3\n- eval_batch_size: 3\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 12\n- optimizer: Use OptimizerNames.PAGED_ADAMW_8BIT with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments\n- 
lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 50\n- num_epochs: 2\n\n### Training results\n\n\n\n### Framework versions\n\n- PEFT 0.14.0\n- Transformers 4.48.1\n- Pytorch 2.5.1+cu121\n- Datasets 3.2.0\n- Tokenizers 0.21.0", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "HuggingFaceTB/SmolVLM-500M-Instruct" ], "base_model": "VishalD1234/SmolVLM-500M-Instruct-vqav2", "base_model_relation": "base" }, { "model_id": "sasikaran04/SmolVLM-500M-Instruct-vqav2", "gated": "False", "card": "---\nlibrary_name: peft\nlicense: apache-2.0\nbase_model: HuggingFaceTB/SmolVLM-500M-Instruct\ntags:\n- generated_from_trainer\nmodel-index:\n- name: SmolVLM-500M-Instruct-vqav2\n results: []\n---\n\n\n\n# SmolVLM-500M-Instruct-vqav2\n\nThis model is a fine-tuned version of [HuggingFaceTB/SmolVLM-500M-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM-500M-Instruct) on an unknown dataset.\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 3\n- eval_batch_size: 3\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 12\n- optimizer: Use OptimizerNames.PAGED_ADAMW_8BIT with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 50\n- num_epochs: 5\n\n### Training results\n\n\n\n### Framework versions\n\n- PEFT 0.14.0\n- Transformers 4.48.1\n- Pytorch 2.5.1+cu121\n- Datasets 3.2.0\n- Tokenizers 0.21.0", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": 
[], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "HuggingFaceTB/SmolVLM-500M-Instruct" ], "base_model": "sasikaran04/SmolVLM-500M-Instruct-vqav2", "base_model_relation": "base" }, { "model_id": "Hirai-Labs/FT-SmolVLM-500M-Instruct-ALPR", "gated": "False", "card": "---\nlibrary_name: peft\nlicense: apache-2.0\nbase_model: HuggingFaceTB/SmolVLM-500M-Instruct\ntags:\n- generated_from_trainer\nmodel-index:\n- name: FT-SmolVLM-500M-Instruct-ALPR\n results: []\n---\n\n\n\n# FT-SmolVLM-500M-Instruct-ALPR\n\nThis model is a fine-tuned version of [HuggingFaceTB/SmolVLM-500M-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM-500M-Instruct) on an unknown dataset.\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 1\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 4\n- optimizer: Use paged_adamw_8bit with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 50\n- num_epochs: 10\n\n### Training results\n\n\n\n### Framework versions\n\n- PEFT 0.14.0\n- Transformers 4.47.0\n- Pytorch 2.5.1+cu121\n- Datasets 3.2.0\n- Tokenizers 0.21.0", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "HuggingFaceTB/SmolVLM-500M-Instruct" ], "base_model": "Hirai-Labs/FT-SmolVLM-500M-Instruct-ALPR", "base_model_relation": "base" }, { "model_id": 
"revitotan/FT-SmolVLM-500M-Instruct-Helmet", "gated": "False", "card": "---\nlibrary_name: peft\nlicense: apache-2.0\nbase_model: HuggingFaceTB/SmolVLM-500M-Instruct\ntags:\n- generated_from_trainer\nmodel-index:\n- name: FT-SmolVLM-500M-Instruct-Helmet\n results: []\n---\n\n\n\n[Visualize in Weights & Biases](https://wandb.ai/revitopradipa-muhammadiyah-university-of-surakarta/HelmetVLM/runs/lg1n8bj5)\n# FT-SmolVLM-500M-Instruct-Helmet\n\nThis model is a fine-tuned version of [HuggingFaceTB/SmolVLM-500M-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM-500M-Instruct) on an unknown dataset.\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 1\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 4\n- optimizer: Use paged_adamw_8bit with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 50\n- num_epochs: 10\n\n### Framework versions\n\n- PEFT 0.14.0\n- Transformers 4.47.0\n- Pytorch 2.5.1+cu121\n- Datasets 3.3.1\n- Tokenizers 0.21.0", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "HuggingFaceTB/SmolVLM-500M-Instruct" ], "base_model": "revitotan/FT-SmolVLM-500M-Instruct-Helmet", "base_model_relation": "base" }, { "model_id": "dkhanh/SmolVLM-500M-Instruct-earths", "gated": "False", "card": "---\nlibrary_name: peft\nlicense: apache-2.0\nbase_model: HuggingFaceTB/SmolVLM-500M-Instruct\ntags:\n- generated_from_trainer\nmodel-index:\n- name: 
SmolVLM-500M-Instruct-earths\n results: []\n---\n\n\n\n# SmolVLM-500M-Instruct-earths\n\nThis model is a fine-tuned version of [HuggingFaceTB/SmolVLM-500M-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM-500M-Instruct) on an unknown dataset.\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 8\n- optimizer: Use paged_adamw_8bit with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments\n- lr_scheduler_type: linear\n- num_epochs: 3\n\n### Training results\n\n\n\n### Framework versions\n\n- PEFT 0.15.1\n- Transformers 4.51.3\n- Pytorch 2.6.0+cu124\n- Datasets 3.5.0\n- Tokenizers 0.21.1", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "HuggingFaceTB/SmolVLM-500M-Instruct" ], "base_model": "dkhanh/SmolVLM-500M-Instruct-earths", "base_model_relation": "base" }, { "model_id": "dkhanh/SmolVLM-500M-Instruct-earth-v0", "gated": "False", "card": "---\nlibrary_name: peft\nlicense: apache-2.0\nbase_model: HuggingFaceTB/SmolVLM-500M-Instruct\ntags:\n- generated_from_trainer\nmodel-index:\n- name: SmolVLM-500M-Instruct-earth-v0\n results: []\n---\n\n\n\n# SmolVLM-500M-Instruct-earth-v0\n\nThis model is a fine-tuned version of [HuggingFaceTB/SmolVLM-500M-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM-500M-Instruct) on an unknown dataset.\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## 
Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 4e-05\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 8\n- optimizer: Use paged_adamw_8bit with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments\n- lr_scheduler_type: linear\n- num_epochs: 4\n\n### Training results\n\n\n\n### Framework versions\n\n- PEFT 0.13.2\n- Transformers 4.51.3\n- Pytorch 2.6.0+cu124\n- Datasets 3.5.0\n- Tokenizers 0.21.1", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "HuggingFaceTB/SmolVLM-500M-Instruct" ], "base_model": "dkhanh/SmolVLM-500M-Instruct-earth-v0", "base_model_relation": "base" }, { "model_id": "dkhanh/SmolVLM-500M-Instruct-earth-v1", "gated": "False", "card": "---\nlibrary_name: peft\nlicense: apache-2.0\nbase_model: HuggingFaceTB/SmolVLM-500M-Instruct\ntags:\n- generated_from_trainer\nmodel-index:\n- name: SmolVLM-500M-Instruct-earth-v1\n results: []\n---\n\n\n\n# SmolVLM-500M-Instruct-earth-v1\n\nThis model is a fine-tuned version of [HuggingFaceTB/SmolVLM-500M-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM-500M-Instruct) on an unknown dataset.\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 8\n- total_train_batch_size: 16\n- optimizer: Use paged_adamw_8bit with 
betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 5\n\n### Training results\n\n\n\n### Framework versions\n\n- PEFT 0.15.2\n- Transformers 4.51.3\n- Pytorch 2.7.0+cu126\n- Datasets 3.5.0\n- Tokenizers 0.21.1", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "HuggingFaceTB/SmolVLM-500M-Instruct" ], "base_model": "dkhanh/SmolVLM-500M-Instruct-earth-v1", "base_model_relation": "base" }, { "model_id": "dkhanh/SmolVLM-500M-Instruct-earths-v1", "gated": "False", "card": "---\nlibrary_name: peft\nlicense: apache-2.0\nbase_model: HuggingFaceTB/SmolVLM-500M-Instruct\ntags:\n- generated_from_trainer\nmodel-index:\n- name: SmolVLM-500M-Instruct-earths-v1\n results: []\n---\n\n\n\n# SmolVLM-500M-Instruct-earths-v1\n\nThis model is a fine-tuned version of [HuggingFaceTB/SmolVLM-500M-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM-500M-Instruct) on an unknown dataset.\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 16\n- optimizer: Use paged_adamw_8bit with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n\n### Training results\n\n\n\n### Framework versions\n\n- PEFT 0.15.2\n- Transformers 4.51.3\n- Pytorch 2.7.0+cu126\n- Datasets 3.5.0\n- Tokenizers 0.21.1", 
"metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "HuggingFaceTB/SmolVLM-500M-Instruct" ], "base_model": "dkhanh/SmolVLM-500M-Instruct-earths-v1", "base_model_relation": "base" }, { "model_id": "samlucas/smolvlm_500m-parking_occupancy-PKLot-instruct-without-context-without-expert", "gated": "False", "card": "---\nbase_model: HuggingFaceTB/SmolVLM-500M-Instruct\nlibrary_name: peft\n---\n\n# Model Card for Model ID\n\n\n\n\n\n## Model Details\n\n### Model Description\n\n\n\n\n\n- **Developed by:** [More Information Needed]\n- **Funded by [optional]:** [More Information Needed]\n- **Shared by [optional]:** [More Information Needed]\n- **Model type:** [More Information Needed]\n- **Language(s) (NLP):** [More Information Needed]\n- **License:** [More Information Needed]\n- **Finetuned from model [optional]:** [More Information Needed]\n\n### Model Sources [optional]\n\n\n\n- **Repository:** [More Information Needed]\n- **Paper [optional]:** [More Information Needed]\n- **Demo [optional]:** [More Information Needed]\n\n## Uses\n\n\n\n### Direct Use\n\n\n\n[More Information Needed]\n\n### Downstream Use [optional]\n\n\n\n[More Information Needed]\n\n### Out-of-Scope Use\n\n\n\n[More Information Needed]\n\n## Bias, Risks, and Limitations\n\n\n\n[More Information Needed]\n\n### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations.\n\n## How to Get Started with the Model\n\nUse the code below to get started with the model.\n\n[More Information Needed]\n\n## Training Details\n\n### Training Data\n\n\n\n[More Information Needed]\n\n### Training Procedure\n\n\n\n#### Preprocessing [optional]\n\n[More Information Needed]\n\n\n#### Training Hyperparameters\n\n- **Training regime:** [More Information Needed] \n\n#### Speeds, Sizes, Times [optional]\n\n\n\n[More Information Needed]\n\n## Evaluation\n\n\n\n### Testing Data, Factors & Metrics\n\n#### Testing Data\n\n\n\n[More Information Needed]\n\n#### Factors\n\n\n\n[More Information Needed]\n\n#### Metrics\n\n\n\n[More Information Needed]\n\n### Results\n\n[More Information Needed]\n\n#### Summary\n\n\n\n## Model Examination [optional]\n\n\n\n[More Information Needed]\n\n## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700).\n\n- **Hardware Type:** [More Information Needed]\n- **Hours used:** [More Information Needed]\n- **Cloud Provider:** [More Information Needed]\n- **Compute Region:** [More Information Needed]\n- **Carbon Emitted:** [More Information Needed]\n\n## Technical Specifications [optional]\n\n### Model Architecture and Objective\n\n[More Information Needed]\n\n### Compute Infrastructure\n\n[More Information Needed]\n\n#### Hardware\n\n[More Information Needed]\n\n#### Software\n\n[More Information Needed]\n\n## Citation [optional]\n\n\n\n**BibTeX:**\n\n[More Information Needed]\n\n**APA:**\n\n[More Information Needed]\n\n## Glossary [optional]\n\n\n\n[More Information Needed]\n\n## More Information [optional]\n\n[More Information Needed]\n\n## Model Card Authors [optional]\n\n[More Information Needed]\n\n## Model Card Contact\n\n[More Information Needed]\n### Framework versions\n\n- PEFT 0.15.2", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "HuggingFaceTB/SmolVLM-500M-Instruct" ], "base_model": "samlucas/smolvlm_500m-parking_occupancy-PKLot-instruct-without-context-without-expert", "base_model_relation": "base" }, { "model_id": "samlucas/smolvlm_500m-parking_occupancy-PKLot-instruct-with-context-without-expert", "gated": "False", "card": "---\nbase_model: HuggingFaceTB/SmolVLM-500M-Instruct\nlibrary_name: peft\n---\n\n# Model Card for Model ID\n\n\n\n\n\n## Model Details\n\n### Model Description\n\n\n\n\n\n- **Developed by:** [More Information Needed]\n- **Funded by [optional]:** [More Information Needed]\n- **Shared by [optional]:** [More Information Needed]\n- **Model type:** [More Information Needed]\n- **Language(s) (NLP):** [More Information Needed]\n- **License:** [More Information Needed]\n- **Finetuned from model 
[optional]:** [More Information Needed]\n\n### Model Sources [optional]\n\n\n\n- **Repository:** [More Information Needed]\n- **Paper [optional]:** [More Information Needed]\n- **Demo [optional]:** [More Information Needed]\n\n## Uses\n\n\n\n### Direct Use\n\n\n\n[More Information Needed]\n\n### Downstream Use [optional]\n\n\n\n[More Information Needed]\n\n### Out-of-Scope Use\n\n\n\n[More Information Needed]\n\n## Bias, Risks, and Limitations\n\n\n\n[More Information Needed]\n\n### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.\n\n## How to Get Started with the Model\n\nUse the code below to get started with the model.\n\n[More Information Needed]\n\n## Training Details\n\n### Training Data\n\n\n\n[More Information Needed]\n\n### Training Procedure\n\n\n\n#### Preprocessing [optional]\n\n[More Information Needed]\n\n\n#### Training Hyperparameters\n\n- **Training regime:** [More Information Needed] \n\n#### Speeds, Sizes, Times [optional]\n\n\n\n[More Information Needed]\n\n## Evaluation\n\n\n\n### Testing Data, Factors & Metrics\n\n#### Testing Data\n\n\n\n[More Information Needed]\n\n#### Factors\n\n\n\n[More Information Needed]\n\n#### Metrics\n\n\n\n[More Information Needed]\n\n### Results\n\n[More Information Needed]\n\n#### Summary\n\n\n\n## Model Examination [optional]\n\n\n\n[More Information Needed]\n\n## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700).\n\n- **Hardware Type:** [More Information Needed]\n- **Hours used:** [More Information Needed]\n- **Cloud Provider:** [More Information Needed]\n- **Compute Region:** [More Information Needed]\n- **Carbon Emitted:** [More Information Needed]\n\n## Technical Specifications [optional]\n\n### Model Architecture and Objective\n\n[More Information Needed]\n\n### Compute Infrastructure\n\n[More Information Needed]\n\n#### Hardware\n\n[More Information Needed]\n\n#### Software\n\n[More Information Needed]\n\n## Citation [optional]\n\n\n\n**BibTeX:**\n\n[More Information Needed]\n\n**APA:**\n\n[More Information Needed]\n\n## Glossary [optional]\n\n\n\n[More Information Needed]\n\n## More Information [optional]\n\n[More Information Needed]\n\n## Model Card Authors [optional]\n\n[More Information Needed]\n\n## Model Card Contact\n\n[More Information Needed]\n### Framework versions\n\n- PEFT 0.15.2", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "HuggingFaceTB/SmolVLM-500M-Instruct" ], "base_model": "samlucas/smolvlm_500m-parking_occupancy-PKLot-instruct-with-context-without-expert", "base_model_relation": "base" }, { "model_id": "samlucas/smolvlm_500m-parking_occupancy-PKLot-instruct-with-context-with-expert", "gated": "False", "card": "---\nbase_model: HuggingFaceTB/SmolVLM-500M-Instruct\nlibrary_name: peft\n---\n\n# Model Card for Model ID\n\n\n\n\n\n## Model Details\n\n### Model Description\n\n\n\n\n\n- **Developed by:** [More Information Needed]\n- **Funded by [optional]:** [More Information Needed]\n- **Shared by [optional]:** [More Information Needed]\n- **Model type:** [More Information Needed]\n- **Language(s) (NLP):** [More Information Needed]\n- **License:** [More Information Needed]\n- **Finetuned from model 
[optional]:** [More Information Needed]\n\n### Model Sources [optional]\n\n\n\n- **Repository:** [More Information Needed]\n- **Paper [optional]:** [More Information Needed]\n- **Demo [optional]:** [More Information Needed]\n\n## Uses\n\n\n\n### Direct Use\n\n\n\n[More Information Needed]\n\n### Downstream Use [optional]\n\n\n\n[More Information Needed]\n\n### Out-of-Scope Use\n\n\n\n[More Information Needed]\n\n## Bias, Risks, and Limitations\n\n\n\n[More Information Needed]\n\n### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.\n\n## How to Get Started with the Model\n\nUse the code below to get started with the model.\n\n[More Information Needed]\n\n## Training Details\n\n### Training Data\n\n\n\n[More Information Needed]\n\n### Training Procedure\n\n\n\n#### Preprocessing [optional]\n\n[More Information Needed]\n\n\n#### Training Hyperparameters\n\n- **Training regime:** [More Information Needed] \n\n#### Speeds, Sizes, Times [optional]\n\n\n\n[More Information Needed]\n\n## Evaluation\n\n\n\n### Testing Data, Factors & Metrics\n\n#### Testing Data\n\n\n\n[More Information Needed]\n\n#### Factors\n\n\n\n[More Information Needed]\n\n#### Metrics\n\n\n\n[More Information Needed]\n\n### Results\n\n[More Information Needed]\n\n#### Summary\n\n\n\n## Model Examination [optional]\n\n\n\n[More Information Needed]\n\n## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700).\n\n- **Hardware Type:** [More Information Needed]\n- **Hours used:** [More Information Needed]\n- **Cloud Provider:** [More Information Needed]\n- **Compute Region:** [More Information Needed]\n- **Carbon Emitted:** [More Information Needed]\n\n## Technical Specifications [optional]\n\n### Model Architecture and Objective\n\n[More Information Needed]\n\n### Compute Infrastructure\n\n[More Information Needed]\n\n#### Hardware\n\n[More Information Needed]\n\n#### Software\n\n[More Information Needed]\n\n## Citation [optional]\n\n\n\n**BibTeX:**\n\n[More Information Needed]\n\n**APA:**\n\n[More Information Needed]\n\n## Glossary [optional]\n\n\n\n[More Information Needed]\n\n## More Information [optional]\n\n[More Information Needed]\n\n## Model Card Authors [optional]\n\n[More Information Needed]\n\n## Model Card Contact\n\n[More Information Needed]\n### Framework versions\n\n- PEFT 0.15.2", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "HuggingFaceTB/SmolVLM-500M-Instruct" ], "base_model": "samlucas/smolvlm_500m-parking_occupancy-PKLot-instruct-with-context-with-expert", "base_model_relation": "base" }, { "model_id": "samlucas/smolvlm_500m-parking_occupancy-PKLot-instruct-without-context-with-expert", "gated": "False", "card": "---\nbase_model: HuggingFaceTB/SmolVLM-500M-Instruct\nlibrary_name: peft\n---\n\n# Model Card for Model ID\n\n\n\n\n\n## Model Details\n\n### Model Description\n\n\n\n\n\n- **Developed by:** [More Information Needed]\n- **Funded by [optional]:** [More Information Needed]\n- **Shared by [optional]:** [More Information Needed]\n- **Model type:** [More Information Needed]\n- **Language(s) (NLP):** [More Information Needed]\n- **License:** [More Information Needed]\n- **Finetuned from model 
[optional]:** [More Information Needed]\n\n### Model Sources [optional]\n\n\n\n- **Repository:** [More Information Needed]\n- **Paper [optional]:** [More Information Needed]\n- **Demo [optional]:** [More Information Needed]\n\n## Uses\n\n\n\n### Direct Use\n\n\n\n[More Information Needed]\n\n### Downstream Use [optional]\n\n\n\n[More Information Needed]\n\n### Out-of-Scope Use\n\n\n\n[More Information Needed]\n\n## Bias, Risks, and Limitations\n\n\n\n[More Information Needed]\n\n### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.\n\n## How to Get Started with the Model\n\nUse the code below to get started with the model.\n\n[More Information Needed]\n\n## Training Details\n\n### Training Data\n\n\n\n[More Information Needed]\n\n### Training Procedure\n\n\n\n#### Preprocessing [optional]\n\n[More Information Needed]\n\n\n#### Training Hyperparameters\n\n- **Training regime:** [More Information Needed] \n\n#### Speeds, Sizes, Times [optional]\n\n\n\n[More Information Needed]\n\n## Evaluation\n\n\n\n### Testing Data, Factors & Metrics\n\n#### Testing Data\n\n\n\n[More Information Needed]\n\n#### Factors\n\n\n\n[More Information Needed]\n\n#### Metrics\n\n\n\n[More Information Needed]\n\n### Results\n\n[More Information Needed]\n\n#### Summary\n\n\n\n## Model Examination [optional]\n\n\n\n[More Information Needed]\n\n## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700).\n\n- **Hardware Type:** [More Information Needed]\n- **Hours used:** [More Information Needed]\n- **Cloud Provider:** [More Information Needed]\n- **Compute Region:** [More Information Needed]\n- **Carbon Emitted:** [More Information Needed]\n\n## Technical Specifications [optional]\n\n### Model Architecture and Objective\n\n[More Information Needed]\n\n### Compute Infrastructure\n\n[More Information Needed]\n\n#### Hardware\n\n[More Information Needed]\n\n#### Software\n\n[More Information Needed]\n\n## Citation [optional]\n\n\n\n**BibTeX:**\n\n[More Information Needed]\n\n**APA:**\n\n[More Information Needed]\n\n## Glossary [optional]\n\n\n\n[More Information Needed]\n\n## More Information [optional]\n\n[More Information Needed]\n\n## Model Card Authors [optional]\n\n[More Information Needed]\n\n## Model Card Contact\n\n[More Information Needed]\n### Framework versions\n\n- PEFT 0.15.2", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "HuggingFaceTB/SmolVLM-500M-Instruct" ], "base_model": "samlucas/smolvlm_500m-parking_occupancy-PKLot-instruct-without-context-with-expert", "base_model_relation": "base" }, { "model_id": "samlucas/smolvlm_500m-parking_occupancy-CNRPark-instruct-with-context-with-expert", "gated": "False", "card": "---\nbase_model: HuggingFaceTB/SmolVLM-500M-Instruct\nlibrary_name: peft\n---\n\n# Model Card for Model ID\n\n\n\n\n\n## Model Details\n\n### Model Description\n\n\n\n\n\n- **Developed by:** [More Information Needed]\n- **Funded by [optional]:** [More Information Needed]\n- **Shared by [optional]:** [More Information Needed]\n- **Model type:** [More Information Needed]\n- **Language(s) (NLP):** [More Information Needed]\n- **License:** [More Information Needed]\n- **Finetuned from model 
[optional]:** [More Information Needed]\n\n### Model Sources [optional]\n\n\n\n- **Repository:** [More Information Needed]\n- **Paper [optional]:** [More Information Needed]\n- **Demo [optional]:** [More Information Needed]\n\n## Uses\n\n\n\n### Direct Use\n\n\n\n[More Information Needed]\n\n### Downstream Use [optional]\n\n\n\n[More Information Needed]\n\n### Out-of-Scope Use\n\n\n\n[More Information Needed]\n\n## Bias, Risks, and Limitations\n\n\n\n[More Information Needed]\n\n### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.\n\n## How to Get Started with the Model\n\nUse the code below to get started with the model.\n\n[More Information Needed]\n\n## Training Details\n\n### Training Data\n\n\n\n[More Information Needed]\n\n### Training Procedure\n\n\n\n#### Preprocessing [optional]\n\n[More Information Needed]\n\n\n#### Training Hyperparameters\n\n- **Training regime:** [More Information Needed] \n\n#### Speeds, Sizes, Times [optional]\n\n\n\n[More Information Needed]\n\n## Evaluation\n\n\n\n### Testing Data, Factors & Metrics\n\n#### Testing Data\n\n\n\n[More Information Needed]\n\n#### Factors\n\n\n\n[More Information Needed]\n\n#### Metrics\n\n\n\n[More Information Needed]\n\n### Results\n\n[More Information Needed]\n\n#### Summary\n\n\n\n## Model Examination [optional]\n\n\n\n[More Information Needed]\n\n## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700).\n\n- **Hardware Type:** [More Information Needed]\n- **Hours used:** [More Information Needed]\n- **Cloud Provider:** [More Information Needed]\n- **Compute Region:** [More Information Needed]\n- **Carbon Emitted:** [More Information Needed]\n\n## Technical Specifications [optional]\n\n### Model Architecture and Objective\n\n[More Information Needed]\n\n### Compute Infrastructure\n\n[More Information Needed]\n\n#### Hardware\n\n[More Information Needed]\n\n#### Software\n\n[More Information Needed]\n\n## Citation [optional]\n\n\n\n**BibTeX:**\n\n[More Information Needed]\n\n**APA:**\n\n[More Information Needed]\n\n## Glossary [optional]\n\n\n\n[More Information Needed]\n\n## More Information [optional]\n\n[More Information Needed]\n\n## Model Card Authors [optional]\n\n[More Information Needed]\n\n## Model Card Contact\n\n[More Information Needed]\n### Framework versions\n\n- PEFT 0.15.2", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "HuggingFaceTB/SmolVLM-500M-Instruct" ], "base_model": "samlucas/smolvlm_500m-parking_occupancy-CNRPark-instruct-with-context-with-expert", "base_model_relation": "base" }, { "model_id": "samlucas/smolvlm_500m-parking_occupancy-CNRPark-instruct-with-context-without-expert", "gated": "False", "card": "---\nbase_model: HuggingFaceTB/SmolVLM-500M-Instruct\nlibrary_name: peft\n---\n\n# Model Card for Model ID\n\n\n\n\n\n## Model Details\n\n### Model Description\n\n\n\n\n\n- **Developed by:** [More Information Needed]\n- **Funded by [optional]:** [More Information Needed]\n- **Shared by [optional]:** [More Information Needed]\n- **Model type:** [More Information Needed]\n- **Language(s) (NLP):** [More Information Needed]\n- **License:** [More Information Needed]\n- **Finetuned from model 
[optional]:** [More Information Needed]\n\n### Model Sources [optional]\n\n\n\n- **Repository:** [More Information Needed]\n- **Paper [optional]:** [More Information Needed]\n- **Demo [optional]:** [More Information Needed]\n\n## Uses\n\n\n\n### Direct Use\n\n\n\n[More Information Needed]\n\n### Downstream Use [optional]\n\n\n\n[More Information Needed]\n\n### Out-of-Scope Use\n\n\n\n[More Information Needed]\n\n## Bias, Risks, and Limitations\n\n\n\n[More Information Needed]\n\n### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.\n\n## How to Get Started with the Model\n\nUse the code below to get started with the model.\n\n[More Information Needed]\n\n## Training Details\n\n### Training Data\n\n\n\n[More Information Needed]\n\n### Training Procedure\n\n\n\n#### Preprocessing [optional]\n\n[More Information Needed]\n\n\n#### Training Hyperparameters\n\n- **Training regime:** [More Information Needed] \n\n#### Speeds, Sizes, Times [optional]\n\n\n\n[More Information Needed]\n\n## Evaluation\n\n\n\n### Testing Data, Factors & Metrics\n\n#### Testing Data\n\n\n\n[More Information Needed]\n\n#### Factors\n\n\n\n[More Information Needed]\n\n#### Metrics\n\n\n\n[More Information Needed]\n\n### Results\n\n[More Information Needed]\n\n#### Summary\n\n\n\n## Model Examination [optional]\n\n\n\n[More Information Needed]\n\n## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700).\n\n- **Hardware Type:** [More Information Needed]\n- **Hours used:** [More Information Needed]\n- **Cloud Provider:** [More Information Needed]\n- **Compute Region:** [More Information Needed]\n- **Carbon Emitted:** [More Information Needed]\n\n## Technical Specifications [optional]\n\n### Model Architecture and Objective\n\n[More Information Needed]\n\n### Compute Infrastructure\n\n[More Information Needed]\n\n#### Hardware\n\n[More Information Needed]\n\n#### Software\n\n[More Information Needed]\n\n## Citation [optional]\n\n\n\n**BibTeX:**\n\n[More Information Needed]\n\n**APA:**\n\n[More Information Needed]\n\n## Glossary [optional]\n\n\n\n[More Information Needed]\n\n## More Information [optional]\n\n[More Information Needed]\n\n## Model Card Authors [optional]\n\n[More Information Needed]\n\n## Model Card Contact\n\n[More Information Needed]\n### Framework versions\n\n- PEFT 0.15.2", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "HuggingFaceTB/SmolVLM-500M-Instruct" ], "base_model": "samlucas/smolvlm_500m-parking_occupancy-CNRPark-instruct-with-context-without-expert", "base_model_relation": "base" }, { "model_id": "samlucas/smolvlm_500m-parking_occupancy-CNRPark-instruct-without-context-with-expert", "gated": "False", "card": "---\nbase_model: HuggingFaceTB/SmolVLM-500M-Instruct\nlibrary_name: peft\n---\n\n# Model Card for Model ID\n\n\n\n\n\n## Model Details\n\n### Model Description\n\n\n\n\n\n- **Developed by:** [More Information Needed]\n- **Funded by [optional]:** [More Information Needed]\n- **Shared by [optional]:** [More Information Needed]\n- **Model type:** [More Information Needed]\n- **Language(s) (NLP):** [More Information Needed]\n- **License:** [More Information Needed]\n- **Finetuned from model 
[optional]:** [More Information Needed]\n\n### Model Sources [optional]\n\n\n\n- **Repository:** [More Information Needed]\n- **Paper [optional]:** [More Information Needed]\n- **Demo [optional]:** [More Information Needed]\n\n## Uses\n\n\n\n### Direct Use\n\n\n\n[More Information Needed]\n\n### Downstream Use [optional]\n\n\n\n[More Information Needed]\n\n### Out-of-Scope Use\n\n\n\n[More Information Needed]\n\n## Bias, Risks, and Limitations\n\n\n\n[More Information Needed]\n\n### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.\n\n## How to Get Started with the Model\n\nUse the code below to get started with the model.\n\n[More Information Needed]\n\n## Training Details\n\n### Training Data\n\n\n\n[More Information Needed]\n\n### Training Procedure\n\n\n\n#### Preprocessing [optional]\n\n[More Information Needed]\n\n\n#### Training Hyperparameters\n\n- **Training regime:** [More Information Needed] \n\n#### Speeds, Sizes, Times [optional]\n\n\n\n[More Information Needed]\n\n## Evaluation\n\n\n\n### Testing Data, Factors & Metrics\n\n#### Testing Data\n\n\n\n[More Information Needed]\n\n#### Factors\n\n\n\n[More Information Needed]\n\n#### Metrics\n\n\n\n[More Information Needed]\n\n### Results\n\n[More Information Needed]\n\n#### Summary\n\n\n\n## Model Examination [optional]\n\n\n\n[More Information Needed]\n\n## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700).\n\n- **Hardware Type:** [More Information Needed]\n- **Hours used:** [More Information Needed]\n- **Cloud Provider:** [More Information Needed]\n- **Compute Region:** [More Information Needed]\n- **Carbon Emitted:** [More Information Needed]\n\n## Technical Specifications [optional]\n\n### Model Architecture and Objective\n\n[More Information Needed]\n\n### Compute Infrastructure\n\n[More Information Needed]\n\n#### Hardware\n\n[More Information Needed]\n\n#### Software\n\n[More Information Needed]\n\n## Citation [optional]\n\n\n\n**BibTeX:**\n\n[More Information Needed]\n\n**APA:**\n\n[More Information Needed]\n\n## Glossary [optional]\n\n\n\n[More Information Needed]\n\n## More Information [optional]\n\n[More Information Needed]\n\n## Model Card Authors [optional]\n\n[More Information Needed]\n\n## Model Card Contact\n\n[More Information Needed]\n### Framework versions\n\n- PEFT 0.15.2", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "HuggingFaceTB/SmolVLM-500M-Instruct" ], "base_model": "samlucas/smolvlm_500m-parking_occupancy-CNRPark-instruct-without-context-with-expert", "base_model_relation": "base" }, { "model_id": "samlucas/smolvlm_500m-parking_occupancy-CNRPark-instruct-without-context-without-expert", "gated": "False", "card": "---\nbase_model: HuggingFaceTB/SmolVLM-500M-Instruct\nlibrary_name: peft\n---\n\n# Model Card for Model ID\n\n\n\n\n\n## Model Details\n\n### Model Description\n\n\n\n\n\n- **Developed by:** [More Information Needed]\n- **Funded by [optional]:** [More Information Needed]\n- **Shared by [optional]:** [More Information Needed]\n- **Model type:** [More Information Needed]\n- **Language(s) (NLP):** [More Information Needed]\n- **License:** [More Information Needed]\n- **Finetuned from model 
[optional]:** [More Information Needed]\n\n### Model Sources [optional]\n\n\n\n- **Repository:** [More Information Needed]\n- **Paper [optional]:** [More Information Needed]\n- **Demo [optional]:** [More Information Needed]\n\n## Uses\n\n\n\n### Direct Use\n\n\n\n[More Information Needed]\n\n### Downstream Use [optional]\n\n\n\n[More Information Needed]\n\n### Out-of-Scope Use\n\n\n\n[More Information Needed]\n\n## Bias, Risks, and Limitations\n\n\n\n[More Information Needed]\n\n### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.\n\n## How to Get Started with the Model\n\nUse the code below to get started with the model.\n\n[More Information Needed]\n\n## Training Details\n\n### Training Data\n\n\n\n[More Information Needed]\n\n### Training Procedure\n\n\n\n#### Preprocessing [optional]\n\n[More Information Needed]\n\n\n#### Training Hyperparameters\n\n- **Training regime:** [More Information Needed] \n\n#### Speeds, Sizes, Times [optional]\n\n\n\n[More Information Needed]\n\n## Evaluation\n\n\n\n### Testing Data, Factors & Metrics\n\n#### Testing Data\n\n\n\n[More Information Needed]\n\n#### Factors\n\n\n\n[More Information Needed]\n\n#### Metrics\n\n\n\n[More Information Needed]\n\n### Results\n\n[More Information Needed]\n\n#### Summary\n\n\n\n## Model Examination [optional]\n\n\n\n[More Information Needed]\n\n## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700).\n\n- **Hardware Type:** [More Information Needed]\n- **Hours used:** [More Information Needed]\n- **Cloud Provider:** [More Information Needed]\n- **Compute Region:** [More Information Needed]\n- **Carbon Emitted:** [More Information Needed]\n\n## Technical Specifications [optional]\n\n### Model Architecture and Objective\n\n[More Information Needed]\n\n### Compute Infrastructure\n\n[More Information Needed]\n\n#### Hardware\n\n[More Information Needed]\n\n#### Software\n\n[More Information Needed]\n\n## Citation [optional]\n\n\n\n**BibTeX:**\n\n[More Information Needed]\n\n**APA:**\n\n[More Information Needed]\n\n## Glossary [optional]\n\n\n\n[More Information Needed]\n\n## More Information [optional]\n\n[More Information Needed]\n\n## Model Card Authors [optional]\n\n[More Information Needed]\n\n## Model Card Contact\n\n[More Information Needed]\n### Framework versions\n\n- PEFT 0.15.2", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "HuggingFaceTB/SmolVLM-500M-Instruct" ], "base_model": "samlucas/smolvlm_500m-parking_occupancy-CNRPark-instruct-without-context-without-expert", "base_model_relation": "base" }, { "model_id": "bilal1998/SmolVLM-500M-Instruct-vqav2", "gated": "unknown", "card": "---\nlibrary_name: peft\nlicense: apache-2.0\nbase_model: HuggingFaceTB/SmolVLM-500M-Instruct\ntags:\n- generated_from_trainer\nmodel-index:\n- name: SmolVLM-500M-Instruct-vqav2\n results: []\n---\n\n\n\n# SmolVLM-500M-Instruct-vqav2\n\nThis model is a fine-tuned version of [HuggingFaceTB/SmolVLM-500M-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM-500M-Instruct) on the None dataset.\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation 
data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 8\n- eval_batch_size: 16\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 32\n- optimizer: Use paged_adamw_8bit with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 50\n- num_epochs: 100\n\n### Training results\n\n\n\n### Framework versions\n\n- PEFT 0.15.2\n- Transformers 4.52.4\n- Pytorch 2.7.1+cu126\n- Datasets 3.6.0\n- Tokenizers 0.21.1", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "HuggingFaceTB/SmolVLM-500M-Instruct" ], "base_model": null, "base_model_relation": null }, { "model_id": "HuggingFaceTB/SmolVLM2-500M-Video-Instruct", "gated": "False", "card": "---\nlibrary_name: transformers\nlicense: apache-2.0\ndatasets:\n- HuggingFaceM4/the_cauldron\n- HuggingFaceM4/Docmatix\n- lmms-lab/LLaVA-OneVision-Data\n- lmms-lab/M4-Instruct-Data\n- HuggingFaceFV/finevideo\n- MAmmoTH-VL/MAmmoTH-VL-Instruct-12M\n- lmms-lab/LLaVA-Video-178K\n- orrzohar/Video-STaR\n- Mutonix/Vript\n- TIGER-Lab/VISTA-400K\n- Enxin/MovieChat-1K_train\n- ShareGPT4Video/ShareGPT4Video\npipeline_tag: image-text-to-text\nlanguage:\n- en\nbase_model:\n- HuggingFaceTB/SmolVLM-500M-Instruct\n---\n\n\"Image\n\n# SmolVLM2-500M-Video\n\nSmolVLM2-500M-Video is a lightweight multimodal model designed to analyze video content. The model processes videos, images, and text inputs to generate text outputs - whether answering questions about media files, comparing visual content, or transcribing text from images. 
Despite its compact size, requiring only 1.8GB of GPU RAM for video inference, it delivers robust performance on complex multimodal tasks. This efficiency makes it particularly well-suited for on-device applications where computational resources may be limited.\n## Model Summary\n\n- **Developed by:** Hugging Face \ud83e\udd17\n- **Model type:** Multi-modal model (image/multi-image/video/text)\n- **Language(s) (NLP):** English\n- **License:** Apache 2.0\n- **Architecture:** Based on [Idefics3](https://huggingface.co/HuggingFaceM4/Idefics3-8B-Llama3) (see technical summary)\n\n## Resources\n\n- **Demo:** [Video Highlight Generator](https://huggingface.co/spaces/HuggingFaceTB/SmolVLM2-HighlightGenerator)\n- **Blog:** [Blog post](https://huggingface.co/blog/smolvlm2)\n\n## Uses\n\nSmolVLM2 can be used for inference on multimodal (video / image / text) tasks where the input consists of text queries along with video or one or more images. Text and media files can be interleaved arbitrarily, enabling tasks like captioning, visual question answering, and storytelling based on visual content. The model does not support image or video generation.\n\nTo fine-tune SmolVLM2 on a specific task, you can follow [the fine-tuning tutorial](https://github.com/huggingface/smollm/blob/main/vision/finetuning/Smol_VLM_FT.ipynb).\n\n## Evaluation \n\nWe evaluated the performance of the SmolVLM2 family on the following scientific benchmarks:\n\n| Size | Video-MME | MLVU | MVBench |\n|----------|-----------------|----------|---------------|\n| 2.2B | 52.1 | 55.2 | 46.27 |\n| 500M | 42.2 | 47.3 | 39.73 |\n| 256M | 33.7 | 40.6 | 32.7 |\n\n\n### How to get started\n\nYou can use transformers to load, infer and fine-tune SmolVLM. 
Make sure you have num2words, flash-attn, and the latest transformers installed.\nYou can load the model as follows.\n\n```python\nfrom transformers import AutoProcessor, AutoModelForImageTextToText\nimport torch\n\nmodel_path = \"HuggingFaceTB/SmolVLM2-500M-Video-Instruct\"\nprocessor = AutoProcessor.from_pretrained(model_path)\nmodel = AutoModelForImageTextToText.from_pretrained(\n model_path,\n torch_dtype=torch.bfloat16,\n _attn_implementation=\"flash_attention_2\"\n).to(\"cuda\")\n```\n\n#### Simple Inference\n\nYou can preprocess your inputs directly using chat templates and pass them to the model:\n\n```python\nmessages = [\n {\n \"role\": \"user\",\n \"content\": [\n {\"type\": \"image\", \"url\": \"https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/bee.jpg\"},\n {\"type\": \"text\", \"text\": \"Can you describe this image?\"},\n ]\n },\n]\n\ninputs = processor.apply_chat_template(\n messages,\n add_generation_prompt=True,\n tokenize=True,\n return_dict=True,\n return_tensors=\"pt\",\n).to(model.device, dtype=torch.bfloat16)\n\ngenerated_ids = model.generate(**inputs, do_sample=False, max_new_tokens=64)\ngenerated_texts = processor.batch_decode(\n generated_ids,\n skip_special_tokens=True,\n)\nprint(generated_texts[0])\n```\n\n#### Video Inference\n\nTo use SmolVLM2 for video inference, make sure you have decord installed.
\n\n```python\nmessages = [\n {\n \"role\": \"user\",\n \"content\": [\n {\"type\": \"video\", \"path\": \"path_to_video.mp4\"},\n {\"type\": \"text\", \"text\": \"Describe this video in detail\"}\n ]\n },\n]\n\ninputs = processor.apply_chat_template(\n messages,\n add_generation_prompt=True,\n tokenize=True,\n return_dict=True,\n return_tensors=\"pt\",\n).to(model.device, dtype=torch.bfloat16)\n\ngenerated_ids = model.generate(**inputs, do_sample=False, max_new_tokens=64)\ngenerated_texts = processor.batch_decode(\n generated_ids,\n skip_special_tokens=True,\n)\n\nprint(generated_texts[0])\n```\n#### Multi-image Interleaved Inference\n\nYou can interleave multiple media with text using chat templates.\n\n```python\nimport torch\n\n\nmessages = [\n {\n \"role\": \"user\",\n \"content\": [\n {\"type\": \"text\", \"text\": \"What is the similarity between these two images?\"},\n {\"type\": \"image\", \"url\": \"https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/bee.jpg\"},\n {\"type\": \"image\", \"url\": \"https://huggingface.co/datasets/huggingface/documentation-images/resolve/0052a70beed5bf71b92610a43a52df6d286cd5f3/diffusers/rabbit.jpg\"}, \n ]\n },\n]\n\ninputs = processor.apply_chat_template(\n messages,\n add_generation_prompt=True,\n tokenize=True,\n return_dict=True,\n return_tensors=\"pt\",\n).to(model.device, dtype=torch.bfloat16)\n\ngenerated_ids = model.generate(**inputs, do_sample=False, max_new_tokens=64)\ngenerated_texts = processor.batch_decode(\n generated_ids,\n skip_special_tokens=True,\n)\nprint(generated_texts[0])\n```\n\n\n### Model optimizations\n\n## Misuse and Out-of-scope Use\n\nSmolVLM is not intended for high-stakes scenarios or critical decision-making processes that affect an individual's well-being or livelihood. The model may produce content that appears factual but may not be accurate. 
Misuse includes, but is not limited to:\n\n- Prohibited Uses:\n - Evaluating or scoring individuals (e.g., in employment, education, credit)\n - Critical automated decision-making\n - Generating unreliable factual content\n- Malicious Activities:\n - Spam generation\n - Disinformation campaigns\n - Harassment or abuse\n - Unauthorized surveillance\n\n### License\n\nSmolVLM2 is built upon [SigLIP](https://huggingface.co/google/siglip-base-patch16-512) as the image encoder and [SmolLM2](https://huggingface.co/HuggingFaceTB/SmolLM2-360M-Instruct) as the text decoder.\n\nWe release the SmolVLM2 checkpoints under the Apache 2.0 license.\n\n## Citation information\nYou can cite us in the following way:\n```bibtex\n@article{marafioti2025smolvlm,\n title={SmolVLM: Redefining small and efficient multimodal models}, \n author={Andr\u00e9s Marafioti and Orr Zohar and Miquel Farr\u00e9 and Merve Noyan and Elie Bakouch and Pedro Cuenca and Cyril Zakka and Loubna Ben Allal and Anton Lozhkov and Nouamane Tazi and Vaibhav Srivastav and Joshua Lochner and Hugo Larcher and Mathieu Morlon and Lewis Tunstall and Leandro von Werra and Thomas Wolf},\n journal={arXiv preprint arXiv:2504.05299},\n year={2025}\n}\n```\n\n## Training Data\nSmolVLM2 was trained on 3.3M samples drawn from ten different datasets: [LlaVa Onevision](https://huggingface.co/datasets/lmms-lab/LLaVA-OneVision-Data), [M4-Instruct](https://huggingface.co/datasets/lmms-lab/M4-Instruct-Data), [Mammoth](https://huggingface.co/datasets/MAmmoTH-VL/MAmmoTH-VL-Instruct-12M), [LlaVa Video 178K](https://huggingface.co/datasets/lmms-lab/LLaVA-Video-178K), [FineVideo](https://huggingface.co/datasets/HuggingFaceFV/finevideo), [VideoStar](https://huggingface.co/datasets/orrzohar/Video-STaR), [VRipt](https://huggingface.co/datasets/Mutonix/Vript), [Vista-400K](https://huggingface.co/datasets/TIGER-Lab/VISTA-400K), [MovieChat](https://huggingface.co/datasets/Enxin/MovieChat-1K_train) and 
[ShareGPT4Video](https://huggingface.co/datasets/ShareGPT4Video/ShareGPT4Video).\nIn the following plots we give a general overview of the samples across modalities and the source of those samples.\n\n\n## Data Split per modality\n\n| Data Type | Percentage |\n|--------------|------------|\n| Image | 34.4% |\n| Text | 20.2% |\n| Video | 33.0% |\n| Multi-image | 12.3% |\n\n\n## Granular dataset slices per modality\n\n### Text Datasets\n| Dataset | Percentage |\n|--------------------------------------------|------------|\n| llava-onevision/magpie_pro_ft3_80b_mt | 6.8% |\n| llava-onevision/magpie_pro_ft3_80b_tt | 6.8% |\n| llava-onevision/magpie_pro_qwen2_72b_tt | 5.8% |\n| llava-onevision/mathqa | 0.9% |\n\n### Multi-image Datasets\n| Dataset | Percentage |\n|--------------------------------------------|------------|\n| m4-instruct-data/m4_instruct_multiimage | 10.4% |\n| mammoth/multiimage-cap6 | 1.9% |\n\n### Image Datasets\n| Dataset | Percentage |\n|--------------------------------------------|------------|\n| llava-onevision/other | 17.4% |\n| llava-onevision/vision_flan | 3.9% |\n| llava-onevision/mavis_math_metagen | 2.6% |\n| llava-onevision/mavis_math_rule_geo | 2.5% |\n| llava-onevision/sharegpt4o | 1.7% |\n| llava-onevision/sharegpt4v_coco | 1.5% |\n| llava-onevision/image_textualization | 1.3% |\n| llava-onevision/sharegpt4v_llava | 0.9% |\n| llava-onevision/mapqa | 0.9% |\n| llava-onevision/qa | 0.8% |\n| llava-onevision/textocr | 0.8% |\n\n### Video Datasets\n| Dataset | Percentage |\n|--------------------------------------------|------------|\n| llava-video-178k/1-2m | 7.3% |\n| llava-video-178k/2-3m | 7.0% |\n| other-video/combined | 5.7% |\n| llava-video-178k/hound | 4.4% |\n| llava-video-178k/0-30s | 2.4% |\n| video-star/starb | 2.2% |\n| vista-400k/combined | 2.2% |\n| vript/long | 1.0% |\n| ShareGPT4Video/all | 0.8% |\n", "metadata": "\"N/A\"", "depth": 1, "children": [ "mfarre/SmolVLM2-500M-Video-Instruct-emotions", 
"merve/SmolVLM2-500M-Video-Instruct-emotions", "merve/SmolVLM2-500M-Video-Instruct-videofeedback", "merve/SmolVLM2-500M-Video-Instruct-video-feedback", "AeonOmniverse/SmolVLM2-500M-Video-Instruct-video-feedback", "mpnikhil/SmolVLM2-500M-Video-Instruct-mpnikhil1", "Karthick2020/SmolVLM2-500M-Video-Instruct-video-feedback", "unreservedusername/SmolVLM2-500M-Video-Instruct-video-feedback", "badger-lord/SmolVLM2-500M-Video-Instruct-video-feedback", "sevimcengiz/SmolVLM2-500M-Video-Instruct-video-feedback", "Arnav0400/SmolVLM2-500M-Video-Instruct-video-feedback", "superenghb/SmolVLM2-500M-Video-Instruct-video-feedback", "mosherosen/SmolVLM2-500M-Video-Instruct-video-feedback", "lukesutor/SmolVLM-500M-ActivityTracking", "mlevytskyi/SmolVLM2-500M-Video-Instruct-video-feedback", "AFZAL0008/SmolVLM2-500M-Video-Instruct-video-feedback", "mlevytskyi/SmolVLM2-500M-Video-Instruct-coco-kaggle", "liuhuanjim013/SmolVLM2-500M-Video-Instruct-video-feedback", "MRIII0917/SmolVLM2-500M-Video-Instruct-video-feedback", "huggingFaceOfNabil/SmolVLM2-500M-Video-Instruct-dense", "rainorangelemon2/smolvlm-instruct-trl-sft-ChartQA", "rainorangelemon2/smolgemma-waymo-stage-1", "rainorangelemon2/smolgemma-waymo-stage-2" ], "children_count": 23, "adapters": [ "GKC96/SmolVLM2-500M-Video-Instruct-video-qna", "xco2/smolvlm2-500M-illustration-description" ], "adapters_count": 2, "quantized": [ "ggml-org/SmolVLM2-500M-Video-Instruct-GGUF", "mradermacher/SmolVLM2-500M-Video-Instruct-GGUF", "mradermacher/SmolVLM2-500M-Video-Instruct-i1-GGUF", "second-state/SmolVLM2-500M-Video-Instruct-GGUF", "gaianet/SmolVLM2-500M-Video-Instruct-GGUF", "DevQuasar/HuggingFaceTB.SmolVLM2-500M-Video-Instruct-GGUF", "AXERA-TECH/SmolVLM2-500M-Video-Instruct" ], "quantized_count": 7, "merges": [], "merges_count": 0, "total_derivatives": 32, "spaces": [], "spaces_count": 0, "parents": [ "HuggingFaceTB/SmolVLM-500M-Instruct" ], "base_model": "HuggingFaceTB/SmolVLM2-500M-Video-Instruct", "base_model_relation": "base" }, { 
"model_id": "moot20/SmolVLM-500M-Instruct-MLX-4bits", "gated": "False", "card": "---\nlibrary_name: transformers\nlicense: apache-2.0\ndatasets:\n- HuggingFaceM4/the_cauldron\n- HuggingFaceM4/Docmatix\npipeline_tag: image-text-to-text\nlanguage:\n- en\nbase_model:\n- HuggingFaceTB/SmolVLM-500M-Instruct\nbase_model_relation: quantized\ntags:\n- mlx\n---\n\n# moot20/SmolVLM-500M-Instruct-MLX-4bits\nThis model was converted to MLX format from [`HuggingFaceTB/SmolVLM-500M-Instruct`]() using mlx-vlm version **0.1.12**.\nRefer to the [original model card](https://huggingface.co/HuggingFaceTB/SmolVLM-500M-Instruct) for more details on the model.\n## Use with mlx\n\n```bash\npip install -U mlx-vlm\n```\n\n```bash\npython -m mlx_vlm.generate --model moot20/SmolVLM-500M-Instruct-MLX-4bits --max-tokens 100 --temp 0.0 --prompt \"Describe this image.\" --image \n```\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "HuggingFaceTB/SmolVLM-500M-Instruct" ], "base_model": "moot20/SmolVLM-500M-Instruct-MLX", "base_model_relation": "finetune" }, { "model_id": "moot20/SmolVLM-500M-Instruct-MLX-6bits", "gated": "False", "card": "---\nlibrary_name: transformers\nlicense: apache-2.0\ndatasets:\n- HuggingFaceM4/the_cauldron\n- HuggingFaceM4/Docmatix\npipeline_tag: image-text-to-text\nlanguage:\n- en\nbase_model:\n- HuggingFaceTB/SmolVLM-500M-Instruct\nbase_model_relation: quantized\ntags:\n- mlx\n---\n\n# moot20/SmolVLM-500M-Instruct-MLX-6bits\nThis model was converted to MLX format from [`HuggingFaceTB/SmolVLM-500M-Instruct`]() using mlx-vlm version **0.1.12**.\nRefer to the [original model card](https://huggingface.co/HuggingFaceTB/SmolVLM-500M-Instruct) for more details on the model.\n## Use with mlx\n\n```bash\npip install -U mlx-vlm\n```\n\n```bash\npython -m mlx_vlm.generate 
--model moot20/SmolVLM-500M-Instruct-MLX-6bits --max-tokens 100 --temp 0.0 --prompt \"Describe this image.\" --image \n```\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "HuggingFaceTB/SmolVLM-500M-Instruct" ], "base_model": "moot20/SmolVLM-500M-Instruct-MLX", "base_model_relation": "finetune" }, { "model_id": "moot20/SmolVLM-500M-Instruct-MLX-8bits", "gated": "False", "card": "---\nlibrary_name: transformers\nlicense: apache-2.0\ndatasets:\n- HuggingFaceM4/the_cauldron\n- HuggingFaceM4/Docmatix\npipeline_tag: image-text-to-text\nlanguage:\n- en\nbase_model:\n- HuggingFaceTB/SmolVLM-500M-Instruct\nbase_model_relation: quantized\ntags:\n- mlx\n---\n\n# moot20/SmolVLM-500M-Instruct-MLX-8bits\nThis model was converted to MLX format from [`HuggingFaceTB/SmolVLM-500M-Instruct`]() using mlx-vlm version **0.1.12**.\nRefer to the [original model card](https://huggingface.co/HuggingFaceTB/SmolVLM-500M-Instruct) for more details on the model.\n## Use with mlx\n\n```bash\npip install -U mlx-vlm\n```\n\n```bash\npython -m mlx_vlm.generate --model moot20/SmolVLM-500M-Instruct-MLX-8bits --max-tokens 100 --temp 0.0 --prompt \"Describe this image.\" --image \n```\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "HuggingFaceTB/SmolVLM-500M-Instruct" ], "base_model": "moot20/SmolVLM-500M-Instruct-MLX", "base_model_relation": "finetune" }, { "model_id": "moot20/SmolVLM-500M-Instruct-MLX", "gated": "unknown", "card": "---\nlibrary_name: transformers\nlicense: apache-2.0\ndatasets:\n- HuggingFaceM4/the_cauldron\n- HuggingFaceM4/Docmatix\npipeline_tag: 
image-text-to-text\nlanguage:\n- en\nbase_model:\n- HuggingFaceTB/SmolVLM-500M-Instruct\nbase_model_relation: quantized\ntags:\n- mlx\n---\n\n# moot20/SmolVLM-500M-Instruct-MLX\nThis model was converted to MLX format from [`HuggingFaceTB/SmolVLM-500M-Instruct`]() using mlx-vlm version **0.1.12**.\nRefer to the [original model card](https://huggingface.co/HuggingFaceTB/SmolVLM-500M-Instruct) for more details on the model.\n## Use with mlx\n\n```bash\npip install -U mlx-vlm\n```\n\n```bash\npython -m mlx_vlm.generate --model moot20/SmolVLM-500M-Instruct-MLX --max-tokens 100 --temp 0.0 --prompt \"Describe this image.\" --image \n```\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "HuggingFaceTB/SmolVLM-500M-Instruct" ], "base_model": null, "base_model_relation": null }, { "model_id": "ggml-org/SmolVLM-500M-Instruct-GGUF", "gated": "False", "card": "---\nlicense: apache-2.0\nbase_model: HuggingFaceTB/SmolVLM-500M-Instruct\n---\n\n# SmolVLM-500M-Instruct\n\nOriginal model: https://huggingface.co/HuggingFaceTB/SmolVLM-500M-Instruct\n\nFor more info, please refer to this PR: https://github.com/ggml-org/llama.cpp/pull/13050\n\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "HuggingFaceTB/SmolVLM-500M-Instruct" ], "base_model": "ggml-org/SmolVLM-500M-Instruct-GGUF", "base_model_relation": "base" }, { "model_id": "mradermacher/SmolVLM-500M-Instruct-GGUF", "gated": "False", "card": "---\nbase_model: HuggingFaceTB/SmolVLM-500M-Instruct\ndatasets:\n- HuggingFaceM4/the_cauldron\n- HuggingFaceM4/Docmatix\nlanguage:\n- en\nlibrary_name: transformers\nlicense: 
apache-2.0\nquantized_by: mradermacher\n---\n## About\n\n\n\n\n\n\nstatic quants of https://huggingface.co/HuggingFaceTB/SmolVLM-500M-Instruct\n\n\nweighted/imatrix quants are available at https://huggingface.co/mradermacher/SmolVLM-500M-Instruct-i1-GGUF\n## Usage\n\nIf you are unsure how to use GGUF files, refer to one of [TheBloke's\nREADMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for\nmore details, including on how to concatenate multi-part files.\n\n## Provided Quants\n\n(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)\n\n| Link | Type | Size/GB | Notes |\n|:-----|:-----|--------:|:------|\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM-500M-Instruct-GGUF/resolve/main/SmolVLM-500M-Instruct.Q2_K.gguf) | Q2_K | 0.3 | |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM-500M-Instruct-GGUF/resolve/main/SmolVLM-500M-Instruct.Q3_K_S.gguf) | Q3_K_S | 0.3 | |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM-500M-Instruct-GGUF/resolve/main/SmolVLM-500M-Instruct.IQ4_XS.gguf) | IQ4_XS | 0.4 | |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM-500M-Instruct-GGUF/resolve/main/SmolVLM-500M-Instruct.Q3_K_M.gguf) | Q3_K_M | 0.4 | lower quality |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM-500M-Instruct-GGUF/resolve/main/SmolVLM-500M-Instruct.Q3_K_L.gguf) | Q3_K_L | 0.4 | |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM-500M-Instruct-GGUF/resolve/main/SmolVLM-500M-Instruct.Q4_K_S.gguf) | Q4_K_S | 0.4 | fast, recommended |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM-500M-Instruct-GGUF/resolve/main/SmolVLM-500M-Instruct.Q4_K_M.gguf) | Q4_K_M | 0.4 | fast, recommended |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM-500M-Instruct-GGUF/resolve/main/SmolVLM-500M-Instruct.Q5_K_S.gguf) | Q5_K_S | 0.4 | |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM-500M-Instruct-GGUF/resolve/main/SmolVLM-500M-Instruct.Q5_K_M.gguf) | Q5_K_M | 0.4 | |\n| 
[GGUF](https://huggingface.co/mradermacher/SmolVLM-500M-Instruct-GGUF/resolve/main/SmolVLM-500M-Instruct.Q6_K.gguf) | Q6_K | 0.5 | very good quality |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM-500M-Instruct-GGUF/resolve/main/SmolVLM-500M-Instruct.Q8_0.gguf) | Q8_0 | 0.5 | fast, best quality |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM-500M-Instruct-GGUF/resolve/main/SmolVLM-500M-Instruct.f16.gguf) | f16 | 0.9 | 16 bpw, overkill |\n\nHere is a handy graph by ikawrakow comparing some lower-quality quant\ntypes (lower is better):\n\n![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)\n\nAnd here are Artefact2's thoughts on the matter:\nhttps://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9\n\n## FAQ / Model Request\n\nSee https://huggingface.co/mradermacher/model_requests for some answers to\nquestions you might have and/or if you want some other model quantized.\n\n## Thanks\n\nI thank my company, [nethype GmbH](https://www.nethype.de/), for letting\nme use its servers and providing upgrades to my workstation to enable\nthis work in my free time.\n\n\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "HuggingFaceTB/SmolVLM-500M-Instruct" ], "base_model": "mradermacher/SmolVLM-500M-Instruct-GGUF", "base_model_relation": "base" }, { "model_id": "mradermacher/SmolVLM-500M-Instruct-i1-GGUF", "gated": "False", "card": "---\nbase_model: HuggingFaceTB/SmolVLM-500M-Instruct\ndatasets:\n- HuggingFaceM4/the_cauldron\n- HuggingFaceM4/Docmatix\nlanguage:\n- en\nlibrary_name: transformers\nlicense: apache-2.0\nquantized_by: mradermacher\n---\n## About\n\n\n\n\n\n\nweighted/imatrix quants of https://huggingface.co/HuggingFaceTB/SmolVLM-500M-Instruct\n\n\nstatic quants are available at 
https://huggingface.co/mradermacher/SmolVLM-500M-Instruct-GGUF\n## Usage\n\nIf you are unsure how to use GGUF files, refer to one of [TheBloke's\nREADMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for\nmore details, including on how to concatenate multi-part files.\n\n## Provided Quants\n\n(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)\n\n| Link | Type | Size/GB | Notes |\n|:-----|:-----|--------:|:------|\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM-500M-Instruct-i1-GGUF/resolve/main/SmolVLM-500M-Instruct.i1-IQ1_S.gguf) | i1-IQ1_S | 0.3 | for the desperate |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM-500M-Instruct-i1-GGUF/resolve/main/SmolVLM-500M-Instruct.i1-IQ1_M.gguf) | i1-IQ1_M | 0.3 | mostly desperate |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM-500M-Instruct-i1-GGUF/resolve/main/SmolVLM-500M-Instruct.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.3 | |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM-500M-Instruct-i1-GGUF/resolve/main/SmolVLM-500M-Instruct.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.3 | |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM-500M-Instruct-i1-GGUF/resolve/main/SmolVLM-500M-Instruct.i1-IQ2_S.gguf) | i1-IQ2_S | 0.3 | |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM-500M-Instruct-i1-GGUF/resolve/main/SmolVLM-500M-Instruct.i1-IQ2_M.gguf) | i1-IQ2_M | 0.3 | |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM-500M-Instruct-i1-GGUF/resolve/main/SmolVLM-500M-Instruct.i1-Q2_K_S.gguf) | i1-Q2_K_S | 0.3 | very low quality |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM-500M-Instruct-i1-GGUF/resolve/main/SmolVLM-500M-Instruct.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.3 | lower quality |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM-500M-Instruct-i1-GGUF/resolve/main/SmolVLM-500M-Instruct.i1-IQ3_S.gguf) | i1-IQ3_S | 0.3 | beats Q3_K* |\n| 
[GGUF](https://huggingface.co/mradermacher/SmolVLM-500M-Instruct-i1-GGUF/resolve/main/SmolVLM-500M-Instruct.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.3 | |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM-500M-Instruct-i1-GGUF/resolve/main/SmolVLM-500M-Instruct.i1-Q2_K.gguf) | i1-Q2_K | 0.3 | IQ3_XXS probably better |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM-500M-Instruct-i1-GGUF/resolve/main/SmolVLM-500M-Instruct.i1-Q3_K_S.gguf) | i1-Q3_K_S | 0.3 | IQ3_XS probably better |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM-500M-Instruct-i1-GGUF/resolve/main/SmolVLM-500M-Instruct.i1-IQ3_M.gguf) | i1-IQ3_M | 0.4 | |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM-500M-Instruct-i1-GGUF/resolve/main/SmolVLM-500M-Instruct.i1-IQ4_XS.gguf) | i1-IQ4_XS | 0.4 | |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM-500M-Instruct-i1-GGUF/resolve/main/SmolVLM-500M-Instruct.i1-IQ4_NL.gguf) | i1-IQ4_NL | 0.4 | prefer IQ4_XS |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM-500M-Instruct-i1-GGUF/resolve/main/SmolVLM-500M-Instruct.i1-Q4_0.gguf) | i1-Q4_0 | 0.4 | fast, low quality |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM-500M-Instruct-i1-GGUF/resolve/main/SmolVLM-500M-Instruct.i1-Q3_K_M.gguf) | i1-Q3_K_M | 0.4 | IQ3_S probably better |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM-500M-Instruct-i1-GGUF/resolve/main/SmolVLM-500M-Instruct.i1-Q3_K_L.gguf) | i1-Q3_K_L | 0.4 | IQ3_M probably better |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM-500M-Instruct-i1-GGUF/resolve/main/SmolVLM-500M-Instruct.i1-Q4_1.gguf) | i1-Q4_1 | 0.4 | |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM-500M-Instruct-i1-GGUF/resolve/main/SmolVLM-500M-Instruct.i1-Q4_K_S.gguf) | i1-Q4_K_S | 0.4 | optimal size/speed/quality |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM-500M-Instruct-i1-GGUF/resolve/main/SmolVLM-500M-Instruct.i1-Q4_K_M.gguf) | i1-Q4_K_M | 0.4 | fast, recommended |\n| 
[GGUF](https://huggingface.co/mradermacher/SmolVLM-500M-Instruct-i1-GGUF/resolve/main/SmolVLM-500M-Instruct.i1-Q5_K_S.gguf) | i1-Q5_K_S | 0.4 | |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM-500M-Instruct-i1-GGUF/resolve/main/SmolVLM-500M-Instruct.i1-Q5_K_M.gguf) | i1-Q5_K_M | 0.4 | |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM-500M-Instruct-i1-GGUF/resolve/main/SmolVLM-500M-Instruct.i1-Q6_K.gguf) | i1-Q6_K | 0.5 | practically like static Q6_K |\n\nHere is a handy graph by ikawrakow comparing some lower-quality quant\ntypes (lower is better):\n\n![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)\n\nAnd here are Artefact2's thoughts on the matter:\nhttps://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9\n\n## FAQ / Model Request\n\nSee https://huggingface.co/mradermacher/model_requests for some answers to\nquestions you might have and/or if you want some other model quantized.\n\n## Thanks\n\nI thank my company, [nethype GmbH](https://www.nethype.de/), for letting\nme use its servers and providing upgrades to my workstation to enable\nthis work in my free time. 
Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.\n\n\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "HuggingFaceTB/SmolVLM-500M-Instruct" ], "base_model": "mradermacher/SmolVLM-500M-Instruct-i1-GGUF", "base_model_relation": "base" }, { "model_id": "VyoJ/SmolVLM-500M-Instruct-be-GGUF", "gated": "unknown", "card": "---\nlicense: apache-2.0\nlanguage:\n- en\nbase_model:\n- ggml-org/SmolVLM-500M-Instruct-GGUF\n- HuggingFaceTB/SmolVLM-500M-Instruct\npipeline_tag: image-text-to-text\nlibrary_name: transformers\n---\n\n# Model Information\n\nSmolVLM-500M is a tiny multimodal model by HuggingFace. It was converted to the GGUF format by ggml-org.\n\nI converted it to a big-endian format and uploaded it for use on IBM z/OS machines.\n\n**Model developer**: HuggingFace\n\n**Model Architecture**: Based on Idefics3\n\n**License**: Apache 2.0\n\nFor more details on the model, please see HuggingFace's original [model card](https://huggingface.co/HuggingFaceTB/SmolVLM-500M-Instruct)", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "HuggingFaceTB/SmolVLM-500M-Instruct" ], "base_model": null, "base_model_relation": null }, { "model_id": "vidore/colSmol-500M", "gated": "False", "card": "---\nlicense: mit\nlibrary_name: colpali\nbase_model: vidore/ColSmolVLM-Instruct-500M\nlanguage:\n- en\ntags:\n- colsmolvlm\n- vidore-experimental\n- vidore\npipeline_tag: visual-document-retrieval\n---\n# ColSmolVLM-Instruct-500M: 
Visual Retriever based on SmolVLM-Instruct-500M with ColBERT strategy\n\n### This is a version trained with batch_size 32 for 3 epochs\n\nColSmolVLM builds on a novel model architecture and training strategy that uses Vision Language Models (VLMs) to efficiently index documents from their visual features.\nIt is a SmolVLM extension that generates [ColBERT](https://arxiv.org/abs/2004.12832)-style multi-vector representations of text and images. \nIt was introduced in the paper [ColPali: Efficient Document Retrieval with Vision Language Models](https://arxiv.org/abs/2407.01449) and first released in [this repository](https://github.com/ManuelFay/colpali).\n\n

\n\n## Version specificity\n\nThis version is trained with commit b983e40 of the ColPali repository (main branch of the repo).\n\nData is the same as the ColPali data described in the paper.\n\n\n## Model Training\n\n### Dataset\nOur training dataset of 127,460 query-page pairs comprises train sets of openly available academic datasets (63%) and a synthetic dataset made up of pages from web-crawled PDF documents and augmented with VLM-generated (Claude-3 Sonnet) pseudo-questions (37%). \nOur training set is fully English by design, enabling us to study zero-shot generalization to non-English languages. We explicitly verify that no multi-page PDF document is used in both [*ViDoRe*](https://huggingface.co/collections/vidore/vidore-benchmark-667173f98e70a1c0fa4db00d) and the train set to prevent evaluation contamination. \nA validation set is created with 2% of the samples to tune hyperparameters.\n\n*Note: Multilingual data is present in the pretraining corpus of the language model and most probably in the multimodal training.*\n\n### Parameters\n\nUnless specified otherwise, we train models in `bfloat16` format, use low-rank adapters ([LoRA](https://arxiv.org/abs/2106.09685)) \nwith `alpha=32` and `r=32` on the transformer layers from the language model, \nas well as the final randomly initialized projection layer, and use a `paged_adamw_8bit` optimizer. 
\nWe train on a 4 GPU setup with data parallelism, a learning rate of 5e-4 with linear decay with 2.5% warmup steps, and a batch size of 8.\n\n## Usage\n\nMake sure `colpali-engine` is installed from source or with a version greater than 0.3.5 (currently the main branch of the repo).\n`transformers` version must be > 4.46.2.\n\n```bash\npip install git+https://github.com/illuin-tech/colpali\n```\n\n```python\nimport torch\nfrom PIL import Image\n\nfrom colpali_engine.models import ColIdefics3, ColIdefics3Processor\n\nmodel = ColIdefics3.from_pretrained(\n \"vidore/colSmol-500M\",\n torch_dtype=torch.bfloat16,\n device_map=\"cuda:0\",\n attn_implementation=\"flash_attention_2\" # or eager\n ).eval()\nprocessor = ColIdefics3Processor.from_pretrained(\"vidore/colSmol-500M\")\n\n# Your inputs\nimages = [\n Image.new(\"RGB\", (32, 32), color=\"white\"),\n Image.new(\"RGB\", (16, 16), color=\"black\"),\n]\nqueries = [\n \"Is attention really all you need?\",\n \"What is the amount of bananas farmed in Salvador?\",\n]\n\n# Process the inputs\nbatch_images = processor.process_images(images).to(model.device)\nbatch_queries = processor.process_queries(queries).to(model.device)\n\n# Forward pass\nwith torch.no_grad():\n image_embeddings = model(**batch_images)\n query_embeddings = model(**batch_queries)\n\nscores = processor.score_multi_vector(query_embeddings, image_embeddings)\n```\n\n\n## Limitations\n\n - **Focus**: The model primarily focuses on PDF-type documents and high-resource languages, potentially limiting its generalization to other document types or less represented languages.\n - **Support**: The model relies on multi-vector retrieval derived from the ColBERT late interaction mechanism, which may require engineering efforts to adapt to widely used vector retrieval frameworks that lack native multi-vector support.\n\n## License\n\nColSmolVLM's vision language backbone model (SmolVLM) is under the `apache-2.0` license. 
The adapters attached to the model are under MIT license.\n\n## Contact\n\n- Manuel Faysse: manuel.faysse@illuin.tech\n- Hugues Sibille: hugues.sibille@illuin.tech\n- Tony Wu: tony.wu@illuin.tech\n\n## Citation\n\nIf you use any datasets or models from this organization in your research, please cite the original paper as follows:\n\n```bibtex\n@misc{faysse2024colpaliefficientdocumentretrieval,\n title={ColPali: Efficient Document Retrieval with Vision Language Models}, \n author={Manuel Faysse and Hugues Sibille and Tony Wu and Bilel Omrani and Gautier Viaud and C\u00e9line Hudelot and Pierre Colombo},\n year={2024},\n eprint={2407.01449},\n archivePrefix={arXiv},\n primaryClass={cs.IR},\n url={https://arxiv.org/abs/2407.01449}, \n}\n```", "metadata": "\"N/A\"", "depth": 2, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "vidore/ColSmolVLM-Instruct-500M-base" ], "base_model": "vidore/colSmol", "base_model_relation": "finetune" }, { "model_id": "thoddnn/colSmol-500M", "gated": "False", "card": "---\nlicense: mit\nlibrary_name: colpali\nbase_model: vidore/ColSmolVLM-Instruct-500M\nlanguage:\n- en\ntags:\n- colsmolvlm\n- vidore-experimental\n- vidore\npipeline_tag: visual-document-retrieval\n---\n# ColSmolVLM-Instruct-500M: Visual Retriever based on SmolVLM-Instruct-500M with ColBERT strategy\n\n### This is a version trained with batch_size 32 for 3 epochs\n\nColSmolVLM builds on a novel model architecture and training strategy that uses Vision Language Models (VLMs) to efficiently index documents from their visual features.\nIt is a SmolVLM extension that generates [ColBERT](https://arxiv.org/abs/2004.12832)-style multi-vector representations of text and images. 
\nIt was introduced in the paper [ColPali: Efficient Document Retrieval with Vision Language Models](https://arxiv.org/abs/2407.01449) and first released in [this repository](https://github.com/ManuelFay/colpali).\n\n

\n\n## Version specificity\n\nThis version is trained with commit b983e40 of the ColPali repository (main branch of the repo).\n\nData is the same as the ColPali data described in the paper.\n\n\n## Model Training\n\n### Dataset\nOur training dataset of 127,460 query-page pairs comprises train sets of openly available academic datasets (63%) and a synthetic dataset made up of pages from web-crawled PDF documents and augmented with VLM-generated (Claude-3 Sonnet) pseudo-questions (37%). \nOur training set is fully English by design, enabling us to study zero-shot generalization to non-English languages. We explicitly verify that no multi-page PDF document is used in both [*ViDoRe*](https://huggingface.co/collections/vidore/vidore-benchmark-667173f98e70a1c0fa4db00d) and the train set to prevent evaluation contamination. \nA validation set is created with 2% of the samples to tune hyperparameters.\n\n*Note: Multilingual data is present in the pretraining corpus of the language model and most probably in the multimodal training.*\n\n### Parameters\n\nUnless specified otherwise, we train models in `bfloat16` format, use low-rank adapters ([LoRA](https://arxiv.org/abs/2106.09685)) \nwith `alpha=32` and `r=32` on the transformer layers from the language model, \nas well as the final randomly initialized projection layer, and use a `paged_adamw_8bit` optimizer. 
\nWe train on a 4 GPU setup with data parallelism, a learning rate of 5e-4 with linear decay with 2.5% warmup steps, and a batch size of 8.\n\n## Usage\n\nMake sure `colpali-engine` is installed from source or with a version greater than 0.3.5 (currently the main branch of the repo).\n`transformers` version must be > 4.46.2.\n\n```bash\npip install git+https://github.com/illuin-tech/colpali\n```\n\n```python\nimport torch\nfrom PIL import Image\n\nfrom colpali_engine.models import ColIdefics3, ColIdefics3Processor\n\nmodel = ColIdefics3.from_pretrained(\n \"vidore/colSmol-500M\",\n torch_dtype=torch.bfloat16,\n device_map=\"cuda:0\",\n attn_implementation=\"flash_attention_2\" # or eager\n ).eval()\nprocessor = ColIdefics3Processor.from_pretrained(\"vidore/colSmol-500M\")\n\n# Your inputs\nimages = [\n Image.new(\"RGB\", (32, 32), color=\"white\"),\n Image.new(\"RGB\", (16, 16), color=\"black\"),\n]\nqueries = [\n \"Is attention really all you need?\",\n \"What is the amount of bananas farmed in Salvador?\",\n]\n\n# Process the inputs\nbatch_images = processor.process_images(images).to(model.device)\nbatch_queries = processor.process_queries(queries).to(model.device)\n\n# Forward pass\nwith torch.no_grad():\n image_embeddings = model(**batch_images)\n query_embeddings = model(**batch_queries)\n\nscores = processor.score_multi_vector(query_embeddings, image_embeddings)\n```\n\n\n## Limitations\n\n - **Focus**: The model primarily focuses on PDF-type documents and high-resource languages, potentially limiting its generalization to other document types or less represented languages.\n - **Support**: The model relies on multi-vector retrieval derived from the ColBERT late interaction mechanism, which may require engineering efforts to adapt to widely used vector retrieval frameworks that lack native multi-vector support.\n\n## License\n\nColSmolVLM's vision language backbone model (SmolVLM) is under the `apache-2.0` license. 
The adapters attached to the model are under MIT license.\n\n## Contact\n\n- Manuel Faysse: manuel.faysse@illuin.tech\n- Hugues Sibille: hugues.sibille@illuin.tech\n- Tony Wu: tony.wu@illuin.tech\n\n## Citation\n\nIf you use any datasets or models from this organization in your research, please cite the original dataset as follows:\n\n```bibtex\n@misc{faysse2024colpaliefficientdocumentretrieval,\n title={ColPali: Efficient Document Retrieval with Vision Language Models}, \n author={Manuel Faysse and Hugues Sibille and Tony Wu and Bilel Omrani and Gautier Viaud and C\u00e9line Hudelot and Pierre Colombo},\n year={2024},\n eprint={2407.01449},\n archivePrefix={arXiv},\n primaryClass={cs.IR},\n url={https://arxiv.org/abs/2407.01449}, \n}\n```", "metadata": "\"N/A\"", "depth": 2, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "vidore/ColSmolVLM-Instruct-500M-base" ], "base_model": "thoddnn/colSmol", "base_model_relation": "finetune" }, { "model_id": "ingenio/IndoColSmol-500M", "gated": "False", "card": "---\nlibrary_name: transformers\nlicense: mit\nbase_model: vidore/ColSmolVLM-Instruct-500M-base\ntags:\n- colpali\n- generated_from_trainer\nmodel-index:\n- name: IndoColSmol-500M\n results: []\n---\n\n\n\n# IndoColSmol-500M\n\nThis model is a fine-tuned version of [vidore/ColSmolVLM-Instruct-500M-base](https://huggingface.co/vidore/ColSmolVLM-Instruct-500M-base) on the ingenio/indodvqa_dataset dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.3641\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- 
train_batch_size: 32\n- eval_batch_size: 32\n- seed: 42\n- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 2\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss |\n|:-------------:|:------:|:----:|:---------------:|\n| No log | 0.0099 | 1 | 0.4474 |\n| 0.4523 | 0.3960 | 40 | 0.4055 |\n| 0.3996 | 0.7921 | 80 | 0.3804 |\n| 0.3637 | 1.1881 | 120 | 0.3687 |\n| 0.345 | 1.5842 | 160 | 0.3627 |\n| 0.3466 | 1.9802 | 200 | 0.3630 |\n\n\n### Framework versions\n\n- Transformers 4.51.3\n- Pytorch 2.6.0+cu124\n- Datasets 3.5.1\n- Tokenizers 0.21.1\n", "metadata": "\"N/A\"", "depth": 2, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "vidore/ColSmolVLM-Instruct-500M-base" ], "base_model": "ingenio/IndoColSmol", "base_model_relation": "finetune" }, { "model_id": "Oysiyl/colSmol-500M_ufo", "gated": "False", "card": "---\nlibrary_name: peft\nlicense: mit\nbase_model: vidore/ColSmolVLM-Instruct-500M-base\ntags:\n- generated_from_trainer\nmodel-index:\n- name: colSmol-500M_ufo\n results: []\n---\n\n\n\n# colSmol-500M_ufo\n\nThis model is a fine-tuned version of [vidore/ColSmolVLM-Instruct-500M-base](https://huggingface.co/vidore/ColSmolVLM-Instruct-500M-base) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.0878\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- 
optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 100\n- num_epochs: 1\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss |\n|:-------------:|:------:|:----:|:---------------:|\n| 0.1306 | 0.1636 | 80 | 0.1418 |\n| 0.0751 | 0.3272 | 160 | 0.1086 |\n| 0.0823 | 0.4908 | 240 | 0.0912 |\n| 0.0513 | 0.6544 | 320 | 0.0887 |\n| 0.0475 | 0.8180 | 400 | 0.0865 |\n| 0.0572 | 0.9816 | 480 | 0.0878 |\n\n\n### Framework versions\n\n- PEFT 0.15.2\n- Transformers 4.51.3\n- Pytorch 2.6.0+cu124\n- Datasets 3.3.1\n- Tokenizers 0.21.0", "metadata": "\"N/A\"", "depth": 2, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "vidore/ColSmolVLM-Instruct-500M-base" ], "base_model": "Oysiyl/colSmol-500M_ufo", "base_model_relation": "base" }, { "model_id": "mfarre/SmolVLM2-500M-Video-Instruct-emotions", "gated": "False", "card": "---\nlibrary_name: transformers\nlicense: apache-2.0\nbase_model: HuggingFaceTB/SmolVLM2-500M-Video-Instruct\ntags:\n- generated_from_trainer\nmodel-index:\n- name: SmolVLM2-500M-Video-Instruct-emotions\n results: []\n---\n\n\n\n# SmolVLM2-500M-Video-Instruct-emotions\n\nThis model is a fine-tuned version of [HuggingFaceTB/SmolVLM2-500M-Video-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM2-500M-Video-Instruct) on an unknown dataset.\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Use adamw_hf with 
betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 50\n- num_epochs: 1\n\n### Training results\n\n\n\n### Framework versions\n\n- Transformers 4.49.0.dev0\n- Pytorch 2.6.0+cu124\n- Tokenizers 0.21.0\n", "metadata": "\"N/A\"", "depth": 2, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "HuggingFaceTB/SmolVLM2-500M-Video-Instruct" ], "base_model": "mfarre/SmolVLM2-500M-Video-Instruct-emotions", "base_model_relation": "base" }, { "model_id": "merve/SmolVLM2-500M-Video-Instruct-emotions", "gated": "False", "card": "---\nlibrary_name: transformers\nlicense: apache-2.0\nbase_model: HuggingFaceTB/SmolVLM2-500M-Video-Instruct\ntags:\n- generated_from_trainer\nmodel-index:\n- name: SmolVLM2-500M-Video-Instruct-emotions\n results: []\n---\n\n\n\n# SmolVLM2-500M-Video-Instruct-emotions\n\nThis model is a fine-tuned version of [HuggingFaceTB/SmolVLM2-500M-Video-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM2-500M-Video-Instruct) on an unknown dataset.\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Use OptimizerNames.ADAMW_HF with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 50\n- num_epochs: 1\n\n### Training results\n\n\n\n### Framework versions\n\n- Transformers 4.50.0.dev0\n- Pytorch 2.5.1+cu124\n- Datasets 3.3.1\n- Tokenizers 0.21.0\n", "metadata": "\"N/A\"", "depth": 2, 
"children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "HuggingFaceTB/SmolVLM2-500M-Video-Instruct" ], "base_model": "merve/SmolVLM2-500M-Video-Instruct-emotions", "base_model_relation": "base" }, { "model_id": "merve/SmolVLM2-500M-Video-Instruct-videofeedback", "gated": "False", "card": "---\nlibrary_name: transformers\nlicense: apache-2.0\nbase_model: HuggingFaceTB/SmolVLM2-500M-Video-Instruct\ntags:\n- generated_from_trainer\nmodel-index:\n- name: SmolVLM2-500M-Video-Instruct-videofeedback\n results: []\n---\n\n\n\n# SmolVLM2-500M-Video-Instruct-videofeedback\n\nThis model is a fine-tuned version of [HuggingFaceTB/SmolVLM2-500M-Video-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM2-500M-Video-Instruct) on an unknown dataset.\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Use OptimizerNames.ADAMW_HF with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 50\n- num_epochs: 1\n\n### Training results\n\n\n\n### Framework versions\n\n- Transformers 4.50.0.dev0\n- Pytorch 2.5.1+cu124\n- Datasets 3.3.1\n- Tokenizers 0.21.0\n", "metadata": "\"N/A\"", "depth": 2, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "HuggingFaceTB/SmolVLM2-500M-Video-Instruct" ], "base_model": 
"merve/SmolVLM2-500M-Video-Instruct-videofeedback", "base_model_relation": "base" }, { "model_id": "merve/SmolVLM2-500M-Video-Instruct-video-feedback", "gated": "False", "card": "---\nlibrary_name: transformers\nlicense: apache-2.0\nbase_model: HuggingFaceTB/SmolVLM2-500M-Video-Instruct\ntags:\n- generated_from_trainer\nmodel-index:\n- name: SmolVLM2-500M-Video-Instruct-video-feedback\n results: []\n---\n\n\n\n# SmolVLM2-500M-Video-Instruct-video-feedback\n\nThis model is a fine-tuned version of [HuggingFaceTB/SmolVLM2-500M-Video-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM2-500M-Video-Instruct) on an unknown dataset.\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Use OptimizerNames.ADAMW_HF with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 50\n- num_epochs: 1\n\n### Training results\n\n\n\n### Framework versions\n\n- Transformers 4.50.0.dev0\n- Pytorch 2.5.1+cu124\n- Datasets 3.3.1\n- Tokenizers 0.21.0\n", "metadata": "\"N/A\"", "depth": 2, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "HuggingFaceTB/SmolVLM2-500M-Video-Instruct" ], "base_model": "merve/SmolVLM2-500M-Video-Instruct-video-feedback", "base_model_relation": "base" }, { "model_id": "AeonOmniverse/SmolVLM2-500M-Video-Instruct-video-feedback", "gated": "False", "card": "---\nlibrary_name: transformers\nlicense: apache-2.0\nbase_model: HuggingFaceTB/SmolVLM2-500M-Video-Instruct\ntags:\n- 
generated_from_trainer\nmodel-index:\n- name: SmolVLM2-500M-Video-Instruct-video-feedback\n results: []\n---\n\n\n\n# SmolVLM2-500M-Video-Instruct-video-feedback\n\nThis model is a fine-tuned version of [HuggingFaceTB/SmolVLM2-500M-Video-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM2-500M-Video-Instruct) on an unknown dataset.\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 50\n- num_epochs: 1\n\n### Training results\n\n\n\n### Framework versions\n\n- Transformers 4.51.0.dev0\n- Pytorch 2.6.0+cu124\n- Datasets 3.4.1\n- Tokenizers 0.21.1\n", "metadata": "\"N/A\"", "depth": 2, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "HuggingFaceTB/SmolVLM2-500M-Video-Instruct" ], "base_model": "AeonOmniverse/SmolVLM2-500M-Video-Instruct-video-feedback", "base_model_relation": "base" }, { "model_id": "mpnikhil/SmolVLM2-500M-Video-Instruct-mpnikhil1", "gated": "False", "card": "---\nlibrary_name: transformers\nlicense: apache-2.0\nbase_model: HuggingFaceTB/SmolVLM2-500M-Video-Instruct\ntags:\n- generated_from_trainer\nmodel-index:\n- name: SmolVLM2-500M-Video-Instruct-mpnikhil1\n results: []\n---\n\n\n\n# SmolVLM2-500M-Video-Instruct-mpnikhil1\n\nThis model is a fine-tuned version of 
[HuggingFaceTB/SmolVLM2-500M-Video-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM2-500M-Video-Instruct) on an unknown dataset.\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 16\n- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 50\n- num_epochs: 1\n\n### Training results\n\n\n\n### Framework versions\n\n- Transformers 4.50.0.dev0\n- Pytorch 2.6.0\n- Datasets 3.3.2\n- Tokenizers 0.21.0\n", "metadata": "\"N/A\"", "depth": 2, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "HuggingFaceTB/SmolVLM2-500M-Video-Instruct" ], "base_model": "mpnikhil/SmolVLM2-500M-Video-Instruct-mpnikhil1", "base_model_relation": "base" }, { "model_id": "Karthick2020/SmolVLM2-500M-Video-Instruct-video-feedback", "gated": "False", "card": "---\nlibrary_name: transformers\nlicense: apache-2.0\nbase_model: HuggingFaceTB/SmolVLM2-500M-Video-Instruct\ntags:\n- generated_from_trainer\nmodel-index:\n- name: SmolVLM2-500M-Video-Instruct-video-feedback\n results: []\n---\n\n\n\n# SmolVLM2-500M-Video-Instruct-video-feedback\n\nThis model is a fine-tuned version of [HuggingFaceTB/SmolVLM2-500M-Video-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM2-500M-Video-Instruct) on an unknown dataset.\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information 
needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Use OptimizerNames.ADAMW_HF with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 50\n- num_epochs: 1\n\n### Framework versions\n\n- Transformers 4.50.0.dev0\n- Pytorch 2.5.1+cu124\n- Datasets 3.3.2\n- Tokenizers 0.21.0\n", "metadata": "\"N/A\"", "depth": 2, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "HuggingFaceTB/SmolVLM2-500M-Video-Instruct" ], "base_model": "Karthick2020/SmolVLM2-500M-Video-Instruct-video-feedback", "base_model_relation": "base" }, { "model_id": "unreservedusername/SmolVLM2-500M-Video-Instruct-video-feedback", "gated": "False", "card": "---\nlibrary_name: transformers\nlicense: apache-2.0\nbase_model: HuggingFaceTB/SmolVLM2-500M-Video-Instruct\ntags:\n- generated_from_trainer\nmodel-index:\n- name: SmolVLM2-500M-Video-Instruct-video-feedback\n results: []\n---\n\n\n\n# SmolVLM2-500M-Video-Instruct-video-feedback\n\nThis model is a fine-tuned version of [HuggingFaceTB/SmolVLM2-500M-Video-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM2-500M-Video-Instruct) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.0133\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 2\n- 
eval_batch_size: 8\n- seed: 42\n- optimizer: Use OptimizerNames.ADAMW_HF with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 50\n- num_epochs: 5\n\n### Training results\n\n\n\n### Framework versions\n\n- Transformers 4.50.0.dev0\n- Pytorch 2.6.0+cu124\n- Datasets 3.2.0\n- Tokenizers 0.21.0\n", "metadata": "\"N/A\"", "depth": 2, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "HuggingFaceTB/SmolVLM2-500M-Video-Instruct" ], "base_model": "unreservedusername/SmolVLM2-500M-Video-Instruct-video-feedback", "base_model_relation": "base" }, { "model_id": "badger-lord/SmolVLM2-500M-Video-Instruct-video-feedback", "gated": "False", "card": "---\nlibrary_name: transformers\nlicense: apache-2.0\nbase_model: HuggingFaceTB/SmolVLM2-500M-Video-Instruct\ntags:\n- generated_from_trainer\nmodel-index:\n- name: SmolVLM2-500M-Video-Instruct-video-feedback\n results: []\n---\n\n\n\n# SmolVLM2-500M-Video-Instruct-video-feedback\n\nThis model is a fine-tuned version of [HuggingFaceTB/SmolVLM2-500M-Video-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM2-500M-Video-Instruct) on an unknown dataset.\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Use OptimizerNames.ADAMW_HF with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 50\n- num_epochs: 1\n\n### Framework versions\n\n- Transformers 
4.50.0.dev0\n- Pytorch 2.5.1+cu124\n- Datasets 3.3.2\n- Tokenizers 0.21.0\n", "metadata": "\"N/A\"", "depth": 2, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "HuggingFaceTB/SmolVLM2-500M-Video-Instruct" ], "base_model": "badger-lord/SmolVLM2-500M-Video-Instruct-video-feedback", "base_model_relation": "base" }, { "model_id": "sevimcengiz/SmolVLM2-500M-Video-Instruct-video-feedback", "gated": "False", "card": "---\nlibrary_name: transformers\nlicense: apache-2.0\nbase_model: HuggingFaceTB/SmolVLM2-500M-Video-Instruct\ntags:\n- generated_from_trainer\nmodel-index:\n- name: SmolVLM2-500M-Video-Instruct-video-feedback\n results: []\n---\n\n\n\n# SmolVLM2-500M-Video-Instruct-video-feedback\n\nThis model is a fine-tuned version of [HuggingFaceTB/SmolVLM2-500M-Video-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM2-500M-Video-Instruct) on an unknown dataset.\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Use OptimizerNames.ADAMW_8BIT with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 50\n- num_epochs: 1\n\n### Training results\n\n\n\n### Framework versions\n\n- Transformers 4.51.0.dev0\n- Pytorch 2.6.0+cu124\n- Datasets 3.5.0\n- Tokenizers 0.21.1\n", "metadata": "\"N/A\"", "depth": 2, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], 
"spaces_count": 0, "parents": [ "HuggingFaceTB/SmolVLM2-500M-Video-Instruct" ], "base_model": "sevimcengiz/SmolVLM2-500M-Video-Instruct-video-feedback", "base_model_relation": "base" }, { "model_id": "Arnav0400/SmolVLM2-500M-Video-Instruct-video-feedback", "gated": "False", "card": "---\nlibrary_name: transformers\nlicense: apache-2.0\nbase_model: HuggingFaceTB/SmolVLM2-500M-Video-Instruct\ntags:\n- generated_from_trainer\nmodel-index:\n- name: SmolVLM2-500M-Video-Instruct-video-feedback\n results: []\n---\n\n\n\n# SmolVLM2-500M-Video-Instruct-video-feedback\n\nThis model is a fine-tuned version of [HuggingFaceTB/SmolVLM2-500M-Video-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM2-500M-Video-Instruct) on an unknown dataset.\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 16\n- eval_batch_size: 64\n- seed: 42\n- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 50\n- num_epochs: 1\n\n### Training results\n\n\n\n### Framework versions\n\n- Transformers 4.52.0.dev0\n- Pytorch 2.6.0+cu126\n- Datasets 3.5.0\n- Tokenizers 0.21.1\n", "metadata": "\"N/A\"", "depth": 2, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "HuggingFaceTB/SmolVLM2-500M-Video-Instruct" ], "base_model": "Arnav0400/SmolVLM2-500M-Video-Instruct-video-feedback", "base_model_relation": "base" }, { "model_id": "superenghb/SmolVLM2-500M-Video-Instruct-video-feedback", "gated": "False", "card": "---\nlibrary_name: 
transformers\nlicense: apache-2.0\nbase_model: HuggingFaceTB/SmolVLM2-500M-Video-Instruct\ntags:\n- generated_from_trainer\nmodel-index:\n- name: SmolVLM2-500M-Video-Instruct-video-feedback\n results: []\n---\n\n\n\n# SmolVLM2-500M-Video-Instruct-video-feedback\n\nThis model is a fine-tuned version of [HuggingFaceTB/SmolVLM2-500M-Video-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM2-500M-Video-Instruct) on an unknown dataset.\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Use adamw_hf with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 50\n- num_epochs: 1\n\n### Framework versions\n\n- Transformers 4.50.0.dev0\n- Pytorch 2.4.0a0+f70bd71a48.nv24.06\n- Datasets 3.5.0\n- Tokenizers 0.21.1\n", "metadata": "\"N/A\"", "depth": 2, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "HuggingFaceTB/SmolVLM2-500M-Video-Instruct" ], "base_model": "superenghb/SmolVLM2-500M-Video-Instruct-video-feedback", "base_model_relation": "base" }, { "model_id": "mosherosen/SmolVLM2-500M-Video-Instruct-video-feedback", "gated": "False", "card": "---\nlibrary_name: transformers\nlicense: apache-2.0\nbase_model: HuggingFaceTB/SmolVLM2-500M-Video-Instruct\ntags:\n- generated_from_trainer\nmodel-index:\n- name: SmolVLM2-500M-Video-Instruct-video-feedback\n results: []\n---\n\n\n\n# SmolVLM2-500M-Video-Instruct-video-feedback\n\nThis model is a fine-tuned version of 
[HuggingFaceTB/SmolVLM2-500M-Video-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM2-500M-Video-Instruct) on an unknown dataset.\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Use OptimizerNames.PAGED_ADAMW_8BIT with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 50\n- num_epochs: 1\n\n### Training results\n\n\n\n### Framework versions\n\n- Transformers 4.52.0.dev0\n- Pytorch 2.6.0+cu124\n- Datasets 3.5.0\n- Tokenizers 0.21.1\n", "metadata": "\"N/A\"", "depth": 2, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "HuggingFaceTB/SmolVLM2-500M-Video-Instruct" ], "base_model": "mosherosen/SmolVLM2-500M-Video-Instruct-video-feedback", "base_model_relation": "base" }, { "model_id": "lukesutor/SmolVLM-500M-ActivityTracking", "gated": "False", "card": "---\nbase_model: HuggingFaceTB/SmolVLM2-500M-Video-Instruct\nlibrary_name: transformers\nmodel_name: SmolVLM-500M-ActivityTracking\ntags:\n- generated_from_trainer\n- trl\n- sft\nlicence: license\n---\n\n# Model Card for SmolVLM-500M-ActivityTracking\n\nThis model is a fine-tuned version of [HuggingFaceTB/SmolVLM2-500M-Video-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM2-500M-Video-Instruct).\nIt has been trained using [TRL](https://github.com/huggingface/trl).\n\n## Quick start\n\n```python\nfrom transformers import pipeline\n\nquestion = \"If you had a time machine, but could only go to the past or the future 
once and never return, which would you choose and why?\"\ngenerator = pipeline(\"text-generation\", model=\"lukesutor/SmolVLM-500M-ActivityTracking\", device=\"cuda\")\noutput = generator([{\"role\": \"user\", \"content\": question}], max_new_tokens=128, return_full_text=False)[0]\nprint(output[\"generated_text\"])\n```\n\n## Training procedure\n\n \n\n\nThis model was trained with SFT.\n\n### Framework versions\n\n- TRL: 0.17.0\n- Transformers: 4.52.0.dev0\n- Pytorch: 2.7.0\n- Datasets: 3.6.0\n- Tokenizers: 0.21.1\n\n## Citations\n\n\n\nCite TRL as:\n \n```bibtex\n@misc{vonwerra2022trl,\n\ttitle = {{TRL: Transformer Reinforcement Learning}},\n\tauthor = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\\'e}dec},\n\tyear = 2020,\n\tjournal = {GitHub repository},\n\tpublisher = {GitHub},\n\thowpublished = {\\url{https://github.com/huggingface/trl}}\n}\n```", "metadata": "\"N/A\"", "depth": 2, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "HuggingFaceTB/SmolVLM2-500M-Video-Instruct" ], "base_model": "lukesutor/SmolVLM-500M-ActivityTracking", "base_model_relation": "base" }, { "model_id": "mlevytskyi/SmolVLM2-500M-Video-Instruct-video-feedback", "gated": "False", "card": "---\nlibrary_name: transformers\nlicense: apache-2.0\nbase_model: HuggingFaceTB/SmolVLM2-500M-Video-Instruct\ntags:\n- generated_from_trainer\nmodel-index:\n- name: SmolVLM2-500M-Video-Instruct-video-feedback\n results: []\n---\n\n\n\n# SmolVLM2-500M-Video-Instruct-video-feedback\n\nThis model is a fine-tuned version of [HuggingFaceTB/SmolVLM2-500M-Video-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM2-500M-Video-Instruct) on an unknown dataset.\n\n## Model description\n\nMore information needed\n\n## 
Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 50\n- num_epochs: 1\n\n### Training results\n\n\n\n### Framework versions\n\n- Transformers 4.52.0.dev0\n- Pytorch 2.7.0+cu126\n- Datasets 3.6.0\n- Tokenizers 0.21.1\n", "metadata": "\"N/A\"", "depth": 2, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "HuggingFaceTB/SmolVLM2-500M-Video-Instruct" ], "base_model": "mlevytskyi/SmolVLM2-500M-Video-Instruct-video-feedback", "base_model_relation": "base" }, { "model_id": "AFZAL0008/SmolVLM2-500M-Video-Instruct-video-feedback", "gated": "False", "card": "---\nlibrary_name: transformers\nlicense: apache-2.0\nbase_model: HuggingFaceTB/SmolVLM2-500M-Video-Instruct\ntags:\n- generated_from_trainer\nmodel-index:\n- name: SmolVLM2-500M-Video-Instruct-video-feedback\n results: []\n---\n\n\n\n# SmolVLM2-500M-Video-Instruct-video-feedback\n\nThis model is a fine-tuned version of [HuggingFaceTB/SmolVLM2-500M-Video-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM2-500M-Video-Instruct) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.0104\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- 
learning_rate: 0.0001\n- train_batch_size: 4\n- eval_batch_size: 4\n- seed: 42\n- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 50\n- num_epochs: 1\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss |\n|:-------------:|:-----:|:----:|:---------------:|\n| 0.0058 | 0.05 | 50 | 0.0106 |\n| 0.0056 | 0.1 | 100 | 0.0105 |\n| 0.0052 | 0.15 | 150 | 0.0123 |\n| 0.0077 | 0.2 | 200 | 0.0108 |\n| 0.0053 | 0.25 | 250 | 0.0107 |\n| 0.0062 | 0.3 | 300 | 0.0109 |\n| 0.0058 | 0.35 | 350 | 0.0104 |\n| 0.006 | 0.4 | 400 | 0.0119 |\n| 0.0053 | 0.45 | 450 | 0.0104 |\n| 0.0066 | 0.5 | 500 | 0.0111 |\n| 0.0057 | 0.55 | 550 | 0.0104 |\n| 0.0059 | 0.6 | 600 | 0.0108 |\n| 0.0053 | 0.65 | 650 | 0.0104 |\n| 0.0052 | 0.7 | 700 | 0.0103 |\n| 0.0054 | 0.75 | 750 | 0.0106 |\n| 0.0064 | 0.8 | 800 | 0.0104 |\n| 0.0056 | 0.85 | 850 | 0.0104 |\n| 0.0069 | 0.9 | 900 | 0.0104 |\n| 0.0052 | 0.95 | 950 | 0.0104 |\n| 0.0053 | 1.0 | 1000 | 0.0104 |\n\n\n### Framework versions\n\n- Transformers 4.53.0.dev0\n- Pytorch 2.6.0+cu124\n- Datasets 3.6.0\n- Tokenizers 0.21.1\n", "metadata": "\"N/A\"", "depth": 2, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "HuggingFaceTB/SmolVLM2-500M-Video-Instruct" ], "base_model": "AFZAL0008/SmolVLM2-500M-Video-Instruct-video-feedback", "base_model_relation": "base" }, { "model_id": "mlevytskyi/SmolVLM2-500M-Video-Instruct-coco-kaggle", "gated": "False", "card": "---\nlibrary_name: transformers\nlicense: apache-2.0\nbase_model: HuggingFaceTB/SmolVLM2-500M-Video-Instruct\ntags:\n- generated_from_trainer\nmodel-index:\n- name: SmolVLM2-500M-Video-Instruct-coco-kaggle\n results: []\n---\n\n\n\n# 
SmolVLM2-500M-Video-Instruct-coco-kaggle\n\nThis model is a fine-tuned version of [HuggingFaceTB/SmolVLM2-500M-Video-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM2-500M-Video-Instruct) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.3318\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 16\n- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 50\n- num_epochs: 1\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss |\n|:-------------:|:------:|:----:|:---------------:|\n| 0.397 | 0.1390 | 50 | 0.3987 |\n| 0.341 | 0.2780 | 100 | 0.3579 |\n| 0.3324 | 0.4170 | 150 | 0.3434 |\n| 0.3503 | 0.5559 | 200 | 0.3383 |\n| 0.3481 | 0.6949 | 250 | 0.3340 |\n| 0.3298 | 0.8339 | 300 | 0.3320 |\n| 0.3248 | 0.9729 | 350 | 0.3318 |\n\n\n### Framework versions\n\n- Transformers 4.52.0.dev0\n- Pytorch 2.7.0+cu126\n- Datasets 3.6.0\n- Tokenizers 0.21.1\n", "metadata": "\"N/A\"", "depth": 2, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "HuggingFaceTB/SmolVLM2-500M-Video-Instruct" ], "base_model": "mlevytskyi/SmolVLM2-500M-Video-Instruct-coco-kaggle", "base_model_relation": "base" }, { "model_id": "liuhuanjim013/SmolVLM2-500M-Video-Instruct-video-feedback", "gated": "unknown", "card": "---\nlibrary_name: transformers\nlicense: 
apache-2.0\nbase_model: HuggingFaceTB/SmolVLM2-500M-Video-Instruct\ntags:\n- generated_from_trainer\nmodel-index:\n- name: SmolVLM2-500M-Video-Instruct-video-feedback\n results: []\n---\n\n\n\n# SmolVLM2-500M-Video-Instruct-video-feedback\n\nThis model is a fine-tuned version of [HuggingFaceTB/SmolVLM2-500M-Video-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM2-500M-Video-Instruct) on an unknown dataset.\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Use OptimizerNames.PAGED_ADAMW_8BIT with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 50\n- num_epochs: 1\n\n### Training results\n\n\n\n### Framework versions\n\n- Transformers 4.53.0.dev0\n- Pytorch 2.6.0+cu124\n- Datasets 3.6.0\n- Tokenizers 0.21.1\n", "metadata": "\"N/A\"", "depth": 2, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "HuggingFaceTB/SmolVLM2-500M-Video-Instruct" ], "base_model": null, "base_model_relation": null }, { "model_id": "MRIII0917/SmolVLM2-500M-Video-Instruct-video-feedback", "gated": "False", "card": "---\nlibrary_name: transformers\nlicense: apache-2.0\nbase_model: HuggingFaceTB/SmolVLM2-500M-Video-Instruct\ntags:\n- generated_from_trainer\nmodel-index:\n- name: SmolVLM2-500M-Video-Instruct-video-feedback\n results: []\n---\n\n\n\n# SmolVLM2-500M-Video-Instruct-video-feedback\n\nThis model is a fine-tuned version of 
[HuggingFaceTB/SmolVLM2-500M-Video-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM2-500M-Video-Instruct) on an unknown dataset.\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 50\n- num_epochs: 1\n\n### Training results\n\n\n\n### Framework versions\n\n- Transformers 4.50.0.dev0\n- Pytorch 2.7.0+cu126\n- Datasets 3.6.0\n- Tokenizers 0.21.1\n", "metadata": "\"N/A\"", "depth": 2, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "HuggingFaceTB/SmolVLM2-500M-Video-Instruct" ], "base_model": "MRIII0917/SmolVLM2-500M-Video-Instruct-video-feedback", "base_model_relation": "base" }, { "model_id": "huggingFaceOfNabil/SmolVLM2-500M-Video-Instruct-dense", "gated": "unknown", "card": "---\nlibrary_name: transformers\nlicense: apache-2.0\nbase_model: HuggingFaceTB/SmolVLM2-500M-Video-Instruct\ntags:\n- generated_from_trainer\nmodel-index:\n- name: SmolVLM2-500M-Video-Instruct-dense\n results: []\n---\n\n\n\n# SmolVLM2-500M-Video-Instruct-dense\n\nThis model is a fine-tuned version of [HuggingFaceTB/SmolVLM2-500M-Video-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM2-500M-Video-Instruct) on an unknown dataset.\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training 
procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 50\n- num_epochs: 1\n\n### Training results\n\n\n\n### Framework versions\n\n- Transformers 4.52.4\n- Pytorch 2.7.1+cu126\n- Datasets 3.6.0\n- Tokenizers 0.21.1\n", "metadata": "\"N/A\"", "depth": 2, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "HuggingFaceTB/SmolVLM2-500M-Video-Instruct" ], "base_model": null, "base_model_relation": null }, { "model_id": "rainorangelemon2/smolvlm-instruct-trl-sft-ChartQA", "gated": "unknown", "card": "---\nbase_model: HuggingFaceTB/SmolVLM2-500M-Video-Instruct\nlibrary_name: transformers\nmodel_name: smolvlm-instruct-trl-sft-ChartQA\ntags:\n- generated_from_trainer\n- trl\n- sft\nlicence: license\n---\n\n# Model Card for smolvlm-instruct-trl-sft-ChartQA\n\nThis model is a fine-tuned version of [HuggingFaceTB/SmolVLM2-500M-Video-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM2-500M-Video-Instruct).\nIt has been trained using [TRL](https://github.com/huggingface/trl).\n\n## Quick start\n\n```python\nfrom transformers import pipeline\n\nquestion = \"If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?\"\ngenerator = pipeline(\"text-generation\", model=\"rainorangelemon2/smolvlm-instruct-trl-sft-ChartQA\", device=\"cuda\")\noutput = generator([{\"role\": \"user\", \"content\": question}], max_new_tokens=128, return_full_text=False)[0]\nprint(output[\"generated_text\"])\n```\n\n## Training 
procedure\n\n[\"Visualize](https://wandb.ai/rainorangelemon/huggingface/runs/d611vuql) \n\n\nThis model was trained with SFT.\n\n### Framework versions\n\n- TRL: 0.19.0\n- Transformers: 4.52.4\n- Pytorch: 2.7.1\n- Datasets: 3.6.0\n- Tokenizers: 0.21.1\n\n## Citations\n\n\n\nCite TRL as:\n \n```bibtex\n@misc{vonwerra2022trl,\n\ttitle = {{TRL: Transformer Reinforcement Learning}},\n\tauthor = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\\'e}dec},\n\tyear = 2020,\n\tjournal = {GitHub repository},\n\tpublisher = {GitHub},\n\thowpublished = {\\url{https://github.com/huggingface/trl}}\n}\n```", "metadata": "\"N/A\"", "depth": 2, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "HuggingFaceTB/SmolVLM2-500M-Video-Instruct" ], "base_model": null, "base_model_relation": null }, { "model_id": "rainorangelemon2/smolgemma-waymo-stage-1", "gated": "unknown", "card": "---\nbase_model: HuggingFaceTB/SmolVLM2-500M-Video-Instruct\nlibrary_name: transformers\nmodel_name: smolgemma-waymo-stage-1\ntags:\n- generated_from_trainer\n- sft\n- trl\nlicence: license\n---\n\n# Model Card for smolgemma-waymo-stage-1\n\nThis model is a fine-tuned version of [HuggingFaceTB/SmolVLM2-500M-Video-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM2-500M-Video-Instruct).\nIt has been trained using [TRL](https://github.com/huggingface/trl).\n\n## Quick start\n\n```python\nfrom transformers import pipeline\n\nquestion = \"If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?\"\ngenerator = pipeline(\"text-generation\", model=\"rainorangelemon2/smolgemma-waymo-stage-1\", device=\"cuda\")\noutput = generator([{\"role\": \"user\", \"content\": 
question}], max_new_tokens=128, return_full_text=False)[0]\nprint(output[\"generated_text\"])\n```\n\n## Training procedure\n\n[\"Visualize](https://wandb.ai/rainorangelemon/huggingface/runs/dyqdeiba) \n\n\nThis model was trained with SFT.\n\n### Framework versions\n\n- TRL: 0.19.0\n- Transformers: 4.52.4\n- Pytorch: 2.7.1\n- Datasets: 3.6.0\n- Tokenizers: 0.21.1\n\n## Citations\n\n\n\nCite TRL as:\n \n```bibtex\n@misc{vonwerra2022trl,\n\ttitle = {{TRL: Transformer Reinforcement Learning}},\n\tauthor = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\\'e}dec},\n\tyear = 2020,\n\tjournal = {GitHub repository},\n\tpublisher = {GitHub},\n\thowpublished = {\\url{https://github.com/huggingface/trl}}\n}\n```", "metadata": "\"N/A\"", "depth": 2, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "HuggingFaceTB/SmolVLM2-500M-Video-Instruct" ], "base_model": null, "base_model_relation": null }, { "model_id": "rainorangelemon2/smolgemma-waymo-stage-2", "gated": "unknown", "card": "---\nbase_model: HuggingFaceTB/SmolVLM2-500M-Video-Instruct\nlibrary_name: transformers\nmodel_name: smolgemma-waymo-stage-2\ntags:\n- generated_from_trainer\n- trl\n- sft\nlicence: license\n---\n\n# Model Card for smolgemma-waymo-stage-2\n\nThis model is a fine-tuned version of [HuggingFaceTB/SmolVLM2-500M-Video-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM2-500M-Video-Instruct).\nIt has been trained using [TRL](https://github.com/huggingface/trl).\n\n## Quick start\n\n```python\nfrom transformers import pipeline\n\nquestion = \"If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?\"\ngenerator = pipeline(\"text-generation\", 
model=\"rainorangelemon2/smolgemma-waymo-stage-2\", device=\"cuda\")\noutput = generator([{\"role\": \"user\", \"content\": question}], max_new_tokens=128, return_full_text=False)[0]\nprint(output[\"generated_text\"])\n```\n\n## Training procedure\n\n[Visualize](https://wandb.ai/rainorangelemon/huggingface/runs/2fs9xc0v) \n\n\nThis model was trained with SFT.\n\n### Framework versions\n\n- TRL: 0.19.0\n- Transformers: 4.52.4\n- Pytorch: 2.7.1\n- Datasets: 3.6.0\n- Tokenizers: 0.21.1\n\n## Citations\n\n\n\nCite TRL as:\n \n```bibtex\n@misc{vonwerra2022trl,\n\ttitle = {{TRL: Transformer Reinforcement Learning}},\n\tauthor = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\\'e}dec},\n\tyear = 2020,\n\tjournal = {GitHub repository},\n\tpublisher = {GitHub},\n\thowpublished = {\\url{https://github.com/huggingface/trl}}\n}\n```", "metadata": "\"N/A\"", "depth": 2, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "HuggingFaceTB/SmolVLM2-500M-Video-Instruct" ], "base_model": null, "base_model_relation": null }, { "model_id": "GKC96/SmolVLM2-500M-Video-Instruct-video-qna", "gated": "unknown", "card": "---\nlibrary_name: peft\nlicense: apache-2.0\nbase_model: HuggingFaceTB/SmolVLM2-500M-Video-Instruct\ntags:\n- generated_from_trainer\nmodel-index:\n- name: SmolVLM2-500M-Video-Instruct-video-qna\n results: []\n---\n\n\n\n# SmolVLM2-500M-Video-Instruct-video-qna\n\nThis model is a fine-tuned version of [HuggingFaceTB/SmolVLM2-500M-Video-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM2-500M-Video-Instruct) on the None dataset.\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore 
information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 50\n- num_epochs: 1\n\n### Training results\n\n\n\n### Framework versions\n\n- PEFT 0.15.2\n- Transformers 4.53.0.dev0\n- Pytorch 2.7.0+cu118\n- Datasets 3.6.0\n- Tokenizers 0.21.1", "metadata": "\"N/A\"", "depth": 2, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "HuggingFaceTB/SmolVLM2-500M-Video-Instruct" ], "base_model": null, "base_model_relation": null }, { "model_id": "xco2/smolvlm2-500M-illustration-description", "gated": "unknown", "card": "---\nbase_model: HuggingFaceTB/SmolVLM2-500M-Video-Instruct\nlibrary_name: peft\nlicense: apache-2.0\nlanguage:\n- en\npipeline_tag: image-text-to-text\n---\n\n# smolvlm2-500M-illustration-description\n\nAn illustration description generation model that provides richer image descriptions \nFine-tuned based on HuggingFaceTB/SmolVLM2-500M-Video-Instruct\n\n## Uses\nThis model can be used to generate descriptions of illustrations and engage in some simple Q&A related to illustration content\n\nSuggested prompts: \n- Write a descriptive caption for this image in a formal tone. \n- Write a descriptive caption for this image in a casual tone. \n- Analyze this image like an art critic would with information about its composition, style, symbolism, the use of color, light, any artistic movement it might belong to, etc. \n- What color is the hair of the character? 
\n- What are the characters wearing?\n\n## How to Get Started with the Model\n\n```python\nfrom transformers import AutoModelForImageTextToText, AutoProcessor\nfrom peft import PeftModel\nimport torch\n\nmodel_name = \"HuggingFaceTB/SmolVLM2-500M-Video-Instruct\"\nadapter_name = \"xco2/smolvlm2-500M-illustration-description\"\n\nmodel = AutoModelForImageTextToText.from_pretrained(\n model_name,\n torch_dtype=torch.bfloat16,\n _attn_implementation=\"flash_attention_2\"\n)\nmodel = PeftModel.from_pretrained(model, adapter_name)\n\nprocessor = AutoProcessor.from_pretrained(model_name)\n\nmodel = model.to('cuda').to(torch.bfloat16)\nmodel = model.merge_and_unload().eval()\n\nmessages = [\n {\n \"role\": \"user\",\n \"content\": [\n {\"type\": \"image\",\n \"url\": \"https://cdn.donmai.us/sample/63/e7/__castorice_honkai_and_1_more_drawn_by_yolanda__sample-63e73017612352d472b24056e501656d.jpg\"},\n {\"type\": \"text\",\n \"text\": \"Write a descriptive caption for this image in a formal tone.\"},\n ]\n },\n]\n\ninputs = processor.apply_chat_template(\n messages,\n add_generation_prompt=True,\n tokenize=True,\n return_dict=True,\n return_tensors=\"pt\",\n).to(model.device, dtype=model.dtype)\n\ngenerated_ids = model.generate(**inputs, do_sample=True, max_new_tokens=2048)\ngenerated_texts = processor.batch_decode(\n generated_ids,\n skip_special_tokens=True,\n)\nprint(\"Assistant:\", generated_texts[0].split(\"Assistant:\")[-1])\n```\n\n## Training Details\n\n### Training Data\n\nImage description data: \n1. Utilized the quantized fancyfeast/joy-caption-pre-alpha model to describe approximately 100,000 illustrations with multiple prompts. \n2. Filtered out meaningless descriptions with repetitive phrases generated by the model. \n3. Generated Q&A data related to the content of the illustrations based on the generated descriptions using qwen3-12B. 
\nA total of about 240,000 training data entries were obtained in the end.\n\n### Framework versions\n\n- PEFT 0.15.2", "metadata": "\"N/A\"", "depth": 2, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "HuggingFaceTB/SmolVLM2-500M-Video-Instruct" ], "base_model": null, "base_model_relation": null }, { "model_id": "ggml-org/SmolVLM2-500M-Video-Instruct-GGUF", "gated": "False", "card": "---\nlicense: apache-2.0\nbase_model: HuggingFaceTB/SmolVLM2-500M-Video-Instruct\n---\n\n# SmolVLM2-500M-Video-Instruct\n\nOriginal model: https://huggingface.co/HuggingFaceTB/SmolVLM2-500M-Video-Instruct\n\nFor more info, please refer to this PR: https://github.com/ggml-org/llama.cpp/pull/13050\n\n", "metadata": "\"N/A\"", "depth": 2, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "HuggingFaceTB/SmolVLM2-500M-Video-Instruct" ], "base_model": "ggml-org/SmolVLM2-500M-Video-Instruct-GGUF", "base_model_relation": "base" }, { "model_id": "mradermacher/SmolVLM2-500M-Video-Instruct-GGUF", "gated": "False", "card": "---\nbase_model: HuggingFaceTB/SmolVLM2-500M-Video-Instruct\ndatasets:\n- HuggingFaceM4/the_cauldron\n- HuggingFaceM4/Docmatix\n- lmms-lab/LLaVA-OneVision-Data\n- lmms-lab/M4-Instruct-Data\n- HuggingFaceFV/finevideo\n- MAmmoTH-VL/MAmmoTH-VL-Instruct-12M\n- lmms-lab/LLaVA-Video-178K\n- orrzohar/Video-STaR\n- Mutonix/Vript\n- TIGER-Lab/VISTA-400K\n- Enxin/MovieChat-1K_train\n- ShareGPT4Video/ShareGPT4Video\nlanguage:\n- en\nlibrary_name: transformers\nlicense: apache-2.0\nquantized_by: mradermacher\n---\n## About\n\n\n\n\n\n\nstatic quants of https://huggingface.co/HuggingFaceTB/SmolVLM2-500M-Video-Instruct\n\n\nweighted/imatrix quants are available 
at https://huggingface.co/mradermacher/SmolVLM2-500M-Video-Instruct-i1-GGUF\n## Usage\n\nIf you are unsure how to use GGUF files, refer to one of [TheBloke's\nREADMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for\nmore details, including on how to concatenate multi-part files.\n\n## Provided Quants\n\n(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)\n\n| Link | Type | Size/GB | Notes |\n|:-----|:-----|--------:|:------|\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM2-500M-Video-Instruct-GGUF/resolve/main/SmolVLM2-500M-Video-Instruct.Q2_K.gguf) | Q2_K | 0.3 | |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM2-500M-Video-Instruct-GGUF/resolve/main/SmolVLM2-500M-Video-Instruct.Q3_K_S.gguf) | Q3_K_S | 0.3 | |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM2-500M-Video-Instruct-GGUF/resolve/main/SmolVLM2-500M-Video-Instruct.IQ4_XS.gguf) | IQ4_XS | 0.4 | |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM2-500M-Video-Instruct-GGUF/resolve/main/SmolVLM2-500M-Video-Instruct.Q3_K_M.gguf) | Q3_K_M | 0.4 | lower quality |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM2-500M-Video-Instruct-GGUF/resolve/main/SmolVLM2-500M-Video-Instruct.Q3_K_L.gguf) | Q3_K_L | 0.4 | |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM2-500M-Video-Instruct-GGUF/resolve/main/SmolVLM2-500M-Video-Instruct.Q4_K_S.gguf) | Q4_K_S | 0.4 | fast, recommended |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM2-500M-Video-Instruct-GGUF/resolve/main/SmolVLM2-500M-Video-Instruct.Q4_K_M.gguf) | Q4_K_M | 0.4 | fast, recommended |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM2-500M-Video-Instruct-GGUF/resolve/main/SmolVLM2-500M-Video-Instruct.Q5_K_S.gguf) | Q5_K_S | 0.4 | |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM2-500M-Video-Instruct-GGUF/resolve/main/SmolVLM2-500M-Video-Instruct.Q5_K_M.gguf) | Q5_K_M | 0.4 | |\n| 
[GGUF](https://huggingface.co/mradermacher/SmolVLM2-500M-Video-Instruct-GGUF/resolve/main/SmolVLM2-500M-Video-Instruct.Q6_K.gguf) | Q6_K | 0.5 | very good quality |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM2-500M-Video-Instruct-GGUF/resolve/main/SmolVLM2-500M-Video-Instruct.Q8_0.gguf) | Q8_0 | 0.5 | fast, best quality |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM2-500M-Video-Instruct-GGUF/resolve/main/SmolVLM2-500M-Video-Instruct.f16.gguf) | f16 | 0.9 | 16 bpw, overkill |\n\nHere is a handy graph by ikawrakow comparing some lower-quality quant\ntypes (lower is better):\n\n![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)\n\nAnd here are Artefact2's thoughts on the matter:\nhttps://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9\n\n## FAQ / Model Request\n\nSee https://huggingface.co/mradermacher/model_requests for some answers to\nquestions you might have and/or if you want some other model quantized.\n\n## Thanks\n\nI thank my company, [nethype GmbH](https://www.nethype.de/), for letting\nme use its servers and providing upgrades to my workstation to enable\nthis work in my free time.\n\n\n", "metadata": "\"N/A\"", "depth": 2, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "HuggingFaceTB/SmolVLM2-500M-Video-Instruct" ], "base_model": "mradermacher/SmolVLM2-500M-Video-Instruct-GGUF", "base_model_relation": "base" }, { "model_id": "mradermacher/SmolVLM2-500M-Video-Instruct-i1-GGUF", "gated": "False", "card": "---\nbase_model: HuggingFaceTB/SmolVLM2-500M-Video-Instruct\ndatasets:\n- HuggingFaceM4/the_cauldron\n- HuggingFaceM4/Docmatix\n- lmms-lab/LLaVA-OneVision-Data\n- lmms-lab/M4-Instruct-Data\n- HuggingFaceFV/finevideo\n- MAmmoTH-VL/MAmmoTH-VL-Instruct-12M\n- lmms-lab/LLaVA-Video-178K\n- orrzohar/Video-STaR\n- Mutonix/Vript\n- 
TIGER-Lab/VISTA-400K\n- Enxin/MovieChat-1K_train\n- ShareGPT4Video/ShareGPT4Video\nlanguage:\n- en\nlibrary_name: transformers\nlicense: apache-2.0\nquantized_by: mradermacher\n---\n## About\n\n\n\n\n\n\nweighted/imatrix quants of https://huggingface.co/HuggingFaceTB/SmolVLM2-500M-Video-Instruct\n\n\nstatic quants are available at https://huggingface.co/mradermacher/SmolVLM2-500M-Video-Instruct-GGUF\n## Usage\n\nIf you are unsure how to use GGUF files, refer to one of [TheBloke's\nREADMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for\nmore details, including on how to concatenate multi-part files.\n\n## Provided Quants\n\n(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)\n\n| Link | Type | Size/GB | Notes |\n|:-----|:-----|--------:|:------|\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM2-500M-Video-Instruct-i1-GGUF/resolve/main/SmolVLM2-500M-Video-Instruct.i1-IQ1_S.gguf) | i1-IQ1_S | 0.3 | for the desperate |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM2-500M-Video-Instruct-i1-GGUF/resolve/main/SmolVLM2-500M-Video-Instruct.i1-IQ1_M.gguf) | i1-IQ1_M | 0.3 | mostly desperate |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM2-500M-Video-Instruct-i1-GGUF/resolve/main/SmolVLM2-500M-Video-Instruct.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.3 | |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM2-500M-Video-Instruct-i1-GGUF/resolve/main/SmolVLM2-500M-Video-Instruct.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.3 | |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM2-500M-Video-Instruct-i1-GGUF/resolve/main/SmolVLM2-500M-Video-Instruct.i1-IQ2_S.gguf) | i1-IQ2_S | 0.3 | |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM2-500M-Video-Instruct-i1-GGUF/resolve/main/SmolVLM2-500M-Video-Instruct.i1-IQ2_M.gguf) | i1-IQ2_M | 0.3 | |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM2-500M-Video-Instruct-i1-GGUF/resolve/main/SmolVLM2-500M-Video-Instruct.i1-Q2_K_S.gguf) | i1-Q2_K_S | 0.3 | 
very low quality |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM2-500M-Video-Instruct-i1-GGUF/resolve/main/SmolVLM2-500M-Video-Instruct.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.3 | lower quality |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM2-500M-Video-Instruct-i1-GGUF/resolve/main/SmolVLM2-500M-Video-Instruct.i1-IQ3_S.gguf) | i1-IQ3_S | 0.3 | beats Q3_K* |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM2-500M-Video-Instruct-i1-GGUF/resolve/main/SmolVLM2-500M-Video-Instruct.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.3 | |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM2-500M-Video-Instruct-i1-GGUF/resolve/main/SmolVLM2-500M-Video-Instruct.i1-Q2_K.gguf) | i1-Q2_K | 0.3 | IQ3_XXS probably better |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM2-500M-Video-Instruct-i1-GGUF/resolve/main/SmolVLM2-500M-Video-Instruct.i1-Q3_K_S.gguf) | i1-Q3_K_S | 0.3 | IQ3_XS probably better |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM2-500M-Video-Instruct-i1-GGUF/resolve/main/SmolVLM2-500M-Video-Instruct.i1-IQ3_M.gguf) | i1-IQ3_M | 0.4 | |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM2-500M-Video-Instruct-i1-GGUF/resolve/main/SmolVLM2-500M-Video-Instruct.i1-IQ4_XS.gguf) | i1-IQ4_XS | 0.4 | |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM2-500M-Video-Instruct-i1-GGUF/resolve/main/SmolVLM2-500M-Video-Instruct.i1-IQ4_NL.gguf) | i1-IQ4_NL | 0.4 | prefer IQ4_XS |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM2-500M-Video-Instruct-i1-GGUF/resolve/main/SmolVLM2-500M-Video-Instruct.i1-Q4_0.gguf) | i1-Q4_0 | 0.4 | fast, low quality |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM2-500M-Video-Instruct-i1-GGUF/resolve/main/SmolVLM2-500M-Video-Instruct.i1-Q3_K_M.gguf) | i1-Q3_K_M | 0.4 | IQ3_S probably better |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM2-500M-Video-Instruct-i1-GGUF/resolve/main/SmolVLM2-500M-Video-Instruct.i1-Q3_K_L.gguf) | i1-Q3_K_L | 0.4 | IQ3_M probably better |\n| 
[GGUF](https://huggingface.co/mradermacher/SmolVLM2-500M-Video-Instruct-i1-GGUF/resolve/main/SmolVLM2-500M-Video-Instruct.i1-Q4_1.gguf) | i1-Q4_1 | 0.4 | |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM2-500M-Video-Instruct-i1-GGUF/resolve/main/SmolVLM2-500M-Video-Instruct.i1-Q4_K_S.gguf) | i1-Q4_K_S | 0.4 | optimal size/speed/quality |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM2-500M-Video-Instruct-i1-GGUF/resolve/main/SmolVLM2-500M-Video-Instruct.i1-Q4_K_M.gguf) | i1-Q4_K_M | 0.4 | fast, recommended |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM2-500M-Video-Instruct-i1-GGUF/resolve/main/SmolVLM2-500M-Video-Instruct.i1-Q5_K_S.gguf) | i1-Q5_K_S | 0.4 | |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM2-500M-Video-Instruct-i1-GGUF/resolve/main/SmolVLM2-500M-Video-Instruct.i1-Q5_K_M.gguf) | i1-Q5_K_M | 0.4 | |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM2-500M-Video-Instruct-i1-GGUF/resolve/main/SmolVLM2-500M-Video-Instruct.i1-Q6_K.gguf) | i1-Q6_K | 0.5 | practically like static Q6_K |\n\nHere is a handy graph by ikawrakow comparing some lower-quality quant\ntypes (lower is better):\n\n![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)\n\nAnd here are Artefact2's thoughts on the matter:\nhttps://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9\n\n## FAQ / Model Request\n\nSee https://huggingface.co/mradermacher/model_requests for some answers to\nquestions you might have and/or if you want some other model quantized.\n\n## Thanks\n\nI thank my company, [nethype GmbH](https://www.nethype.de/), for letting\nme use its servers and providing upgrades to my workstation to enable\nthis work in my free time. 
Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.\n\n\n", "metadata": "\"N/A\"", "depth": 2, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "HuggingFaceTB/SmolVLM2-500M-Video-Instruct" ], "base_model": "mradermacher/SmolVLM2-500M-Video-Instruct-i1-GGUF", "base_model_relation": "base" }, { "model_id": "second-state/SmolVLM2-500M-Video-Instruct-GGUF", "gated": "unknown", "card": "---\nbase_model: HuggingFaceTB/SmolVLM2-500M-Video-Instruct\nlibrary_name: transformers\nlicense: apache-2.0\nmodel_creator: HuggingFaceTB\nmodel_name: SmolVLM2-500M-Video-Instruct\nquantized_by: Second State Inc.\npipeline_tag: image-text-to-text\nlanguage:\n- en\n---\n\n\n\n
\n\n\n# SmolVLM2-500M-Video-Instruct-GGUF\n\n## Original Model\n\n[HuggingFaceTB/SmolVLM2-500M-Video-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM2-500M-Video-Instruct)\n\n## Run with LlamaEdge\n\n- LlamaEdge version: [v0.21.0](https://github.com/LlamaEdge/LlamaEdge/releases/tag/0.21.0) and above\n\n- Prompt template\n\n - Prompt type: `smol-vision`\n\n - Prompt string\n\n ```text\n <|im_start|>\n User: {user_message_1}\n Assistant: {assistant_message_1}\n User: {user_message_2}\n Assistant:\n ```\n\n- Context size: `2048`\n\n- Run as LlamaEdge service\n\n ```bash\n wasmedge --dir .:. --nn-preload default:GGML:AUTO:SmolVLM2-500M-Video-Instruct-Q5_K_M.gguf \\\n llama-api-server.wasm \\\n --prompt-template smol-vision \\\n --llava-mmproj SmolVLM2-500M-Video-Instruct-mmproj-f16.gguf \\\n --model-name SmolVLM2-500M-Video-Instruct \\\n --ctx-size 2048\n ```\n\n## Quantized GGUF Models\n\n| Name | Quant method | Bits | Size | Use case |\n| ---- | ---- | ---- | ---- | ----- |\n| [SmolVLM2-500M-Video-Instruct-Q2_K.gguf](https://huggingface.co/second-state/SmolVLM2-500M-Video-Instruct-GGUF/blob/main/SmolVLM2-500M-Video-Instruct-Q2_K.gguf) | Q2_K | 2 | 245 MB| smallest, significant quality loss - not recommended for most purposes |\n| [SmolVLM2-500M-Video-Instruct-Q3_K_L.gguf](https://huggingface.co/second-state/SmolVLM2-500M-Video-Instruct-GGUF/blob/main/SmolVLM2-500M-Video-Instruct-Q3_K_L.gguf) | Q3_K_L | 3 | 273 MB| small, substantial quality loss |\n| [SmolVLM2-500M-Video-Instruct-Q3_K_M.gguf](https://huggingface.co/second-state/SmolVLM2-500M-Video-Instruct-GGUF/blob/main/SmolVLM2-500M-Video-Instruct-Q3_K_M.gguf) | Q3_K_M | 3 | 261 MB| very small, high quality loss |\n| [SmolVLM2-500M-Video-Instruct-Q3_K_S.gguf](https://huggingface.co/second-state/SmolVLM2-500M-Video-Instruct-GGUF/blob/main/SmolVLM2-500M-Video-Instruct-Q3_K_S.gguf) | Q3_K_S | 3 | 245 MB| very small, high quality loss |\n| 
[SmolVLM2-500M-Video-Instruct-Q4_0.gguf](https://huggingface.co/second-state/SmolVLM2-500M-Video-Instruct-GGUF/blob/main/SmolVLM2-500M-Video-Instruct-Q4_0.gguf) | Q4_0 | 4 | 256 MB| legacy; small, very high quality loss - prefer using Q3_K_M |\n| [SmolVLM2-500M-Video-Instruct-Q4_K_M.gguf](https://huggingface.co/second-state/SmolVLM2-500M-Video-Instruct-GGUF/blob/main/SmolVLM2-500M-Video-Instruct-Q4_K_M.gguf) | Q4_K_M | 4 | 303 MB| medium, balanced quality - recommended |\n| [SmolVLM2-500M-Video-Instruct-Q4_K_S.gguf](https://huggingface.co/second-state/SmolVLM2-500M-Video-Instruct-GGUF/blob/main/SmolVLM2-500M-Video-Instruct-Q4_K_S.gguf) | Q4_K_S | 4 | 293 MB| small, greater quality loss |\n| [SmolVLM2-500M-Video-Instruct-Q5_0.gguf](https://huggingface.co/second-state/SmolVLM2-500M-Video-Instruct-GGUF/blob/main/SmolVLM2-500M-Video-Instruct-Q5_0.gguf) | Q5_0 | 5 | 301 MB| legacy; medium, balanced quality - prefer using Q4_K_M |\n| [SmolVLM2-500M-Video-Instruct-Q5_K_M.gguf](https://huggingface.co/second-state/SmolVLM2-500M-Video-Instruct-GGUF/blob/main/SmolVLM2-500M-Video-Instruct-Q5_K_M.gguf) | Q5_K_M | 5 | 326 MB| large, very low quality loss - recommended |\n| [SmolVLM2-500M-Video-Instruct-Q5_K_S.gguf](https://huggingface.co/second-state/SmolVLM2-500M-Video-Instruct-GGUF/blob/main/SmolVLM2-500M-Video-Instruct-Q5_K_S.gguf) | Q5_K_S | 5 | 319 MB| large, low quality loss - recommended |\n| [SmolVLM2-500M-Video-Instruct-Q6_K.gguf](https://huggingface.co/second-state/SmolVLM2-500M-Video-Instruct-GGUF/blob/main/SmolVLM2-500M-Video-Instruct-Q6_K.gguf) | Q6_K | 6 | 418 MB| very large, extremely low quality loss |\n| [SmolVLM2-500M-Video-Instruct-Q8_0.gguf](https://huggingface.co/second-state/SmolVLM2-500M-Video-Instruct-GGUF/blob/main/SmolVLM2-500M-Video-Instruct-Q8_0.gguf) | Q8_0 | 8 | 437 MB| very large, extremely low quality loss - not recommended |\n| 
[SmolVLM2-500M-Video-Instruct-f16.gguf](https://huggingface.co/second-state/SmolVLM2-500M-Video-Instruct-GGUF/blob/main/SmolVLM2-500M-Video-Instruct-f16.gguf) | f16 | 16 | 820 MB| |\n| [SmolVLM2-500M-Video-Instruct-mmproj-f16.gguf](https://huggingface.co/second-state/SmolVLM2-500M-Video-Instruct-GGUF/blob/main/SmolVLM2-500M-Video-Instruct-mmproj-f16.gguf) | f16 | 16 | 199 MB| |\n\n*Quantized with llama.cpp b5501*\n", "metadata": "\"N/A\"", "depth": 2, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "HuggingFaceTB/SmolVLM2-500M-Video-Instruct" ], "base_model": null, "base_model_relation": null }, { "model_id": "gaianet/SmolVLM2-500M-Video-Instruct-GGUF", "gated": "unknown", "card": "---\nbase_model: HuggingFaceTB/SmolVLM2-500M-Video-Instruct\nlibrary_name: transformers\nlicense: apache-2.0\nmodel_creator: HuggingFaceTB\nmodel_name: SmolVLM2-500M-Video-Instruct\nquantized_by: Second State Inc.\npipeline_tag: image-text-to-text\nlanguage:\n- en\n---\n\n# SmolVLM2-500M-Video-Instruct-GGUF\n\n## Original Model\n\n[HuggingFaceTB/SmolVLM2-500M-Video-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM2-500M-Video-Instruct)\n\n## Run with Gaianet\n\n**Prompt template:**\n\nprompt template: `smol-vision`\n\n**Context size:**\n\nchat_ctx_size: `2048`\n\n**Run with GaiaNet:**\n\n- Quick start: https://docs.gaianet.ai/node-guide/quick-start\n\n- Customize your node: https://docs.gaianet.ai/node-guide/customize\n\n*Quantized with llama.cpp b5501*\n", "metadata": "\"N/A\"", "depth": 2, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "HuggingFaceTB/SmolVLM2-500M-Video-Instruct" ], "base_model": null, "base_model_relation": null }, { "model_id": 
"DevQuasar/HuggingFaceTB.SmolVLM2-500M-Video-Instruct-GGUF", "gated": "unknown", "card": "---\nbase_model:\n- HuggingFaceTB/SmolVLM2-500M-Video-Instruct\npipeline_tag: image-text-to-text\n---\n\n[](https://devquasar.com)\n\n'Make knowledge free for everyone'\n\nQuantized version of: [HuggingFaceTB/SmolVLM2-500M-Video-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM2-500M-Video-Instruct)\nBuy Me a Coffee at ko-fi.com", "metadata": "\"N/A\"", "depth": 2, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "HuggingFaceTB/SmolVLM2-500M-Video-Instruct" ], "base_model": null, "base_model_relation": null }, { "model_id": "AXERA-TECH/SmolVLM2-500M-Video-Instruct", "gated": "unknown", "card": "---\nlicense: bsd-3-clause\nlanguage:\n - en\n - zh\nbase_model:\n - HuggingFaceTB/SmolVLM2-500M-Video-Instruct\npipeline_tag: visual-question-answering\ntags:\n - HuggingFaceTB\n - SmolVLM2-500M-Video-Instruct\n---\n\n# SmolVLM2-500M-Video-Instruct-Int8\n\nThis version of SmolVLM2-500M-Video-Instruct has been converted to run on the Axera NPU using **w8a16** quantization.\n\nCompatible with Pulsar2 version: 4.0\n\n## Convert tools links:\n\nFor those who are interested in model conversion, you can try to export axmodel through the original repo:\n- https://huggingface.co/HuggingFaceTB/SmolVLM2-500M-Video-Instruct\n\n\n- [Pulsar2 Link, How to Convert LLM from Huggingface to axmodel](https://pulsar2-docs.readthedocs.io/en/latest/appendix/build_llm.html)\n\n## Support Platform\n- AX650\n - [M4N-Dock(\u7231\u82af\u6d3ePro)](https://wiki.sipeed.com/hardware/zh/maixIV/m4ndock/m4ndock.html)\n\n\n\n## How to use\n\nDownload all files from this repository to the device.\n\n**Using AX650 Board**\n\n```bash\nai@ai-bj ~/yongqiang/SmolVLM2-500M-Video-Instruct $ tree -L 1\n.\n\u251c\u2500\u2500 assets\n\u251c\u2500\u2500 
embeds\n\u251c\u2500\u2500 infer_axmodel.py\n\u251c\u2500\u2500 README.md\n\u251c\u2500\u2500 smolvlm2_axmodel\n\u251c\u2500\u2500 smolvlm2_tokenizer\n\u2514\u2500\u2500 vit_mdoel\n\n5 directories, 2 files\n```\n\n#### Inference with AX650 Host, such as M4N-Dock(\u7231\u82af\u6d3ePro) or AX650N DEMO Board\n\n**Multimodal Understanding**\n\ninput image\n\n![](assets/bee.jpg)\n\ninput text:\n\n```\nCan you describe this image?\n```\n\nlog information:\n\n```bash\nai@ai-bj ~/yongqiang/SmolVLM2-500M-Video-Instruct $ python3 infer_axmodel.py\n\ninput prompt: Can you describe this image?\n\nanswer >> The image captures a close-up view of a pink flower, prominently featuring a bumblebee. The bumblebee, with its black and yellow stripes, is in the center of the frame, its body slightly tilted to the left. The flower, with its petals fully spread, is the main subject of the image. The background is blurred, drawing focus to the flower and the bumblebee. The blurred background suggests a garden or a field, providing a sense of depth to the image. The colors in the image are vibrant, with the pink of the flower contrasting against the green of the leaves and the brown of the stems. The image does not provide enough detail to confidently identify the specific location or landmark referred to as \"sa_16743\".\n```", "metadata": "\"N/A\"", "depth": 2, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "HuggingFaceTB/SmolVLM2-500M-Video-Instruct" ], "base_model": null, "base_model_relation": null } ] }