{
"base_model": "HuggingFaceTB/SmolVLM-500M-Instruct",
"tree": [
{
"model_id": "HuggingFaceTB/SmolVLM-500M-Instruct",
"gated": "False",
"card": "---\nlibrary_name: transformers\nlicense: apache-2.0\ndatasets:\n- HuggingFaceM4/the_cauldron\n- HuggingFaceM4/Docmatix\npipeline_tag: image-text-to-text\nlanguage:\n- en\nbase_model:\n- HuggingFaceTB/SmolLM2-360M-Instruct\n- google/siglip-base-patch16-512\n---\n\n<img src=\"https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/SmolVLM_256_banner.png\" width=\"800\" height=\"auto\" alt=\"Image description\">\n\n# SmolVLM-500M\n\nSmolVLM-500M is a tiny multimodal model, member of the SmolVLM family. It accepts arbitrary sequences of image and text inputs to produce text outputs. It's designed for efficiency. SmolVLM can answer questions about images, describe visual content, or transcribe text. Its lightweight architecture makes it suitable for on-device applications while maintaining strong performance on multimodal tasks. It can run inference on one image with 1.23GB of GPU RAM.\n\n## Model Summary\n\n- **Developed by:** Hugging Face \ud83e\udd17\n- **Model type:** Multi-modal model (image+text)\n- **Language(s) (NLP):** English\n- **License:** Apache 2.0\n- **Architecture:** Based on [Idefics3](https://huggingface.co/HuggingFaceM4/Idefics3-8B-Llama3) (see technical summary)\n\n## Resources\n\n- **Demo:** [SmolVLM-256 Demo](https://huggingface.co/spaces/HuggingFaceTB/SmolVLM-256M-Demo)\n- **Blog:** [Blog post](https://huggingface.co/blog/smolvlm)\n\n## Uses\n\nSmolVLM can be used for inference on multimodal (image + text) tasks where the input comprises text queries along with one or more images. Text and images can be interleaved arbitrarily, enabling tasks like image captioning, visual question answering, and storytelling based on visual content. 
The model does not support image generation.\n\nTo fine-tune SmolVLM on a specific task, you can follow [the fine-tuning tutorial](https://github.com/huggingface/smollm/blob/main/vision/finetuning/Smol_VLM_FT.ipynb).\n\n## Evaluation\n\n\n<img src=\"https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/smoller_vlm_benchmarks.png\" alt=\"Benchmarks\" style=\"width:90%;\" />\n\n\n### Technical Summary\n\nSmolVLM leverages the lightweight SmolLM2 language model to provide a compact yet powerful multimodal experience. It introduces several changes compared to the larger SmolVLM 2.2B model:\n\n- **Image compression:** We introduce a more radical image compression compared to Idefics3 and SmolVLM-2.2B to enable the model to infer faster and use less RAM.\n- **Visual Token Encoding:** SmolVLM-256 uses 64 visual tokens to encode image patches of size 512\u00d7512. Larger images are divided into patches, each encoded separately, enhancing efficiency without compromising performance.\n- **New special tokens:** We added new special tokens to divide the subimages. This allows for more efficient tokenization of the images.\n- **Smoller vision encoder:** We went from a 400M parameter siglip vision encoder to a much smaller 93M encoder.\n- **Larger image patches:** We are now passing patches of 512x512 to the vision encoder, instead of 384x384 like the larger SmolVLM. 
This allows the information to be encoded more efficiently.\n\nMore details about the training and architecture are available in our technical report.\n\n### How to get started\n\nYou can use transformers to load, infer and fine-tune SmolVLM.\n\n```python\nimport torch\nfrom PIL import Image\nfrom transformers import AutoProcessor, AutoModelForVision2Seq\nfrom transformers.image_utils import load_image\n\nDEVICE = \"cuda\" if torch.cuda.is_available() else \"cpu\"\n\n# Load images\nimage = load_image(\"https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg\")\n\n# Initialize processor and model\nprocessor = AutoProcessor.from_pretrained(\"HuggingFaceTB/SmolVLM-500M-Instruct\")\nmodel = AutoModelForVision2Seq.from_pretrained(\n \"HuggingFaceTB/SmolVLM-500M-Instruct\",\n torch_dtype=torch.bfloat16,\n _attn_implementation=\"flash_attention_2\" if DEVICE == \"cuda\" else \"eager\",\n).to(DEVICE)\n\n# Create input messages\nmessages = [\n {\n \"role\": \"user\",\n \"content\": [\n {\"type\": \"image\"},\n {\"type\": \"text\", \"text\": \"Can you describe this image?\"}\n ]\n },\n]\n\n# Prepare inputs\nprompt = processor.apply_chat_template(messages, add_generation_prompt=True)\ninputs = processor(text=prompt, images=[image], return_tensors=\"pt\")\ninputs = inputs.to(DEVICE)\n\n# Generate outputs\ngenerated_ids = model.generate(**inputs, max_new_tokens=500)\ngenerated_texts = processor.batch_decode(\n generated_ids,\n skip_special_tokens=True,\n)\n\nprint(generated_texts[0])\n\"\"\"\nAssistant: The image depicts a cityscape featuring a prominent landmark, the Statue of Liberty, prominently positioned on Liberty Island. The statue is a green, humanoid figure with a crown atop its head and is situated on a small island surrounded by water. The statue is characterized by its large, detailed structure, with a statue of a woman holding a torch above her head and a tablet in her left hand. 
The statue is surrounded by a small, rocky island, which is partially visible in the foreground.\nIn the background, the cityscape is dominated by numerous high-rise buildings, which are densely packed and vary in height. The buildings are primarily made of glass and steel, reflecting the sunlight and creating a bright, urban skyline. The skyline is filled with various architectural styles, including modern skyscrapers and older, more traditional buildings.\nThe water surrounding the island is calm, with a few small boats visible, indicating that the area is likely a popular tourist destination. The water is a deep blue, suggesting that it is a large body of water, possibly a river or a large lake.\nIn the foreground, there is a small strip of land with trees and grass, which adds a touch of natural beauty to the urban landscape. The trees are green, indicating that it is likely spring or summer.\nThe image captures a moment of tranquility and reflection, as the statue and the cityscape come together to create a harmonious and picturesque scene. The statue's presence in the foreground draws attention to the city's grandeur, while the calm water and natural elements in the background provide a sense of peace and serenity.\nIn summary, the image showcases the Statue of Liberty, a symbol of freedom and democracy, set against a backdrop of a bustling cityscape. The statue is a prominent and iconic representation of human achievement, while the cityscape is a testament to human ingenuity and progress. 
The image captures the beauty and complexity of urban life, with the statue serving as a symbol of hope and freedom, while the cityscape provides a glimpse into the modern world.\n\"\"\"\n```\n\n\n### Model optimizations\n\n**Precision**: For better performance, load and run the model in half-precision (`torch.bfloat16`) if your hardware supports it.\n\n```python\nfrom transformers import AutoModelForVision2Seq\nimport torch\n\nmodel = AutoModelForVision2Seq.from_pretrained(\n \"HuggingFaceTB/SmolVLM-Instruct\",\n torch_dtype=torch.bfloat16\n).to(\"cuda\")\n```\n\nYou can also load SmolVLM with 4/8-bit quantization using bitsandbytes, torchao or Quanto. Refer to [this page](https://huggingface.co/docs/transformers/en/main_classes/quantization) for other options.\n\n```python\nfrom transformers import AutoModelForVision2Seq, BitsAndBytesConfig\nimport torch\n\nquantization_config = BitsAndBytesConfig(load_in_8bit=True)\nmodel = AutoModelForVision2Seq.from_pretrained(\n \"HuggingFaceTB/SmolVLM-Instruct\",\n quantization_config=quantization_config,\n)\n```\n\n**Vision Encoder Efficiency**: Adjust the image resolution by setting `size={\"longest_edge\": N*512}` when initializing the processor, where N is your desired value. The default `N=4` works well, which results in input images of\nsize 2048\u00d72048. Decreasing N can save GPU memory and is appropriate for lower-resolution images. This is also useful if you want to fine-tune on videos.\n\n\n## Misuse and Out-of-scope Use\n\nSmolVLM is not intended for high-stakes scenarios or critical decision-making processes that affect an individual's well-being or livelihood. The model may produce content that appears factual but may not be accurate. 
Misuse includes, but is not limited to:\n\n- Prohibited Uses:\n - Evaluating or scoring individuals (e.g., in employment, education, credit)\n - Critical automated decision-making\n - Generating unreliable factual content\n- Malicious Activities:\n - Spam generation\n - Disinformation campaigns\n - Harassment or abuse\n - Unauthorized surveillance\n\n### License\n\nSmolVLM is built upon [SigLIP](https://huggingface.co/google/siglip-base-patch16-512) as image encoder and [SmolLM2](https://huggingface.co/HuggingFaceTB/SmolLM2-360M-Instruct) for text decoder part.\n\nWe release the SmolVLM checkpoints under the Apache 2.0 license.\n\n## Training Details\n\n### Training Data\n\nThe training data comes from [The Cauldron](https://huggingface.co/datasets/HuggingFaceM4/the_cauldron) and [Docmatix](https://huggingface.co/datasets/HuggingFaceM4/Docmatix) datasets, with emphasis on document understanding (25%) and image captioning (18%), while maintaining balanced coverage across other crucial capabilities like visual reasoning, chart comprehension, and general instruction following.\n<img src=\"https://huggingface.co/HuggingFaceTB/SmolVLM-Instruct/resolve/main/mixture_the_cauldron.png\" alt=\"Example Image\" style=\"width:90%;\" />\n\n# Citation information\nYou can cite us in the following way:\n```bibtex\n@article{marafioti2025smolvlm,\n title={SmolVLM: Redefining small and efficient multimodal models}, \n author={Andr\u00e9s Marafioti and Orr Zohar and Miquel Farr\u00e9 and Merve Noyan and Elie Bakouch and Pedro Cuenca and Cyril Zakka and Loubna Ben Allal and Anton Lozhkov and Nouamane Tazi and Vaibhav Srivastav and Joshua Lochner and Hugo Larcher and Mathieu Morlon and Lewis Tunstall and Leandro von Werra and Thomas Wolf},\n journal={arXiv preprint arXiv:2504.05299},\n year={2025}\n}\n```\n\n",
"metadata": "\"N/A\"",
"depth": 0,
"children": [
"vidore/ColSmolVLM-Instruct-500M-base",
"carles-mzms/tyrynzysmegalodon",
"lvxiangyu11/smolvlm-instruct-trl-sft-ChartQA",
"hasan-farooq/SmolVLM-500M-Instruct-vqav2",
"hasan-farooq/SmolVLM-500M-Instruct-vqav3",
"hasan-farooq/SmolVLM-500M-Instruct-med-vqav1",
"aadhibest/smolvlm-500m-instruct-13-03-2025",
"chiaky21/SmolVLM-500M-Instruct-vqav2",
"racineai/Flantier-SmolVLM-500M-dse",
"Soundappan123/smolvlm-dpo",
"BIOMEDICA/BMC-smolvlm1-500M",
"Pantelismak/smolvlm_cxr",
"JoseferEins/SmolVLM-500M-Instruct-fer0"
],
"children_count": 13,
"adapters": [
"VishalD1234/SmolVLM-500M-Instruct-vqav2",
"sasikaran04/SmolVLM-500M-Instruct-vqav2",
"Hirai-Labs/FT-SmolVLM-500M-Instruct-ALPR",
"revitotan/FT-SmolVLM-500M-Instruct-Helmet",
"dkhanh/SmolVLM-500M-Instruct-earths",
"dkhanh/SmolVLM-500M-Instruct-earth-v0",
"dkhanh/SmolVLM-500M-Instruct-earth-v1",
"dkhanh/SmolVLM-500M-Instruct-earths-v1",
"samlucas/smolvlm_500m-parking_occupancy-PKLot-instruct-without-context-without-expert",
"samlucas/smolvlm_500m-parking_occupancy-PKLot-instruct-with-context-without-expert",
"samlucas/smolvlm_500m-parking_occupancy-PKLot-instruct-with-context-with-expert",
"samlucas/smolvlm_500m-parking_occupancy-PKLot-instruct-without-context-with-expert",
"samlucas/smolvlm_500m-parking_occupancy-CNRPark-instruct-with-context-with-expert",
"samlucas/smolvlm_500m-parking_occupancy-CNRPark-instruct-with-context-without-expert",
"samlucas/smolvlm_500m-parking_occupancy-CNRPark-instruct-without-context-with-expert",
"samlucas/smolvlm_500m-parking_occupancy-CNRPark-instruct-without-context-without-expert",
"bilal1998/SmolVLM-500M-Instruct-vqav2"
],
"adapters_count": 17,
"quantized": [
"HuggingFaceTB/SmolVLM2-500M-Video-Instruct",
"moot20/SmolVLM-500M-Instruct-MLX-4bits",
"moot20/SmolVLM-500M-Instruct-MLX-6bits",
"moot20/SmolVLM-500M-Instruct-MLX-8bits",
"moot20/SmolVLM-500M-Instruct-MLX",
"ggml-org/SmolVLM-500M-Instruct-GGUF",
"mradermacher/SmolVLM-500M-Instruct-GGUF",
"mradermacher/SmolVLM-500M-Instruct-i1-GGUF",
"VyoJ/SmolVLM-500M-Instruct-be-GGUF"
],
"quantized_count": 9,
"merges": [],
"merges_count": 0,
"total_derivatives": 39,
"spaces": [],
"spaces_count": 0,
"parents": [],
"base_model": "HuggingFaceTB/SmolVLM-500M-Instruct",
"base_model_relation": "base"
},
{
"model_id": "vidore/ColSmolVLM-Instruct-500M-base",
"gated": "False",
"card": "---\nlicense: mit\nlibrary_name: colpali\nbase_model: HuggingFaceTB/SmolVLM-500M-Instruct\nlanguage:\n- en\ntags:\n- colsmolvlm\n- vidore-experimental\n- vidore\n---\n# ColSmolVLM-500M-Instruct: Visual Retriever based on SmolVLM-500M-Instruct with ColBERT strategy\n\nColSmolVLM is a model based on a novel model architecture and training strategy based on Vision Language Models (VLMs) to efficiently index documents from their visual features.\nIt is a SmolVLM extension that generates [ColBERT](https://arxiv.org/abs/2004.12832)- style multi-vector representations of text and images. \nIt was introduced in the paper [ColPali: Efficient Document Retrieval with Vision Language Models](https://arxiv.org/abs/2407.01449) and first released in [this repository](https://github.com/ManuelFay/colpali)\n\nThis version is the untrained base version to guarantee deterministic projection layer initialization.\n\n\n## License\n\nColSmol's vision language backbone model (ColSmolVLM) is under `apache2.0` license. The adapters attached to the model are under MIT license.\n\n## Contact\n\n- Manuel Faysse: manuel.faysse@illuin.tech\n- Hugues Sibille: hugues.sibille@illuin.tech\n- Tony Wu: tony.wu@illuin.tech\n\n## Citation\n\nIf you use any datasets or models from this organization in your research, please cite the original dataset as follows:\n\n```bibtex\n@misc{faysse2024colpaliefficientdocumentretrieval,\n title={ColPali: Efficient Document Retrieval with Vision Language Models}, \n author={Manuel Faysse and Hugues Sibille and Tony Wu and Bilel Omrani and Gautier Viaud and C\u00e9line Hudelot and Pierre Colombo},\n year={2024},\n eprint={2407.01449},\n archivePrefix={arXiv},\n primaryClass={cs.IR},\n url={https://arxiv.org/abs/2407.01449}, \n}\n```",
"metadata": "\"N/A\"",
"depth": 1,
"children": [
"vidore/colSmol-500M",
"thoddnn/colSmol-500M",
"ingenio/IndoColSmol-500M"
],
"children_count": 3,
"adapters": [
"Oysiyl/colSmol-500M_ufo"
],
"adapters_count": 1,
"quantized": [],
"quantized_count": 0,
"merges": [],
"merges_count": 0,
"total_derivatives": 4,
"spaces": [],
"spaces_count": 0,
"parents": [
"HuggingFaceTB/SmolVLM-500M-Instruct"
],
"base_model": "vidore/ColSmolVLM-Instruct-500M-base",
"base_model_relation": "base"
},
{
"model_id": "carles-mzms/tyrynzysmegalodon",
"gated": "False",
"card": "---\nlicense: cc-by-3.0\nlanguage:\n- es\n- pa\n- en\n- ca\n- fr\n- it\nbase_model:\n- HuggingFaceTB/SmolVLM-500M-Instruct\n---",
"metadata": "\"N/A\"",
"depth": 1,
"children": [],
"children_count": 0,
"adapters": [],
"adapters_count": 0,
"quantized": [],
"quantized_count": 0,
"merges": [],
"merges_count": 0,
"total_derivatives": 0,
"spaces": [],
"spaces_count": 0,
"parents": [
"HuggingFaceTB/SmolVLM-500M-Instruct"
],
"base_model": "carles-mzms/tyrynzysmegalodon",
"base_model_relation": "base"
},
{
"model_id": "lvxiangyu11/smolvlm-instruct-trl-sft-ChartQA",
"gated": "False",
"card": "---\nbase_model: HuggingFaceTB/SmolVLM-500M-Instruct\nlibrary_name: transformers\nmodel_name: smolvlm-instruct-trl-sft-ChartQA\ntags:\n- generated_from_trainer\n- trl\n- sft\nlicence: license\n---\n\n# Model Card for smolvlm-instruct-trl-sft-ChartQA\n\nThis model is a fine-tuned version of [HuggingFaceTB/SmolVLM-500M-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM-500M-Instruct).\nIt has been trained using [TRL](https://github.com/huggingface/trl).\n\n## Quick start\n\n```python\nfrom transformers import pipeline\n\nquestion = \"If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?\"\ngenerator = pipeline(\"text-generation\", model=\"lvxiangyu11/smolvlm-instruct-trl-sft-ChartQA\", device=\"cuda\")\noutput = generator([{\"role\": \"user\", \"content\": question}], max_new_tokens=128, return_full_text=False)[0]\nprint(output[\"generated_text\"])\n```\n\n## Training procedure\n\n \n\n\nThis model was trained with SFT.\n\n### Framework versions\n\n- TRL: 0.14.0\n- Transformers: 4.48.2\n- Pytorch: 2.5.1+cu121\n- Datasets: 3.2.0\n- Tokenizers: 0.21.0\n\n## Citations\n\n\n\nCite TRL as:\n \n```bibtex\n@misc{vonwerra2022trl,\n\ttitle = {{TRL: Transformer Reinforcement Learning}},\n\tauthor = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou\u00e9dec},\n\tyear = 2020,\n\tjournal = {GitHub repository},\n\tpublisher = {GitHub},\n\thowpublished = {\\url{https://github.com/huggingface/trl}}\n}\n```",
"metadata": "\"N/A\"",
"depth": 1,
"children": [],
"children_count": 0,
"adapters": [],
"adapters_count": 0,
"quantized": [],
"quantized_count": 0,
"merges": [],
"merges_count": 0,
"total_derivatives": 0,
"spaces": [],
"spaces_count": 0,
"parents": [
"HuggingFaceTB/SmolVLM-500M-Instruct"
],
"base_model": "lvxiangyu11/smolvlm-instruct-trl-sft-ChartQA",
"base_model_relation": "base"
},
{
"model_id": "hasan-farooq/SmolVLM-500M-Instruct-vqav2",
"gated": "False",
"card": "---\nlibrary_name: transformers\nlicense: apache-2.0\nbase_model: HuggingFaceTB/SmolVLM-500M-Instruct\ntags:\n- generated_from_trainer\nmodel-index:\n- name: SmolVLM-500M-Instruct-vqav2\n results: []\n---\n\n<!-- This model card has been generated automatically according to the information the Trainer had access to. You\nshould probably proofread and complete it, then remove this comment. -->\n\n# SmolVLM-500M-Instruct-vqav2\n\nThis model is a fine-tuned version of [HuggingFaceTB/SmolVLM-500M-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM-500M-Instruct) on an unknown dataset.\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 16\n- optimizer: Use OptimizerNames.ADAMW_HF with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 50\n- num_epochs: 3\n\n### Training results\n\n\n\n### Framework versions\n\n- Transformers 4.48.2\n- Pytorch 2.5.1+cu124\n- Datasets 3.2.0\n- Tokenizers 0.21.0\n",
"metadata": "\"N/A\"",
"depth": 1,
"children": [],
"children_count": 0,
"adapters": [],
"adapters_count": 0,
"quantized": [],
"quantized_count": 0,
"merges": [],
"merges_count": 0,
"total_derivatives": 0,
"spaces": [],
"spaces_count": 0,
"parents": [
"HuggingFaceTB/SmolVLM-500M-Instruct"
],
"base_model": "hasan-farooq/SmolVLM-500M-Instruct-vqav2",
"base_model_relation": "base"
},
{
"model_id": "hasan-farooq/SmolVLM-500M-Instruct-vqav3",
"gated": "False",
"card": "---\nlibrary_name: transformers\nlicense: apache-2.0\nbase_model: HuggingFaceTB/SmolVLM-500M-Instruct\ntags:\n- generated_from_trainer\nmodel-index:\n- name: SmolVLM-500M-Instruct-vqav3\n results: []\n---\n\n<!-- This model card has been generated automatically according to the information the Trainer had access to. You\nshould probably proofread and complete it, then remove this comment. -->\n\n# SmolVLM-500M-Instruct-vqav3\n\nThis model is a fine-tuned version of [HuggingFaceTB/SmolVLM-500M-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM-500M-Instruct) on an unknown dataset.\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Use OptimizerNames.ADAMW_HF with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 50\n- num_epochs: 10\n\n### Training results\n\n\n\n### Framework versions\n\n- Transformers 4.48.2\n- Pytorch 2.5.1+cu124\n- Datasets 3.2.0\n- Tokenizers 0.21.0\n",
"metadata": "\"N/A\"",
"depth": 1,
"children": [],
"children_count": 0,
"adapters": [],
"adapters_count": 0,
"quantized": [],
"quantized_count": 0,
"merges": [],
"merges_count": 0,
"total_derivatives": 0,
"spaces": [],
"spaces_count": 0,
"parents": [
"HuggingFaceTB/SmolVLM-500M-Instruct"
],
"base_model": "hasan-farooq/SmolVLM-500M-Instruct-vqav3",
"base_model_relation": "base"
},
{
"model_id": "hasan-farooq/SmolVLM-500M-Instruct-med-vqav1",
"gated": "False",
"card": "---\nlibrary_name: transformers\nlicense: apache-2.0\nbase_model: HuggingFaceTB/SmolVLM-500M-Instruct\ntags:\n- generated_from_trainer\nmodel-index:\n- name: SmolVLM-500M-Instruct-med-vqav1\n results: []\n---\n\n<!-- This model card has been generated automatically according to the information the Trainer had access to. You\nshould probably proofread and complete it, then remove this comment. -->\n\n# SmolVLM-500M-Instruct-med-vqav1\n\nThis model is a fine-tuned version of [HuggingFaceTB/SmolVLM-500M-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM-500M-Instruct) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.3924\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 8\n- optimizer: Use OptimizerNames.ADAMW_HF with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 50\n- num_epochs: 2\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss |\n|:-------------:|:------:|:----:|:---------------:|\n| 1.0375 | 0.4454 | 100 | 0.4305 |\n| 0.4064 | 0.8909 | 200 | 0.4024 |\n| 0.3378 | 1.3341 | 300 | 0.3941 |\n| 0.3348 | 1.7795 | 400 | 0.3924 |\n\n\n### Framework versions\n\n- Transformers 4.48.2\n- Pytorch 2.5.1+cu124\n- Datasets 3.3.0\n- Tokenizers 0.21.0\n",
"metadata": "\"N/A\"",
"depth": 1,
"children": [],
"children_count": 0,
"adapters": [],
"adapters_count": 0,
"quantized": [],
"quantized_count": 0,
"merges": [],
"merges_count": 0,
"total_derivatives": 0,
"spaces": [],
"spaces_count": 0,
"parents": [
"HuggingFaceTB/SmolVLM-500M-Instruct"
],
"base_model": "hasan-farooq/SmolVLM-500M-Instruct-med-vqav1",
"base_model_relation": "base"
},
{
"model_id": "aadhibest/smolvlm-500m-instruct-13-03-2025",
"gated": "False",
"card": "---\nbase_model: HuggingFaceTB/SmolVLM-500M-Instruct\nlibrary_name: transformers\nmodel_name: smolvlm-500m-instruct-13-03-2025\ntags:\n- generated_from_trainer\n- trl\n- sft\nlicence: license\n---\n\n# Model Card for smolvlm-500m-instruct-13-03-2025\n\nThis model is a fine-tuned version of [HuggingFaceTB/SmolVLM-500M-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM-500M-Instruct).\nIt has been trained using [TRL](https://github.com/huggingface/trl).\n\n## Quick start\n\n```python\nfrom transformers import pipeline\n\nquestion = \"If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?\"\ngenerator = pipeline(\"text-generation\", model=\"aadhibest/smolvlm-500m-instruct-13-03-2025\", device=\"cuda\")\noutput = generator([{\"role\": \"user\", \"content\": question}], max_new_tokens=128, return_full_text=False)[0]\nprint(output[\"generated_text\"])\n```\n\n## Training procedure\n\n \n\n\nThis model was trained with SFT.\n\n### Framework versions\n\n- TRL: 0.15.0\n- Transformers: 4.49.0\n- Pytorch: 2.6.0+cu118\n- Datasets: 3.3.1\n- Tokenizers: 0.21.0\n\n## Citations\n\n\n\nCite TRL as:\n \n```bibtex\n@misc{vonwerra2022trl,\n\ttitle = {{TRL: Transformer Reinforcement Learning}},\n\tauthor = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou\u00e9dec},\n\tyear = 2020,\n\tjournal = {GitHub repository},\n\tpublisher = {GitHub},\n\thowpublished = {\\url{https://github.com/huggingface/trl}}\n}\n```",
"metadata": "\"N/A\"",
"depth": 1,
"children": [],
"children_count": 0,
"adapters": [],
"adapters_count": 0,
"quantized": [],
"quantized_count": 0,
"merges": [],
"merges_count": 0,
"total_derivatives": 0,
"spaces": [],
"spaces_count": 0,
"parents": [
"HuggingFaceTB/SmolVLM-500M-Instruct"
],
"base_model": "aadhibest/smolvlm-500m-instruct-13-03",
"base_model_relation": "finetune"
},
{
"model_id": "chiaky21/SmolVLM-500M-Instruct-vqav2",
"gated": "False",
"card": "---\nlibrary_name: transformers\nlicense: apache-2.0\nbase_model: HuggingFaceTB/SmolVLM-500M-Instruct\ntags:\n- generated_from_trainer\nmodel-index:\n- name: SmolVLM-500M-Instruct-vqav2\n results: []\n---\n\n<!-- This model card has been generated automatically according to the information the Trainer had access to. You\nshould probably proofread and complete it, then remove this comment. -->\n\n# SmolVLM-500M-Instruct-vqav2\n\nThis model is a fine-tuned version of [HuggingFaceTB/SmolVLM-500M-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM-500M-Instruct) on an unknown dataset.\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 16\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Use OptimizerNames.PAGED_ADAMW_8BIT with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 25\n- num_epochs: 5\n\n### Framework versions\n\n- Transformers 4.49.0\n- Pytorch 2.4.1+cu124\n- Datasets 3.4.1\n- Tokenizers 0.21.1\n",
"metadata": "\"N/A\"",
"depth": 1,
"children": [],
"children_count": 0,
"adapters": [],
"adapters_count": 0,
"quantized": [],
"quantized_count": 0,
"merges": [],
"merges_count": 0,
"total_derivatives": 0,
"spaces": [],
"spaces_count": 0,
"parents": [
"HuggingFaceTB/SmolVLM-500M-Instruct"
],
"base_model": "chiaky21/SmolVLM-500M-Instruct-vqav2",
"base_model_relation": "base"
},
{
"model_id": "racineai/Flantier-SmolVLM-500M-dse",
"gated": "False",
"card": "---\nlicense: apache-2.0\ndatasets:\n- racineai/OGC_2_vdr-visRAG-colpali\nlanguage:\n- fr\n- en\n- de\n- es\n- it\nbase_model:\n- HuggingFaceTB/SmolVLM-500M-Instruct\n---\n\n# Flantier-SmolVLM-500M-dse\n\nA lightweight multimodal vision-language model specialized for technical document retrieval.\n\n## Overview\n\nFlantier-SmolVLM-500M-dse (Document Screenshot Embedding) is a 500M parameter vision-language model designed for efficient retrieval of technical documentation. It directly encodes document screenshots into embeddings, preserving all information including text, images, and layout without requiring separate content extraction.\n\n## Key Features\n\n- **Efficient Retrieval**: Generates document and query embeddings for semantic similarity search\n- **Multimodal Understanding**: Processes text, diagrams, charts, and tables in their original layout\n- **Lightweight Architecture**: Only 500M parameters, runs on consumer GPUs\n- **No Preprocessing Required**: Directly works with document screenshots\n\n## Installation\n\n```bash\npip install transformers accelerate pillow\n```\n\n## Usage Example\n\n```python\nfrom PIL import Image\nimport torch\nfrom transformers import AutoProcessor, AutoModelForVision2Seq\n\n# Load model and processor\nprocessor = AutoProcessor.from_pretrained(\"racineai/Flantier-SmolVLM-500M-dse\")\nmodel = AutoModelForVision2Seq.from_pretrained(\n \"racineai/Flantier-SmolVLM-500M-dse\",\n torch_dtype=torch.bfloat16,\n device_map=\"auto\"\n)\n\n# Load document image\ndocument_image = Image.open(\"technical_document.jpg\")\n\n# Process for document embedding\ndoc_messages = [\n {\n \"role\": \"user\",\n \"content\": [\n {\"type\": \"image\"},\n {\"type\": \"text\", \"text\": \"What is shown in this image?\"}\n ]\n },\n]\ndoc_prompt = processor.apply_chat_template(doc_messages, add_generation_prompt=True)\ndoc_inputs = processor(text=doc_prompt, images=[document_image], return_tensors=\"pt\").to(model.device)\n\n# Generate document embedding\nwith torch.no_grad():\n doc_outputs = model(**doc_inputs, output_hidden_states=True, return_dict=True)\n doc_embedding = doc_outputs.hidden_states[-1][:, -1] # Last token embedding\n doc_embedding = torch.nn.functional.normalize(doc_embedding, p=2, dim=-1)\n\n# Process query embedding\nquery = \"What are the specifications of this component?\"\nquery_messages = [\n {\n \"role\": \"user\",\n \"content\": [\n {\"type\": \"text\", \"text\": query}\n ]\n },\n]\nquery_prompt = processor.apply_chat_template(query_messages, add_generation_prompt=True)\nquery_inputs = processor(text=query_prompt, return_tensors=\"pt\").to(model.device)\n\n# Generate query embedding\nwith torch.no_grad():\n query_outputs = model(**query_inputs, output_hidden_states=True, return_dict=True)\n query_embedding = query_outputs.hidden_states[-1][:, -1] # Last token embedding\n query_embedding = torch.nn.functional.normalize(query_embedding, p=2, dim=-1)\n\n# Calculate similarity\nsimilarity = torch.nn.functional.cosine_similarity(query_embedding, doc_embedding)\nprint(f\"Similarity score: {similarity.item():.4f}\")\n```\n\n## Applications\n\n- **Technical Document Retrieval**: Find relevant documents based on technical queries\n- **Technical Support Systems**: Match user questions to relevant documentation\n- **Engineering Knowledge Management**: Index and search technical specifications, diagrams, and reports\n\n## Training Methodology\n\nThis model was trained using the Document Screenshot Embedding (DSE) approach, which treats document screenshots as a unified input format. This eliminates the need for content extraction preprocessing while preserving all visual and textual information in documents.\n\n## Citation\n\n```\n@misc{flantier-smolvlm-dse,\n author = {racine.ai},\n title = {Flantier-SmolVLM-500M-dse: A Lightweight Document Screenshot Embedding Model},\n year = {2025},\n publisher = {Hugging Face},\n url = {https://huggingface.co/racineai/Flantier-SmolVLM-500M-dse}\n}\n```\n\n## License\n\nThis model is released under the Apache 2.0 license.",
"metadata": "\"N/A\"",
"depth": 1,
"children": [],
"children_count": 0,
"adapters": [],
"adapters_count": 0,
"quantized": [],
"quantized_count": 0,
"merges": [],
"merges_count": 0,
"total_derivatives": 0,
"spaces": [],
"spaces_count": 0,
"parents": [
"HuggingFaceTB/SmolVLM-500M-Instruct"
],
"base_model": "racineai/Flantier-SmolVLM-500M-dse",
"base_model_relation": "base"
},
{
"model_id": "Soundappan123/smolvlm-dpo",
"gated": "False",
"card": "---\nbase_model: HuggingFaceTB/SmolVLM-500M-Instruct\nlibrary_name: transformers\nmodel_name: smolvlm-dpo\ntags:\n- generated_from_trainer\n- trl\n- dpo\nlicence: license\n---\n\n# Model Card for smolvlm-dpo\n\nThis model is a fine-tuned version of [HuggingFaceTB/SmolVLM-500M-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM-500M-Instruct).\nIt has been trained using [TRL](https://github.com/huggingface/trl).\n\n## Quick start\n\n```python\nfrom transformers import pipeline\n\nquestion = \"If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?\"\ngenerator = pipeline(\"text-generation\", model=\"Soundappan123/smolvlm-dpo\", device=\"cuda\")\noutput = generator([{\"role\": \"user\", \"content\": question}], max_new_tokens=128, return_full_text=False)[0]\nprint(output[\"generated_text\"])\n```\n\n## Training procedure\n\n \n\n\nThis model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).\n\n### Framework versions\n\n- TRL: 0.17.0\n- Transformers: 4.51.3\n- Pytorch: 2.7.0\n- Datasets: 3.5.1\n- Tokenizers: 0.21.1\n\n## Citations\n\nCite DPO as:\n\n```bibtex\n@inproceedings{rafailov2023direct,\n title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},\n author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},\n year = 2023,\n booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},\n url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},\n editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},\n}\n```\n\nCite TRL as:\n \n```bibtex\n@misc{vonwerra2022trl,\n\ttitle = {{TRL: Transformer Reinforcement Learning}},\n\tauthor = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\\'e}dec},\n\tyear = 2020,\n\tjournal = {GitHub repository},\n\tpublisher = {GitHub},\n\thowpublished = {\\url{https://github.com/huggingface/trl}}\n}\n```",
"metadata": "\"N/A\"",
"depth": 1,
"children": [],
"children_count": 0,
"adapters": [],
"adapters_count": 0,
"quantized": [],
"quantized_count": 0,
"merges": [],
"merges_count": 0,
"total_derivatives": 0,
"spaces": [],
"spaces_count": 0,
"parents": [
"HuggingFaceTB/SmolVLM-500M-Instruct"
],
"base_model": "Soundappan123/smolvlm-dpo",
"base_model_relation": "base"
},
{
"model_id": "BIOMEDICA/BMC-smolvlm1-500M",
"gated": "False",
"card": "---\ndatasets:\n- BIOMEDICA/biomedica_webdataset_24M\nlanguage:\n- en\nbase_model:\n- HuggingFaceTB/SmolVLM-Instruct-500M\n---\n\n\n<div align=\"center\" style=\"margin-bottom: -20px;\">\n <img src=\"https://raw.githubusercontent.com/minwoosun/biomedica-etl/refs/heads/main/media/Biomedica-Isologo-sin-espacio-2025.png\" alt=\"Pull Figure\" width=\"300\" />\n</div>\n\n\n\nBMC-SmolVLM1 is a family of lightweight biomedical vision-language models (ranging from 256M to 2.2B parameters) based on SmolVLM. These models are designed for efficient multimodal understanding in the biomedical domain. Please ensure you are using a GPU runtime to run this notebook.\n\n\nColab Tutorial: [](https://colab.research.google.com/drive/1Bg_pdLsXfHVX0U8AESL7TaiBQLDy2G7j?usp=sharing)\n",
"metadata": "\"N/A\"",
"depth": 1,
"children": [],
"children_count": 0,
"adapters": [],
"adapters_count": 0,
"quantized": [],
"quantized_count": 0,
"merges": [],
"merges_count": 0,
"total_derivatives": 0,
"spaces": [],
"spaces_count": 0,
"parents": [
"HuggingFaceTB/SmolVLM-500M-Instruct"
],
"base_model": "BIOMEDICA/BMC-smolvlm1",
"base_model_relation": "finetune"
},
{
"model_id": "Pantelismak/smolvlm_cxr",
"gated": "False",
"card": "---\nbase_model: HuggingFaceTB/SmolVLM-500M-Instruct\nlibrary_name: transformers\nmodel_name: smolvlm_cxr\ntags:\n- generated_from_trainer\n- trl\n- sft\nlicence: license\n---\n\n# Model Card for smolvlm_cxr\n\nThis model is a fine-tuned version of [HuggingFaceTB/SmolVLM-500M-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM-500M-Instruct).\nIt has been trained using [TRL](https://github.com/huggingface/trl).\n\n## Quick start\n\n```python\nfrom transformers import pipeline\n\nquestion = \"If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?\"\ngenerator = pipeline(\"text-generation\", model=\"Pantelismak/smolvlm_cxr\", device=\"cuda\")\noutput = generator([{\"role\": \"user\", \"content\": question}], max_new_tokens=128, return_full_text=False)[0]\nprint(output[\"generated_text\"])\n```\n\n## Training procedure\n\n \n\n\nThis model was trained with SFT.\n\n### Framework versions\n\n- TRL: 0.17.0\n- Transformers: 4.51.3\n- Pytorch: 2.6.0+cu124\n- Datasets: 3.6.0\n- Tokenizers: 0.21.1\n\n## Citations\n\n\n\nCite TRL as:\n \n```bibtex\n@misc{vonwerra2022trl,\n\ttitle = {{TRL: Transformer Reinforcement Learning}},\n\tauthor = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\\'e}dec},\n\tyear = 2020,\n\tjournal = {GitHub repository},\n\tpublisher = {GitHub},\n\thowpublished = {\\url{https://github.com/huggingface/trl}}\n}\n```",
"metadata": "\"N/A\"",
"depth": 1,
"children": [],
"children_count": 0,
"adapters": [],
"adapters_count": 0,
"quantized": [],
"quantized_count": 0,
"merges": [],
"merges_count": 0,
"total_derivatives": 0,
"spaces": [],
"spaces_count": 0,
"parents": [
"HuggingFaceTB/SmolVLM-500M-Instruct"
],
"base_model": "Pantelismak/smolvlm_cxr",
"base_model_relation": "base"
},
{
"model_id": "JoseferEins/SmolVLM-500M-Instruct-fer0",
"gated": "unknown",
"card": "---\nlibrary_name: transformers\nlicense: apache-2.0\nbase_model: HuggingFaceTB/SmolVLM-500M-Instruct\ntags:\n- fine-tuned\n- vision-language\n- emotion-recognition\nmodel-index:\n- name: SmolVLM-500M-Instruct-fer0\n results: []\n---\n\n# SmolVLM-500M-Instruct-fer0\n\nFine-tuned version of [SmolVLM-500M-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM-500M-Instruct) on a subset of AffectNet (emotion recognition), with text labels transcribed via GPT-4o-mini.\n\n\nThis is just priliminary, we'll update soon with proper evalutation and info.\n## Example\n\n**Image input** \n\n\n**Predictions:** \n- *Base model*: A woman with blonde hair is looking to the side with a hand on her chin.\n- *This model*: The expression conveys a sense of contemplation or concern. The furrowed brow and slightly parted lips suggest a deep thought or worry. The hand on the chin indicates a hint of introspection, hinting at a possible emotional state of unease or contemplation.\n\n\n## Training Summary\n\n- **Loss values**: \n\n| Step | Training Loss |\n|-------|----------------|\n| 25 | 2.80 |\n| 50 | 0.82 |\n| 75 | 0.48 |\n| 100 | 0.43 |\n\n- **Hyperparameters**: \n - Learning rate: 1e-4 \n - Batch size: 4 (grad. accum. \u00d74) \n - Epochs: 1 \n - Optimizer: 8-bit AdamW \n - Scheduler: linear (warmup 50 steps) \n - Seed: 42\n\n## Frameworks\n\n- Transformers 4.50.0 \n- PyTorch 2.3.1+cu121 \n- Datasets 3.6.0 \n- Tokenizers 0.21.1\n",
"metadata": "\"N/A\"",
"depth": 1,
"children": [],
"children_count": 0,
"adapters": [],
"adapters_count": 0,
"quantized": [],
"quantized_count": 0,
"merges": [],
"merges_count": 0,
"total_derivatives": 0,
"spaces": [],
"spaces_count": 0,
"parents": [
"HuggingFaceTB/SmolVLM-500M-Instruct"
],
"base_model": null,
"base_model_relation": null
},
{
"model_id": "VishalD1234/SmolVLM-500M-Instruct-vqav2",
"gated": "False",
"card": "---\nlibrary_name: peft\nlicense: apache-2.0\nbase_model: HuggingFaceTB/SmolVLM-500M-Instruct\ntags:\n- generated_from_trainer\nmodel-index:\n- name: SmolVLM-500M-Instruct-vqav2\n results: []\n---\n\n<!-- This model card has been generated automatically according to the information the Trainer had access to. You\nshould probably proofread and complete it, then remove this comment. -->\n\n# SmolVLM-500M-Instruct-vqav2\n\nThis model is a fine-tuned version of [HuggingFaceTB/SmolVLM-500M-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM-500M-Instruct) on an unknown dataset.\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 3\n- eval_batch_size: 3\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 12\n- optimizer: Use OptimizerNames.PAGED_ADAMW_8BIT with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 50\n- num_epochs: 2\n\n### Training results\n\n\n\n### Framework versions\n\n- PEFT 0.14.0\n- Transformers 4.48.1\n- Pytorch 2.5.1+cu121\n- Datasets 3.2.0\n- Tokenizers 0.21.0",
"metadata": "\"N/A\"",
"depth": 1,
"children": [],
"children_count": 0,
"adapters": [],
"adapters_count": 0,
"quantized": [],
"quantized_count": 0,
"merges": [],
"merges_count": 0,
"total_derivatives": 0,
"spaces": [],
"spaces_count": 0,
"parents": [
"HuggingFaceTB/SmolVLM-500M-Instruct"
],
"base_model": "VishalD1234/SmolVLM-500M-Instruct-vqav2",
"base_model_relation": "base"
},
{
"model_id": "sasikaran04/SmolVLM-500M-Instruct-vqav2",
"gated": "False",
"card": "---\nlibrary_name: peft\nlicense: apache-2.0\nbase_model: HuggingFaceTB/SmolVLM-500M-Instruct\ntags:\n- generated_from_trainer\nmodel-index:\n- name: SmolVLM-500M-Instruct-vqav2\n results: []\n---\n\n<!-- This model card has been generated automatically according to the information the Trainer had access to. You\nshould probably proofread and complete it, then remove this comment. -->\n\n# SmolVLM-500M-Instruct-vqav2\n\nThis model is a fine-tuned version of [HuggingFaceTB/SmolVLM-500M-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM-500M-Instruct) on an unknown dataset.\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 3\n- eval_batch_size: 3\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 12\n- optimizer: Use OptimizerNames.PAGED_ADAMW_8BIT with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 50\n- num_epochs: 5\n\n### Training results\n\n\n\n### Framework versions\n\n- PEFT 0.14.0\n- Transformers 4.48.1\n- Pytorch 2.5.1+cu121\n- Datasets 3.2.0\n- Tokenizers 0.21.0",
"metadata": "\"N/A\"",
"depth": 1,
"children": [],
"children_count": 0,
"adapters": [],
"adapters_count": 0,
"quantized": [],
"quantized_count": 0,
"merges": [],
"merges_count": 0,
"total_derivatives": 0,
"spaces": [],
"spaces_count": 0,
"parents": [
"HuggingFaceTB/SmolVLM-500M-Instruct"
],
"base_model": "sasikaran04/SmolVLM-500M-Instruct-vqav2",
"base_model_relation": "base"
},
{
"model_id": "Hirai-Labs/FT-SmolVLM-500M-Instruct-ALPR",
"gated": "False",
"card": "---\nlibrary_name: peft\nlicense: apache-2.0\nbase_model: HuggingFaceTB/SmolVLM-500M-Instruct\ntags:\n- generated_from_trainer\nmodel-index:\n- name: FT-SmolVLM-500M-Instruct-ALPR\n results: []\n---\n\n<!-- This model card has been generated automatically according to the information the Trainer had access to. You\nshould probably proofread and complete it, then remove this comment. -->\n\n# FT-SmolVLM-500M-Instruct-ALPR\n\nThis model is a fine-tuned version of [HuggingFaceTB/SmolVLM-500M-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM-500M-Instruct) on an unknown dataset.\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 1\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 4\n- optimizer: Use paged_adamw_8bit with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 50\n- num_epochs: 10\n\n### Training results\n\n\n\n### Framework versions\n\n- PEFT 0.14.0\n- Transformers 4.47.0\n- Pytorch 2.5.1+cu121\n- Datasets 3.2.0\n- Tokenizers 0.21.0",
"metadata": "\"N/A\"",
"depth": 1,
"children": [],
"children_count": 0,
"adapters": [],
"adapters_count": 0,
"quantized": [],
"quantized_count": 0,
"merges": [],
"merges_count": 0,
"total_derivatives": 0,
"spaces": [],
"spaces_count": 0,
"parents": [
"HuggingFaceTB/SmolVLM-500M-Instruct"
],
"base_model": "Hirai-Labs/FT-SmolVLM-500M-Instruct-ALPR",
"base_model_relation": "base"
},
{
"model_id": "revitotan/FT-SmolVLM-500M-Instruct-Helmet",
"gated": "False",
"card": "---\nlibrary_name: peft\nlicense: apache-2.0\nbase_model: HuggingFaceTB/SmolVLM-500M-Instruct\ntags:\n- generated_from_trainer\nmodel-index:\n- name: FT-SmolVLM-500M-Instruct-Helmet\n results: []\n---\n\n<!-- This model card has been generated automatically according to the information the Trainer had access to. You\nshould probably proofread and complete it, then remove this comment. -->\n\n[<img src=\"https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg\" alt=\"Visualize in Weights & Biases\" width=\"200\" height=\"32\"/>](https://wandb.ai/revitopradipa-muhammadiyah-university-of-surakarta/HelmetVLM/runs/lg1n8bj5)\n# FT-SmolVLM-500M-Instruct-Helmet\n\nThis model is a fine-tuned version of [HuggingFaceTB/SmolVLM-500M-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM-500M-Instruct) on an unknown dataset.\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 1\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 4\n- optimizer: Use paged_adamw_8bit with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 50\n- num_epochs: 10\n\n### Framework versions\n\n- PEFT 0.14.0\n- Transformers 4.47.0\n- Pytorch 2.5.1+cu121\n- Datasets 3.3.1\n- Tokenizers 0.21.0",
"metadata": "\"N/A\"",
"depth": 1,
"children": [],
"children_count": 0,
"adapters": [],
"adapters_count": 0,
"quantized": [],
"quantized_count": 0,
"merges": [],
"merges_count": 0,
"total_derivatives": 0,
"spaces": [],
"spaces_count": 0,
"parents": [
"HuggingFaceTB/SmolVLM-500M-Instruct"
],
"base_model": "revitotan/FT-SmolVLM-500M-Instruct-Helmet",
"base_model_relation": "base"
},
{
"model_id": "dkhanh/SmolVLM-500M-Instruct-earths",
"gated": "False",
"card": "---\nlibrary_name: peft\nlicense: apache-2.0\nbase_model: HuggingFaceTB/SmolVLM-500M-Instruct\ntags:\n- generated_from_trainer\nmodel-index:\n- name: SmolVLM-500M-Instruct-earths\n results: []\n---\n\n<!-- This model card has been generated automatically according to the information the Trainer had access to. You\nshould probably proofread and complete it, then remove this comment. -->\n\n# SmolVLM-500M-Instruct-earths\n\nThis model is a fine-tuned version of [HuggingFaceTB/SmolVLM-500M-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM-500M-Instruct) on an unknown dataset.\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 8\n- optimizer: Use paged_adamw_8bit with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments\n- lr_scheduler_type: linear\n- num_epochs: 3\n\n### Training results\n\n\n\n### Framework versions\n\n- PEFT 0.15.1\n- Transformers 4.51.3\n- Pytorch 2.6.0+cu124\n- Datasets 3.5.0\n- Tokenizers 0.21.1",
"metadata": "\"N/A\"",
"depth": 1,
"children": [],
"children_count": 0,
"adapters": [],
"adapters_count": 0,
"quantized": [],
"quantized_count": 0,
"merges": [],
"merges_count": 0,
"total_derivatives": 0,
"spaces": [],
"spaces_count": 0,
"parents": [
"HuggingFaceTB/SmolVLM-500M-Instruct"
],
"base_model": "dkhanh/SmolVLM-500M-Instruct-earths",
"base_model_relation": "base"
},
{
"model_id": "dkhanh/SmolVLM-500M-Instruct-earth-v0",
"gated": "False",
"card": "---\nlibrary_name: peft\nlicense: apache-2.0\nbase_model: HuggingFaceTB/SmolVLM-500M-Instruct\ntags:\n- generated_from_trainer\nmodel-index:\n- name: SmolVLM-500M-Instruct-earth-v0\n results: []\n---\n\n<!-- This model card has been generated automatically according to the information the Trainer had access to. You\nshould probably proofread and complete it, then remove this comment. -->\n\n# SmolVLM-500M-Instruct-earth-v0\n\nThis model is a fine-tuned version of [HuggingFaceTB/SmolVLM-500M-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM-500M-Instruct) on an unknown dataset.\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 4e-05\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 8\n- optimizer: Use paged_adamw_8bit with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments\n- lr_scheduler_type: linear\n- num_epochs: 4\n\n### Training results\n\n\n\n### Framework versions\n\n- PEFT 0.13.2\n- Transformers 4.51.3\n- Pytorch 2.6.0+cu124\n- Datasets 3.5.0\n- Tokenizers 0.21.1",
"metadata": "\"N/A\"",
"depth": 1,
"children": [],
"children_count": 0,
"adapters": [],
"adapters_count": 0,
"quantized": [],
"quantized_count": 0,
"merges": [],
"merges_count": 0,
"total_derivatives": 0,
"spaces": [],
"spaces_count": 0,
"parents": [
"HuggingFaceTB/SmolVLM-500M-Instruct"
],
"base_model": "dkhanh/SmolVLM-500M-Instruct-earth-v0",
"base_model_relation": "base"
},
{
"model_id": "dkhanh/SmolVLM-500M-Instruct-earth-v1",
"gated": "False",
"card": "---\nlibrary_name: peft\nlicense: apache-2.0\nbase_model: HuggingFaceTB/SmolVLM-500M-Instruct\ntags:\n- generated_from_trainer\nmodel-index:\n- name: SmolVLM-500M-Instruct-earth-v1\n results: []\n---\n\n<!-- This model card has been generated automatically according to the information the Trainer had access to. You\nshould probably proofread and complete it, then remove this comment. -->\n\n# SmolVLM-500M-Instruct-earth-v1\n\nThis model is a fine-tuned version of [HuggingFaceTB/SmolVLM-500M-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM-500M-Instruct) on an unknown dataset.\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 8\n- total_train_batch_size: 16\n- optimizer: Use paged_adamw_8bit with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 5\n\n### Training results\n\n\n\n### Framework versions\n\n- PEFT 0.15.2\n- Transformers 4.51.3\n- Pytorch 2.7.0+cu126\n- Datasets 3.5.0\n- Tokenizers 0.21.1",
"metadata": "\"N/A\"",
"depth": 1,
"children": [],
"children_count": 0,
"adapters": [],
"adapters_count": 0,
"quantized": [],
"quantized_count": 0,
"merges": [],
"merges_count": 0,
"total_derivatives": 0,
"spaces": [],
"spaces_count": 0,
"parents": [
"HuggingFaceTB/SmolVLM-500M-Instruct"
],
"base_model": "dkhanh/SmolVLM-500M-Instruct-earth-v1",
"base_model_relation": "base"
},
{
"model_id": "dkhanh/SmolVLM-500M-Instruct-earths-v1",
"gated": "False",
"card": "---\nlibrary_name: peft\nlicense: apache-2.0\nbase_model: HuggingFaceTB/SmolVLM-500M-Instruct\ntags:\n- generated_from_trainer\nmodel-index:\n- name: SmolVLM-500M-Instruct-earths-v1\n results: []\n---\n\n<!-- This model card has been generated automatically according to the information the Trainer had access to. You\nshould probably proofread and complete it, then remove this comment. -->\n\n# SmolVLM-500M-Instruct-earths-v1\n\nThis model is a fine-tuned version of [HuggingFaceTB/SmolVLM-500M-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM-500M-Instruct) on an unknown dataset.\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 16\n- optimizer: Use paged_adamw_8bit with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n\n### Training results\n\n\n\n### Framework versions\n\n- PEFT 0.15.2\n- Transformers 4.51.3\n- Pytorch 2.7.0+cu126\n- Datasets 3.5.0\n- Tokenizers 0.21.1",
"metadata": "\"N/A\"",
"depth": 1,
"children": [],
"children_count": 0,
"adapters": [],
"adapters_count": 0,
"quantized": [],
"quantized_count": 0,
"merges": [],
"merges_count": 0,
"total_derivatives": 0,
"spaces": [],
"spaces_count": 0,
"parents": [
"HuggingFaceTB/SmolVLM-500M-Instruct"
],
"base_model": "dkhanh/SmolVLM-500M-Instruct-earths-v1",
"base_model_relation": "base"
},
{
"model_id": "samlucas/smolvlm_500m-parking_occupancy-PKLot-instruct-without-context-without-expert",
"gated": "False",
"card": "---\nbase_model: HuggingFaceTB/SmolVLM-500M-Instruct\nlibrary_name: peft\n---\n\n# Model Card for Model ID\n\n<!-- Provide a quick summary of what the model is/does. -->\n\n\n\n## Model Details\n\n### Model Description\n\n<!-- Provide a longer summary of what this model is. -->\n\n\n\n- **Developed by:** [More Information Needed]\n- **Funded by [optional]:** [More Information Needed]\n- **Shared by [optional]:** [More Information Needed]\n- **Model type:** [More Information Needed]\n- **Language(s) (NLP):** [More Information Needed]\n- **License:** [More Information Needed]\n- **Finetuned from model [optional]:** [More Information Needed]\n\n### Model Sources [optional]\n\n<!-- Provide the basic links for the model. -->\n\n- **Repository:** [More Information Needed]\n- **Paper [optional]:** [More Information Needed]\n- **Demo [optional]:** [More Information Needed]\n\n## Uses\n\n<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->\n\n### Direct Use\n\n<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->\n\n[More Information Needed]\n\n### Downstream Use [optional]\n\n<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->\n\n[More Information Needed]\n\n### Out-of-Scope Use\n\n<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->\n\n[More Information Needed]\n\n## Bias, Risks, and Limitations\n\n<!-- This section is meant to convey both technical and sociotechnical limitations. -->\n\n[More Information Needed]\n\n### Recommendations\n\n<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.\n\n## How to Get Started with the Model\n\nUse the code below to get started with the model.\n\n[More Information Needed]\n\n## Training Details\n\n### Training Data\n\n<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->\n\n[More Information Needed]\n\n### Training Procedure\n\n<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->\n\n#### Preprocessing [optional]\n\n[More Information Needed]\n\n\n#### Training Hyperparameters\n\n- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->\n\n#### Speeds, Sizes, Times [optional]\n\n<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->\n\n[More Information Needed]\n\n## Evaluation\n\n<!-- This section describes the evaluation protocols and provides the results. -->\n\n### Testing Data, Factors & Metrics\n\n#### Testing Data\n\n<!-- This should link to a Dataset Card if possible. -->\n\n[More Information Needed]\n\n#### Factors\n\n<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->\n\n[More Information Needed]\n\n#### Metrics\n\n<!-- These are the evaluation metrics being used, ideally with a description of why. -->\n\n[More Information Needed]\n\n### Results\n\n[More Information Needed]\n\n#### Summary\n\n\n\n## Model Examination [optional]\n\n<!-- Relevant interpretability work for the model goes here -->\n\n[More Information Needed]\n\n## Environmental Impact\n\n<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->\n\nCarbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).\n\n- **Hardware Type:** [More Information Needed]\n- **Hours used:** [More Information Needed]\n- **Cloud Provider:** [More Information Needed]\n- **Compute Region:** [More Information Needed]\n- **Carbon Emitted:** [More Information Needed]\n\n## Technical Specifications [optional]\n\n### Model Architecture and Objective\n\n[More Information Needed]\n\n### Compute Infrastructure\n\n[More Information Needed]\n\n#### Hardware\n\n[More Information Needed]\n\n#### Software\n\n[More Information Needed]\n\n## Citation [optional]\n\n<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->\n\n**BibTeX:**\n\n[More Information Needed]\n\n**APA:**\n\n[More Information Needed]\n\n## Glossary [optional]\n\n<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->\n\n[More Information Needed]\n\n## More Information [optional]\n\n[More Information Needed]\n\n## Model Card Authors [optional]\n\n[More Information Needed]\n\n## Model Card Contact\n\n[More Information Needed]\n### Framework versions\n\n- PEFT 0.15.2",
"metadata": "\"N/A\"",
"depth": 1,
"children": [],
"children_count": 0,
"adapters": [],
"adapters_count": 0,
"quantized": [],
"quantized_count": 0,
"merges": [],
"merges_count": 0,
"total_derivatives": 0,
"spaces": [],
"spaces_count": 0,
"parents": [
"HuggingFaceTB/SmolVLM-500M-Instruct"
],
"base_model": "samlucas/smolvlm_500m-parking_occupancy-PKLot-instruct-without-context-without-expert",
"base_model_relation": "base"
},
{
"model_id": "samlucas/smolvlm_500m-parking_occupancy-PKLot-instruct-with-context-without-expert",
"gated": "False",
"card": "---\nbase_model: HuggingFaceTB/SmolVLM-500M-Instruct\nlibrary_name: peft\n---\n\n# Model Card for Model ID\n\n<!-- Provide a quick summary of what the model is/does. -->\n\n\n\n## Model Details\n\n### Model Description\n\n<!-- Provide a longer summary of what this model is. -->\n\n\n\n- **Developed by:** [More Information Needed]\n- **Funded by [optional]:** [More Information Needed]\n- **Shared by [optional]:** [More Information Needed]\n- **Model type:** [More Information Needed]\n- **Language(s) (NLP):** [More Information Needed]\n- **License:** [More Information Needed]\n- **Finetuned from model [optional]:** [More Information Needed]\n\n### Model Sources [optional]\n\n<!-- Provide the basic links for the model. -->\n\n- **Repository:** [More Information Needed]\n- **Paper [optional]:** [More Information Needed]\n- **Demo [optional]:** [More Information Needed]\n\n## Uses\n\n<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->\n\n### Direct Use\n\n<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->\n\n[More Information Needed]\n\n### Downstream Use [optional]\n\n<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->\n\n[More Information Needed]\n\n### Out-of-Scope Use\n\n<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->\n\n[More Information Needed]\n\n## Bias, Risks, and Limitations\n\n<!-- This section is meant to convey both technical and sociotechnical limitations. -->\n\n[More Information Needed]\n\n### Recommendations\n\n<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.\n\n## How to Get Started with the Model\n\nUse the code below to get started with the model.\n\n[More Information Needed]\n\n## Training Details\n\n### Training Data\n\n<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->\n\n[More Information Needed]\n\n### Training Procedure\n\n<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->\n\n#### Preprocessing [optional]\n\n[More Information Needed]\n\n\n#### Training Hyperparameters\n\n- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->\n\n#### Speeds, Sizes, Times [optional]\n\n<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->\n\n[More Information Needed]\n\n## Evaluation\n\n<!-- This section describes the evaluation protocols and provides the results. -->\n\n### Testing Data, Factors & Metrics\n\n#### Testing Data\n\n<!-- This should link to a Dataset Card if possible. -->\n\n[More Information Needed]\n\n#### Factors\n\n<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->\n\n[More Information Needed]\n\n#### Metrics\n\n<!-- These are the evaluation metrics being used, ideally with a description of why. -->\n\n[More Information Needed]\n\n### Results\n\n[More Information Needed]\n\n#### Summary\n\n\n\n## Model Examination [optional]\n\n<!-- Relevant interpretability work for the model goes here -->\n\n[More Information Needed]\n\n## Environmental Impact\n\n<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->\n\nCarbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).\n\n- **Hardware Type:** [More Information Needed]\n- **Hours used:** [More Information Needed]\n- **Cloud Provider:** [More Information Needed]\n- **Compute Region:** [More Information Needed]\n- **Carbon Emitted:** [More Information Needed]\n\n## Technical Specifications [optional]\n\n### Model Architecture and Objective\n\n[More Information Needed]\n\n### Compute Infrastructure\n\n[More Information Needed]\n\n#### Hardware\n\n[More Information Needed]\n\n#### Software\n\n[More Information Needed]\n\n## Citation [optional]\n\n<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->\n\n**BibTeX:**\n\n[More Information Needed]\n\n**APA:**\n\n[More Information Needed]\n\n## Glossary [optional]\n\n<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->\n\n[More Information Needed]\n\n## More Information [optional]\n\n[More Information Needed]\n\n## Model Card Authors [optional]\n\n[More Information Needed]\n\n## Model Card Contact\n\n[More Information Needed]\n### Framework versions\n\n- PEFT 0.15.2",
"metadata": "\"N/A\"",
"depth": 1,
"children": [],
"children_count": 0,
"adapters": [],
"adapters_count": 0,
"quantized": [],
"quantized_count": 0,
"merges": [],
"merges_count": 0,
"total_derivatives": 0,
"spaces": [],
"spaces_count": 0,
"parents": [
"HuggingFaceTB/SmolVLM-500M-Instruct"
],
"base_model": "samlucas/smolvlm_500m-parking_occupancy-PKLot-instruct-with-context-without-expert",
"base_model_relation": "base"
},
{
"model_id": "samlucas/smolvlm_500m-parking_occupancy-PKLot-instruct-with-context-with-expert",
"gated": "False",
"card": "---\nbase_model: HuggingFaceTB/SmolVLM-500M-Instruct\nlibrary_name: peft\n---\n\n# Model Card for Model ID\n\n<!-- Provide a quick summary of what the model is/does. -->\n\n\n\n## Model Details\n\n### Model Description\n\n<!-- Provide a longer summary of what this model is. -->\n\n\n\n- **Developed by:** [More Information Needed]\n- **Funded by [optional]:** [More Information Needed]\n- **Shared by [optional]:** [More Information Needed]\n- **Model type:** [More Information Needed]\n- **Language(s) (NLP):** [More Information Needed]\n- **License:** [More Information Needed]\n- **Finetuned from model [optional]:** [More Information Needed]\n\n### Model Sources [optional]\n\n<!-- Provide the basic links for the model. -->\n\n- **Repository:** [More Information Needed]\n- **Paper [optional]:** [More Information Needed]\n- **Demo [optional]:** [More Information Needed]\n\n## Uses\n\n<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->\n\n### Direct Use\n\n<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->\n\n[More Information Needed]\n\n### Downstream Use [optional]\n\n<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->\n\n[More Information Needed]\n\n### Out-of-Scope Use\n\n<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->\n\n[More Information Needed]\n\n## Bias, Risks, and Limitations\n\n<!-- This section is meant to convey both technical and sociotechnical limitations. -->\n\n[More Information Needed]\n\n### Recommendations\n\n<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.\n\n## How to Get Started with the Model\n\nUse the code below to get started with the model.\n\n[More Information Needed]\n\n## Training Details\n\n### Training Data\n\n<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->\n\n[More Information Needed]\n\n### Training Procedure\n\n<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->\n\n#### Preprocessing [optional]\n\n[More Information Needed]\n\n\n#### Training Hyperparameters\n\n- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->\n\n#### Speeds, Sizes, Times [optional]\n\n<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->\n\n[More Information Needed]\n\n## Evaluation\n\n<!-- This section describes the evaluation protocols and provides the results. -->\n\n### Testing Data, Factors & Metrics\n\n#### Testing Data\n\n<!-- This should link to a Dataset Card if possible. -->\n\n[More Information Needed]\n\n#### Factors\n\n<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->\n\n[More Information Needed]\n\n#### Metrics\n\n<!-- These are the evaluation metrics being used, ideally with a description of why. -->\n\n[More Information Needed]\n\n### Results\n\n[More Information Needed]\n\n#### Summary\n\n\n\n## Model Examination [optional]\n\n<!-- Relevant interpretability work for the model goes here -->\n\n[More Information Needed]\n\n## Environmental Impact\n\n<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->\n\nCarbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).\n\n- **Hardware Type:** [More Information Needed]\n- **Hours used:** [More Information Needed]\n- **Cloud Provider:** [More Information Needed]\n- **Compute Region:** [More Information Needed]\n- **Carbon Emitted:** [More Information Needed]\n\n## Technical Specifications [optional]\n\n### Model Architecture and Objective\n\n[More Information Needed]\n\n### Compute Infrastructure\n\n[More Information Needed]\n\n#### Hardware\n\n[More Information Needed]\n\n#### Software\n\n[More Information Needed]\n\n## Citation [optional]\n\n<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->\n\n**BibTeX:**\n\n[More Information Needed]\n\n**APA:**\n\n[More Information Needed]\n\n## Glossary [optional]\n\n<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->\n\n[More Information Needed]\n\n## More Information [optional]\n\n[More Information Needed]\n\n## Model Card Authors [optional]\n\n[More Information Needed]\n\n## Model Card Contact\n\n[More Information Needed]\n### Framework versions\n\n- PEFT 0.15.2",
"metadata": "\"N/A\"",
"depth": 1,
"children": [],
"children_count": 0,
"adapters": [],
"adapters_count": 0,
"quantized": [],
"quantized_count": 0,
"merges": [],
"merges_count": 0,
"total_derivatives": 0,
"spaces": [],
"spaces_count": 0,
"parents": [
"HuggingFaceTB/SmolVLM-500M-Instruct"
],
"base_model": "samlucas/smolvlm_500m-parking_occupancy-PKLot-instruct-with-context-with-expert",
"base_model_relation": "base"
},
{
"model_id": "samlucas/smolvlm_500m-parking_occupancy-PKLot-instruct-without-context-with-expert",
"gated": "False",
"card": "---\nbase_model: HuggingFaceTB/SmolVLM-500M-Instruct\nlibrary_name: peft\n---\n\n# Model Card for Model ID\n\n<!-- Provide a quick summary of what the model is/does. -->\n\n\n\n## Model Details\n\n### Model Description\n\n<!-- Provide a longer summary of what this model is. -->\n\n\n\n- **Developed by:** [More Information Needed]\n- **Funded by [optional]:** [More Information Needed]\n- **Shared by [optional]:** [More Information Needed]\n- **Model type:** [More Information Needed]\n- **Language(s) (NLP):** [More Information Needed]\n- **License:** [More Information Needed]\n- **Finetuned from model [optional]:** [More Information Needed]\n\n### Model Sources [optional]\n\n<!-- Provide the basic links for the model. -->\n\n- **Repository:** [More Information Needed]\n- **Paper [optional]:** [More Information Needed]\n- **Demo [optional]:** [More Information Needed]\n\n## Uses\n\n<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->\n\n### Direct Use\n\n<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->\n\n[More Information Needed]\n\n### Downstream Use [optional]\n\n<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->\n\n[More Information Needed]\n\n### Out-of-Scope Use\n\n<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->\n\n[More Information Needed]\n\n## Bias, Risks, and Limitations\n\n<!-- This section is meant to convey both technical and sociotechnical limitations. -->\n\n[More Information Needed]\n\n### Recommendations\n\n<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.\n\n## How to Get Started with the Model\n\nUse the code below to get started with the model.\n\n[More Information Needed]\n\n## Training Details\n\n### Training Data\n\n<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->\n\n[More Information Needed]\n\n### Training Procedure\n\n<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->\n\n#### Preprocessing [optional]\n\n[More Information Needed]\n\n\n#### Training Hyperparameters\n\n- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->\n\n#### Speeds, Sizes, Times [optional]\n\n<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->\n\n[More Information Needed]\n\n## Evaluation\n\n<!-- This section describes the evaluation protocols and provides the results. -->\n\n### Testing Data, Factors & Metrics\n\n#### Testing Data\n\n<!-- This should link to a Dataset Card if possible. -->\n\n[More Information Needed]\n\n#### Factors\n\n<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->\n\n[More Information Needed]\n\n#### Metrics\n\n<!-- These are the evaluation metrics being used, ideally with a description of why. -->\n\n[More Information Needed]\n\n### Results\n\n[More Information Needed]\n\n#### Summary\n\n\n\n## Model Examination [optional]\n\n<!-- Relevant interpretability work for the model goes here -->\n\n[More Information Needed]\n\n## Environmental Impact\n\n<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->\n\nCarbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).\n\n- **Hardware Type:** [More Information Needed]\n- **Hours used:** [More Information Needed]\n- **Cloud Provider:** [More Information Needed]\n- **Compute Region:** [More Information Needed]\n- **Carbon Emitted:** [More Information Needed]\n\n## Technical Specifications [optional]\n\n### Model Architecture and Objective\n\n[More Information Needed]\n\n### Compute Infrastructure\n\n[More Information Needed]\n\n#### Hardware\n\n[More Information Needed]\n\n#### Software\n\n[More Information Needed]\n\n## Citation [optional]\n\n<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->\n\n**BibTeX:**\n\n[More Information Needed]\n\n**APA:**\n\n[More Information Needed]\n\n## Glossary [optional]\n\n<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->\n\n[More Information Needed]\n\n## More Information [optional]\n\n[More Information Needed]\n\n## Model Card Authors [optional]\n\n[More Information Needed]\n\n## Model Card Contact\n\n[More Information Needed]\n### Framework versions\n\n- PEFT 0.15.2",
"metadata": "\"N/A\"",
"depth": 1,
"children": [],
"children_count": 0,
"adapters": [],
"adapters_count": 0,
"quantized": [],
"quantized_count": 0,
"merges": [],
"merges_count": 0,
"total_derivatives": 0,
"spaces": [],
"spaces_count": 0,
"parents": [
"HuggingFaceTB/SmolVLM-500M-Instruct"
],
"base_model": "samlucas/smolvlm_500m-parking_occupancy-PKLot-instruct-without-context-with-expert",
"base_model_relation": "base"
},
{
"model_id": "samlucas/smolvlm_500m-parking_occupancy-CNRPark-instruct-with-context-with-expert",
"gated": "False",
"card": "---\nbase_model: HuggingFaceTB/SmolVLM-500M-Instruct\nlibrary_name: peft\n---\n\n# Model Card for Model ID\n\n<!-- Provide a quick summary of what the model is/does. -->\n\n\n\n## Model Details\n\n### Model Description\n\n<!-- Provide a longer summary of what this model is. -->\n\n\n\n- **Developed by:** [More Information Needed]\n- **Funded by [optional]:** [More Information Needed]\n- **Shared by [optional]:** [More Information Needed]\n- **Model type:** [More Information Needed]\n- **Language(s) (NLP):** [More Information Needed]\n- **License:** [More Information Needed]\n- **Finetuned from model [optional]:** [More Information Needed]\n\n### Model Sources [optional]\n\n<!-- Provide the basic links for the model. -->\n\n- **Repository:** [More Information Needed]\n- **Paper [optional]:** [More Information Needed]\n- **Demo [optional]:** [More Information Needed]\n\n## Uses\n\n<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->\n\n### Direct Use\n\n<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->\n\n[More Information Needed]\n\n### Downstream Use [optional]\n\n<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->\n\n[More Information Needed]\n\n### Out-of-Scope Use\n\n<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->\n\n[More Information Needed]\n\n## Bias, Risks, and Limitations\n\n<!-- This section is meant to convey both technical and sociotechnical limitations. -->\n\n[More Information Needed]\n\n### Recommendations\n\n<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.\n\n## How to Get Started with the Model\n\nUse the code below to get started with the model.\n\n[More Information Needed]\n\n## Training Details\n\n### Training Data\n\n<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->\n\n[More Information Needed]\n\n### Training Procedure\n\n<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->\n\n#### Preprocessing [optional]\n\n[More Information Needed]\n\n\n#### Training Hyperparameters\n\n- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->\n\n#### Speeds, Sizes, Times [optional]\n\n<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->\n\n[More Information Needed]\n\n## Evaluation\n\n<!-- This section describes the evaluation protocols and provides the results. -->\n\n### Testing Data, Factors & Metrics\n\n#### Testing Data\n\n<!-- This should link to a Dataset Card if possible. -->\n\n[More Information Needed]\n\n#### Factors\n\n<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->\n\n[More Information Needed]\n\n#### Metrics\n\n<!-- These are the evaluation metrics being used, ideally with a description of why. -->\n\n[More Information Needed]\n\n### Results\n\n[More Information Needed]\n\n#### Summary\n\n\n\n## Model Examination [optional]\n\n<!-- Relevant interpretability work for the model goes here -->\n\n[More Information Needed]\n\n## Environmental Impact\n\n<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->\n\nCarbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).\n\n- **Hardware Type:** [More Information Needed]\n- **Hours used:** [More Information Needed]\n- **Cloud Provider:** [More Information Needed]\n- **Compute Region:** [More Information Needed]\n- **Carbon Emitted:** [More Information Needed]\n\n## Technical Specifications [optional]\n\n### Model Architecture and Objective\n\n[More Information Needed]\n\n### Compute Infrastructure\n\n[More Information Needed]\n\n#### Hardware\n\n[More Information Needed]\n\n#### Software\n\n[More Information Needed]\n\n## Citation [optional]\n\n<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->\n\n**BibTeX:**\n\n[More Information Needed]\n\n**APA:**\n\n[More Information Needed]\n\n## Glossary [optional]\n\n<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->\n\n[More Information Needed]\n\n## More Information [optional]\n\n[More Information Needed]\n\n## Model Card Authors [optional]\n\n[More Information Needed]\n\n## Model Card Contact\n\n[More Information Needed]\n### Framework versions\n\n- PEFT 0.15.2",
"metadata": "\"N/A\"",
"depth": 1,
"children": [],
"children_count": 0,
"adapters": [],
"adapters_count": 0,
"quantized": [],
"quantized_count": 0,
"merges": [],
"merges_count": 0,
"total_derivatives": 0,
"spaces": [],
"spaces_count": 0,
"parents": [
"HuggingFaceTB/SmolVLM-500M-Instruct"
],
"base_model": "samlucas/smolvlm_500m-parking_occupancy-CNRPark-instruct-with-context-with-expert",
"base_model_relation": "base"
},
{
"model_id": "samlucas/smolvlm_500m-parking_occupancy-CNRPark-instruct-with-context-without-expert",
"gated": "False",
"card": "---\nbase_model: HuggingFaceTB/SmolVLM-500M-Instruct\nlibrary_name: peft\n---\n\n# Model Card for Model ID\n\n<!-- Provide a quick summary of what the model is/does. -->\n\n\n\n## Model Details\n\n### Model Description\n\n<!-- Provide a longer summary of what this model is. -->\n\n\n\n- **Developed by:** [More Information Needed]\n- **Funded by [optional]:** [More Information Needed]\n- **Shared by [optional]:** [More Information Needed]\n- **Model type:** [More Information Needed]\n- **Language(s) (NLP):** [More Information Needed]\n- **License:** [More Information Needed]\n- **Finetuned from model [optional]:** [More Information Needed]\n\n### Model Sources [optional]\n\n<!-- Provide the basic links for the model. -->\n\n- **Repository:** [More Information Needed]\n- **Paper [optional]:** [More Information Needed]\n- **Demo [optional]:** [More Information Needed]\n\n## Uses\n\n<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->\n\n### Direct Use\n\n<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->\n\n[More Information Needed]\n\n### Downstream Use [optional]\n\n<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->\n\n[More Information Needed]\n\n### Out-of-Scope Use\n\n<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->\n\n[More Information Needed]\n\n## Bias, Risks, and Limitations\n\n<!-- This section is meant to convey both technical and sociotechnical limitations. -->\n\n[More Information Needed]\n\n### Recommendations\n\n<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.\n\n## How to Get Started with the Model\n\nUse the code below to get started with the model.\n\n[More Information Needed]\n\n## Training Details\n\n### Training Data\n\n<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->\n\n[More Information Needed]\n\n### Training Procedure\n\n<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->\n\n#### Preprocessing [optional]\n\n[More Information Needed]\n\n\n#### Training Hyperparameters\n\n- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->\n\n#### Speeds, Sizes, Times [optional]\n\n<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->\n\n[More Information Needed]\n\n## Evaluation\n\n<!-- This section describes the evaluation protocols and provides the results. -->\n\n### Testing Data, Factors & Metrics\n\n#### Testing Data\n\n<!-- This should link to a Dataset Card if possible. -->\n\n[More Information Needed]\n\n#### Factors\n\n<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->\n\n[More Information Needed]\n\n#### Metrics\n\n<!-- These are the evaluation metrics being used, ideally with a description of why. -->\n\n[More Information Needed]\n\n### Results\n\n[More Information Needed]\n\n#### Summary\n\n\n\n## Model Examination [optional]\n\n<!-- Relevant interpretability work for the model goes here -->\n\n[More Information Needed]\n\n## Environmental Impact\n\n<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->\n\nCarbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).\n\n- **Hardware Type:** [More Information Needed]\n- **Hours used:** [More Information Needed]\n- **Cloud Provider:** [More Information Needed]\n- **Compute Region:** [More Information Needed]\n- **Carbon Emitted:** [More Information Needed]\n\n## Technical Specifications [optional]\n\n### Model Architecture and Objective\n\n[More Information Needed]\n\n### Compute Infrastructure\n\n[More Information Needed]\n\n#### Hardware\n\n[More Information Needed]\n\n#### Software\n\n[More Information Needed]\n\n## Citation [optional]\n\n<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->\n\n**BibTeX:**\n\n[More Information Needed]\n\n**APA:**\n\n[More Information Needed]\n\n## Glossary [optional]\n\n<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->\n\n[More Information Needed]\n\n## More Information [optional]\n\n[More Information Needed]\n\n## Model Card Authors [optional]\n\n[More Information Needed]\n\n## Model Card Contact\n\n[More Information Needed]\n### Framework versions\n\n- PEFT 0.15.2",
"metadata": "\"N/A\"",
"depth": 1,
"children": [],
"children_count": 0,
"adapters": [],
"adapters_count": 0,
"quantized": [],
"quantized_count": 0,
"merges": [],
"merges_count": 0,
"total_derivatives": 0,
"spaces": [],
"spaces_count": 0,
"parents": [
"HuggingFaceTB/SmolVLM-500M-Instruct"
],
"base_model": "samlucas/smolvlm_500m-parking_occupancy-CNRPark-instruct-with-context-without-expert",
"base_model_relation": "base"
},
{
"model_id": "samlucas/smolvlm_500m-parking_occupancy-CNRPark-instruct-without-context-with-expert",
"gated": "False",
"card": "---\nbase_model: HuggingFaceTB/SmolVLM-500M-Instruct\nlibrary_name: peft\n---\n\n# Model Card for Model ID\n\n<!-- Provide a quick summary of what the model is/does. -->\n\n\n\n## Model Details\n\n### Model Description\n\n<!-- Provide a longer summary of what this model is. -->\n\n\n\n- **Developed by:** [More Information Needed]\n- **Funded by [optional]:** [More Information Needed]\n- **Shared by [optional]:** [More Information Needed]\n- **Model type:** [More Information Needed]\n- **Language(s) (NLP):** [More Information Needed]\n- **License:** [More Information Needed]\n- **Finetuned from model [optional]:** [More Information Needed]\n\n### Model Sources [optional]\n\n<!-- Provide the basic links for the model. -->\n\n- **Repository:** [More Information Needed]\n- **Paper [optional]:** [More Information Needed]\n- **Demo [optional]:** [More Information Needed]\n\n## Uses\n\n<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->\n\n### Direct Use\n\n<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->\n\n[More Information Needed]\n\n### Downstream Use [optional]\n\n<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->\n\n[More Information Needed]\n\n### Out-of-Scope Use\n\n<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->\n\n[More Information Needed]\n\n## Bias, Risks, and Limitations\n\n<!-- This section is meant to convey both technical and sociotechnical limitations. -->\n\n[More Information Needed]\n\n### Recommendations\n\n<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.\n\n## How to Get Started with the Model\n\nUse the code below to get started with the model.\n\n[More Information Needed]\n\n## Training Details\n\n### Training Data\n\n<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->\n\n[More Information Needed]\n\n### Training Procedure\n\n<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->\n\n#### Preprocessing [optional]\n\n[More Information Needed]\n\n\n#### Training Hyperparameters\n\n- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->\n\n#### Speeds, Sizes, Times [optional]\n\n<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->\n\n[More Information Needed]\n\n## Evaluation\n\n<!-- This section describes the evaluation protocols and provides the results. -->\n\n### Testing Data, Factors & Metrics\n\n#### Testing Data\n\n<!-- This should link to a Dataset Card if possible. -->\n\n[More Information Needed]\n\n#### Factors\n\n<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->\n\n[More Information Needed]\n\n#### Metrics\n\n<!-- These are the evaluation metrics being used, ideally with a description of why. -->\n\n[More Information Needed]\n\n### Results\n\n[More Information Needed]\n\n#### Summary\n\n\n\n## Model Examination [optional]\n\n<!-- Relevant interpretability work for the model goes here -->\n\n[More Information Needed]\n\n## Environmental Impact\n\n<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->\n\nCarbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).\n\n- **Hardware Type:** [More Information Needed]\n- **Hours used:** [More Information Needed]\n- **Cloud Provider:** [More Information Needed]\n- **Compute Region:** [More Information Needed]\n- **Carbon Emitted:** [More Information Needed]\n\n## Technical Specifications [optional]\n\n### Model Architecture and Objective\n\n[More Information Needed]\n\n### Compute Infrastructure\n\n[More Information Needed]\n\n#### Hardware\n\n[More Information Needed]\n\n#### Software\n\n[More Information Needed]\n\n## Citation [optional]\n\n<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->\n\n**BibTeX:**\n\n[More Information Needed]\n\n**APA:**\n\n[More Information Needed]\n\n## Glossary [optional]\n\n<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->\n\n[More Information Needed]\n\n## More Information [optional]\n\n[More Information Needed]\n\n## Model Card Authors [optional]\n\n[More Information Needed]\n\n## Model Card Contact\n\n[More Information Needed]\n### Framework versions\n\n- PEFT 0.15.2",
"metadata": "\"N/A\"",
"depth": 1,
"children": [],
"children_count": 0,
"adapters": [],
"adapters_count": 0,
"quantized": [],
"quantized_count": 0,
"merges": [],
"merges_count": 0,
"total_derivatives": 0,
"spaces": [],
"spaces_count": 0,
"parents": [
"HuggingFaceTB/SmolVLM-500M-Instruct"
],
"base_model": "samlucas/smolvlm_500m-parking_occupancy-CNRPark-instruct-without-context-with-expert",
"base_model_relation": "base"
},
{
"model_id": "samlucas/smolvlm_500m-parking_occupancy-CNRPark-instruct-without-context-without-expert",
"gated": "False",
"card": "---\nbase_model: HuggingFaceTB/SmolVLM-500M-Instruct\nlibrary_name: peft\n---\n\n# Model Card for Model ID\n\n<!-- Provide a quick summary of what the model is/does. -->\n\n\n\n## Model Details\n\n### Model Description\n\n<!-- Provide a longer summary of what this model is. -->\n\n\n\n- **Developed by:** [More Information Needed]\n- **Funded by [optional]:** [More Information Needed]\n- **Shared by [optional]:** [More Information Needed]\n- **Model type:** [More Information Needed]\n- **Language(s) (NLP):** [More Information Needed]\n- **License:** [More Information Needed]\n- **Finetuned from model [optional]:** [More Information Needed]\n\n### Model Sources [optional]\n\n<!-- Provide the basic links for the model. -->\n\n- **Repository:** [More Information Needed]\n- **Paper [optional]:** [More Information Needed]\n- **Demo [optional]:** [More Information Needed]\n\n## Uses\n\n<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->\n\n### Direct Use\n\n<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->\n\n[More Information Needed]\n\n### Downstream Use [optional]\n\n<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->\n\n[More Information Needed]\n\n### Out-of-Scope Use\n\n<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->\n\n[More Information Needed]\n\n## Bias, Risks, and Limitations\n\n<!-- This section is meant to convey both technical and sociotechnical limitations. -->\n\n[More Information Needed]\n\n### Recommendations\n\n<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations.\n\n## How to Get Started with the Model\n\nUse the code below to get started with the model.\n\n[More Information Needed]\n\n## Training Details\n\n### Training Data\n\n<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->\n\n[More Information Needed]\n\n### Training Procedure\n\n<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->\n\n#### Preprocessing [optional]\n\n[More Information Needed]\n\n\n#### Training Hyperparameters\n\n- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->\n\n#### Speeds, Sizes, Times [optional]\n\n<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->\n\n[More Information Needed]\n\n## Evaluation\n\n<!-- This section describes the evaluation protocols and provides the results. -->\n\n### Testing Data, Factors & Metrics\n\n#### Testing Data\n\n<!-- This should link to a Dataset Card if possible. -->\n\n[More Information Needed]\n\n#### Factors\n\n<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->\n\n[More Information Needed]\n\n#### Metrics\n\n<!-- These are the evaluation metrics being used, ideally with a description of why. -->\n\n[More Information Needed]\n\n### Results\n\n[More Information Needed]\n\n#### Summary\n\n\n\n## Model Examination [optional]\n\n<!-- Relevant interpretability work for the model goes here -->\n\n[More Information Needed]\n\n## Environmental Impact\n\n<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly -->\n\nCarbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).\n\n- **Hardware Type:** [More Information Needed]\n- **Hours used:** [More Information Needed]\n- **Cloud Provider:** [More Information Needed]\n- **Compute Region:** [More Information Needed]\n- **Carbon Emitted:** [More Information Needed]\n\n## Technical Specifications [optional]\n\n### Model Architecture and Objective\n\n[More Information Needed]\n\n### Compute Infrastructure\n\n[More Information Needed]\n\n#### Hardware\n\n[More Information Needed]\n\n#### Software\n\n[More Information Needed]\n\n## Citation [optional]\n\n<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->\n\n**BibTeX:**\n\n[More Information Needed]\n\n**APA:**\n\n[More Information Needed]\n\n## Glossary [optional]\n\n<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->\n\n[More Information Needed]\n\n## More Information [optional]\n\n[More Information Needed]\n\n## Model Card Authors [optional]\n\n[More Information Needed]\n\n## Model Card Contact\n\n[More Information Needed]\n### Framework versions\n\n- PEFT 0.15.2",
"metadata": "\"N/A\"",
"depth": 1,
"children": [],
"children_count": 0,
"adapters": [],
"adapters_count": 0,
"quantized": [],
"quantized_count": 0,
"merges": [],
"merges_count": 0,
"total_derivatives": 0,
"spaces": [],
"spaces_count": 0,
"parents": [
"HuggingFaceTB/SmolVLM-500M-Instruct"
],
"base_model": "samlucas/smolvlm_500m-parking_occupancy-CNRPark-instruct-without-context-without-expert",
"base_model_relation": "base"
},
{
"model_id": "bilal1998/SmolVLM-500M-Instruct-vqav2",
"gated": "unknown",
"card": "---\nlibrary_name: peft\nlicense: apache-2.0\nbase_model: HuggingFaceTB/SmolVLM-500M-Instruct\ntags:\n- generated_from_trainer\nmodel-index:\n- name: SmolVLM-500M-Instruct-vqav2\n results: []\n---\n\n<!-- This model card has been generated automatically according to the information the Trainer had access to. You\nshould probably proofread and complete it, then remove this comment. -->\n\n# SmolVLM-500M-Instruct-vqav2\n\nThis model is a fine-tuned version of [HuggingFaceTB/SmolVLM-500M-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM-500M-Instruct) on the None dataset.\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 8\n- eval_batch_size: 16\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 32\n- optimizer: Use paged_adamw_8bit with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 50\n- num_epochs: 100\n\n### Training results\n\n\n\n### Framework versions\n\n- PEFT 0.15.2\n- Transformers 4.52.4\n- Pytorch 2.7.1+cu126\n- Datasets 3.6.0\n- Tokenizers 0.21.1",
"metadata": "\"N/A\"",
"depth": 1,
"children": [],
"children_count": 0,
"adapters": [],
"adapters_count": 0,
"quantized": [],
"quantized_count": 0,
"merges": [],
"merges_count": 0,
"total_derivatives": 0,
"spaces": [],
"spaces_count": 0,
"parents": [
"HuggingFaceTB/SmolVLM-500M-Instruct"
],
"base_model": null,
"base_model_relation": null
},
{
"model_id": "HuggingFaceTB/SmolVLM2-500M-Video-Instruct",
"gated": "False",
"card": "---\nlibrary_name: transformers\nlicense: apache-2.0\ndatasets:\n- HuggingFaceM4/the_cauldron\n- HuggingFaceM4/Docmatix\n- lmms-lab/LLaVA-OneVision-Data\n- lmms-lab/M4-Instruct-Data\n- HuggingFaceFV/finevideo\n- MAmmoTH-VL/MAmmoTH-VL-Instruct-12M\n- lmms-lab/LLaVA-Video-178K\n- orrzohar/Video-STaR\n- Mutonix/Vript\n- TIGER-Lab/VISTA-400K\n- Enxin/MovieChat-1K_train\n- ShareGPT4Video/ShareGPT4Video\npipeline_tag: image-text-to-text\nlanguage:\n- en\nbase_model:\n- HuggingFaceTB/SmolVLM-500M-Instruct\n---\n\n<img src=\"https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/SmolVLM2_banner.png\" width=\"800\" height=\"auto\" alt=\"Image description\">\n\n# SmolVLM2-500M-Video\n\nSmolVLM2-500M-Video is a lightweight multimodal model designed to analyze video content. The model processes videos, images, and text inputs to generate text outputs - whether answering questions about media files, comparing visual content, or transcribing text from images. Despite its compact size, requiring only 1.8GB of GPU RAM for video inference, it delivers robust performance on complex multimodal tasks. This efficiency makes it particularly well-suited for on-device applications where computational resources may be limited.\n## Model Summary\n\n- **Developed by:** Hugging Face \ud83e\udd17\n- **Model type:** Multi-modal model (image/multi-image/video/text)\n- **Language(s) (NLP):** English\n- **License:** Apache 2.0\n- **Architecture:** Based on [Idefics3](https://huggingface.co/HuggingFaceM4/Idefics3-8B-Llama3) (see technical summary)\n\n## Resources\n\n- **Demo:** [Video Highlight Generator](https://huggingface.co/spaces/HuggingFaceTB/SmolVLM2-HighlightGenerator)\n- **Blog:** [Blog post](https://huggingface.co/blog/smolvlm2)\n\n## Uses\n\nSmolVLM2 can be used for inference on multimodal (video / image / text) tasks where the input consists of text queries along with video or one or more images. 
Text and media files can be interleaved arbitrarily, enabling tasks like captioning, visual question answering, and storytelling based on visual content. The model does not support image or video generation.\n\nTo fine-tune SmolVLM2 on a specific task, you can follow [the fine-tuning tutorial](https://github.com/huggingface/smollm/blob/main/vision/finetuning/Smol_VLM_FT.ipynb).\n\n## Evaluation \n\nWe evaluated the performance of the SmolVLM2 family on the following scientific benchmarks:\n\n| Size | Video-MME | MLVU | MVBench |\n|----------|-----------------|----------|---------------|\n| 2.2B | 52.1 | 55.2 | 46.27 |\n| 500M | 42.2 | 47.3 | 39.73 |\n| 256M | 33.7 | 40.6 | 32.7 |\n\n\n### How to get started\n\nYou can use transformers to load, infer and fine-tune SmolVLM. Make sure you have num2words, flash-attn and latest transformers installed.\nYou can load the model as follows.\n\n```python\nfrom transformers import AutoProcessor, AutoModelForImageTextToText\nimport torch\n\nmodel_path = \"HuggingFaceTB/SmolVLM2-500M-Video-Instruct\"\nprocessor = AutoProcessor.from_pretrained(model_path)\nmodel = AutoModelForImageTextToText.from_pretrained(\n model_path,\n torch_dtype=torch.bfloat16,\n _attn_implementation=\"flash_attention_2\"\n).to(\"cuda\")\n```\n\n#### Simple Inference\n\nYou preprocess your inputs directly using chat templates and directly passing them \n\n```python\nmessages = [\n {\n \"role\": \"user\",\n \"content\": [\n {\"type\": \"image\", \"url\": \"https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/bee.jpg\"},\n {\"type\": \"text\", \"text\": \"Can you describe this image?\"},\n ]\n },\n]\n\ninputs = processor.apply_chat_template(\n messages,\n add_generation_prompt=True,\n tokenize=True,\n return_dict=True,\n return_tensors=\"pt\",\n).to(model.device, dtype=torch.bfloat16)\n\ngenerated_ids = model.generate(**inputs, do_sample=False, max_new_tokens=64)\ngenerated_texts = processor.batch_decode(\n generated_ids,\n 
skip_special_tokens=True,\n)\nprint(generated_texts[0])\n```\n\n#### Video Inference\n\nTo use SmolVLM2 for video inference, make sure you have decord installed. \n\n```python\nmessages = [\n {\n \"role\": \"user\",\n \"content\": [\n {\"type\": \"video\", \"path\": \"path_to_video.mp4\"},\n {\"type\": \"text\", \"text\": \"Describe this video in detail\"}\n ]\n },\n]\n\ninputs = processor.apply_chat_template(\n messages,\n add_generation_prompt=True,\n tokenize=True,\n return_dict=True,\n return_tensors=\"pt\",\n).to(model.device, dtype=torch.bfloat16)\n\ngenerated_ids = model.generate(**inputs, do_sample=False, max_new_tokens=64)\ngenerated_texts = processor.batch_decode(\n generated_ids,\n skip_special_tokens=True,\n)\n\nprint(generated_texts[0])\n```\n#### Multi-image Interleaved Inference\n\nYou can interleave multiple media with text using chat templates.\n\n```python\nimport torch\n\n\nmessages = [\n {\n \"role\": \"user\",\n \"content\": [\n {\"type\": \"text\", \"text\": \"What is the similarity between these two images?\"},\n {\"type\": \"image\", \"url\": \"https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/bee.jpg\"},\n {\"type\": \"image\", \"url\": \"https://huggingface.co/datasets/huggingface/documentation-images/resolve/0052a70beed5bf71b92610a43a52df6d286cd5f3/diffusers/rabbit.jpg\"}, \n ]\n },\n]\n\ninputs = processor.apply_chat_template(\n messages,\n add_generation_prompt=True,\n tokenize=True,\n return_dict=True,\n return_tensors=\"pt\",\n).to(model.device, dtype=torch.bfloat16)\n\ngenerated_ids = model.generate(**inputs, do_sample=False, max_new_tokens=64)\ngenerated_texts = processor.batch_decode(\n generated_ids,\n skip_special_tokens=True,\n)\nprint(generated_texts[0])\n```\n\n\n### Model optimizations\n\n## Misuse and Out-of-scope Use\n\nSmolVLM is not intended for high-stakes scenarios or critical decision-making processes that affect an individual's well-being or livelihood. 
The model may produce content that appears factual but may not be accurate. Misuse includes, but is not limited to:\n\n- Prohibited Uses:\n - Evaluating or scoring individuals (e.g., in employment, education, credit)\n - Critical automated decision-making\n - Generating unreliable factual content\n- Malicious Activities:\n - Spam generation\n - Disinformation campaigns\n - Harassment or abuse\n - Unauthorized surveillance\n\n### License\n\nSmolVLM2 is built upon [SigLIP](https://huggingface.co/google/siglip-base-patch16-512) as image encoder and [SmolLM2](https://huggingface.co/HuggingFaceTB/SmolLM2-360M-Instruct) for text decoder part.\n\nWe release the SmolVLM2 checkpoints under the Apache 2.0 license.\n\n## Citation information\nYou can cite us in the following way:\n```bibtex\n@article{marafioti2025smolvlm,\n title={SmolVLM: Redefining small and efficient multimodal models}, \n author={Andr\u00e9s Marafioti and Orr Zohar and Miquel Farr\u00e9 and Merve Noyan and Elie Bakouch and Pedro Cuenca and Cyril Zakka and Loubna Ben Allal and Anton Lozhkov and Nouamane Tazi and Vaibhav Srivastav and Joshua Lochner and Hugo Larcher and Mathieu Morlon and Lewis Tunstall and Leandro von Werra and Thomas Wolf},\n journal={arXiv preprint arXiv:2504.05299},\n year={2025}\n}\n```\n\n## Training Data\nSmolVLM2 used 3.3M samples for training originally from ten different datasets: [LlaVa Onevision](https://huggingface.co/datasets/lmms-lab/LLaVA-OneVision-Data), [M4-Instruct](https://huggingface.co/datasets/lmms-lab/M4-Instruct-Data), [Mammoth](https://huggingface.co/datasets/MAmmoTH-VL/MAmmoTH-VL-Instruct-12M), [LlaVa Video 178K](https://huggingface.co/datasets/lmms-lab/LLaVA-Video-178K), [FineVideo](https://huggingface.co/datasets/HuggingFaceFV/finevideo), [VideoStar](https://huggingface.co/datasets/orrzohar/Video-STaR), [VRipt](https://huggingface.co/datasets/Mutonix/Vript), [Vista-400K](https://huggingface.co/datasets/TIGER-Lab/VISTA-400K), 
[MovieChat](https://huggingface.co/datasets/Enxin/MovieChat-1K_train) and [ShareGPT4Video](https://huggingface.co/datasets/ShareGPT4Video/ShareGPT4Video).\nIn the following plots we give a general overview of the samples across modalities and the source of those samples.\n<!--\n<center><img src=\"https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/smolvlm2_data_split.png\" width=\"auto\" height=\"auto\" alt=\"Image description\">\n</center>\n\n### Details\n<img src=\"https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/smolvlm2_datadetails.png\" width=\"auto\" height=\"auto\" alt=\"Image description\"> -->\n\n## Data Split per modality\n\n| Data Type | Percentage |\n|--------------|------------|\n| Image | 34.4% |\n| Text | 20.2% |\n| Video | 33.0% |\n| Multi-image | 12.3% |\n\n\n## Granular dataset slices per modality\n\n### Text Datasets\n| Dataset | Percentage |\n|--------------------------------------------|------------|\n| llava-onevision/magpie_pro_ft3_80b_mt | 6.8% |\n| llava-onevision/magpie_pro_ft3_80b_tt | 6.8% |\n| llava-onevision/magpie_pro_qwen2_72b_tt | 5.8% |\n| llava-onevision/mathqa | 0.9% |\n\n### Multi-image Datasets\n| Dataset | Percentage |\n|--------------------------------------------|------------|\n| m4-instruct-data/m4_instruct_multiimage | 10.4% |\n| mammoth/multiimage-cap6 | 1.9% |\n\n### Image Datasets\n| Dataset | Percentage |\n|--------------------------------------------|------------|\n| llava-onevision/other | 17.4% |\n| llava-onevision/vision_flan | 3.9% |\n| llava-onevision/mavis_math_metagen | 2.6% |\n| llava-onevision/mavis_math_rule_geo | 2.5% |\n| llava-onevision/sharegpt4o | 1.7% |\n| llava-onevision/sharegpt4v_coco | 1.5% |\n| llava-onevision/image_textualization | 1.3% |\n| llava-onevision/sharegpt4v_llava | 0.9% |\n| llava-onevision/mapqa | 0.9% |\n| llava-onevision/qa | 0.8% |\n| llava-onevision/textocr | 0.8% |\n\n### Video Datasets\n| Dataset | Percentage 
|\n|--------------------------------------------|------------|\n| llava-video-178k/1-2m | 7.3% |\n| llava-video-178k/2-3m | 7.0% |\n| other-video/combined | 5.7% |\n| llava-video-178k/hound | 4.4% |\n| llava-video-178k/0-30s | 2.4% |\n| video-star/starb | 2.2% |\n| vista-400k/combined | 2.2% |\n| vript/long | 1.0% |\n| ShareGPT4Video/all | 0.8% |\n",
"metadata": "\"N/A\"",
"depth": 1,
"children": [
"mfarre/SmolVLM2-500M-Video-Instruct-emotions",
"merve/SmolVLM2-500M-Video-Instruct-emotions",
"merve/SmolVLM2-500M-Video-Instruct-videofeedback",
"merve/SmolVLM2-500M-Video-Instruct-video-feedback",
"AeonOmniverse/SmolVLM2-500M-Video-Instruct-video-feedback",
"mpnikhil/SmolVLM2-500M-Video-Instruct-mpnikhil1",
"Karthick2020/SmolVLM2-500M-Video-Instruct-video-feedback",
"unreservedusername/SmolVLM2-500M-Video-Instruct-video-feedback",
"badger-lord/SmolVLM2-500M-Video-Instruct-video-feedback",
"sevimcengiz/SmolVLM2-500M-Video-Instruct-video-feedback",
"Arnav0400/SmolVLM2-500M-Video-Instruct-video-feedback",
"superenghb/SmolVLM2-500M-Video-Instruct-video-feedback",
"mosherosen/SmolVLM2-500M-Video-Instruct-video-feedback",
"lukesutor/SmolVLM-500M-ActivityTracking",
"mlevytskyi/SmolVLM2-500M-Video-Instruct-video-feedback",
"AFZAL0008/SmolVLM2-500M-Video-Instruct-video-feedback",
"mlevytskyi/SmolVLM2-500M-Video-Instruct-coco-kaggle",
"liuhuanjim013/SmolVLM2-500M-Video-Instruct-video-feedback",
"MRIII0917/SmolVLM2-500M-Video-Instruct-video-feedback",
"huggingFaceOfNabil/SmolVLM2-500M-Video-Instruct-dense",
"rainorangelemon2/smolvlm-instruct-trl-sft-ChartQA",
"rainorangelemon2/smolgemma-waymo-stage-1",
"rainorangelemon2/smolgemma-waymo-stage-2"
],
"children_count": 23,
"adapters": [
"GKC96/SmolVLM2-500M-Video-Instruct-video-qna",
"xco2/smolvlm2-500M-illustration-description"
],
"adapters_count": 2,
"quantized": [
"ggml-org/SmolVLM2-500M-Video-Instruct-GGUF",
"mradermacher/SmolVLM2-500M-Video-Instruct-GGUF",
"mradermacher/SmolVLM2-500M-Video-Instruct-i1-GGUF",
"second-state/SmolVLM2-500M-Video-Instruct-GGUF",
"gaianet/SmolVLM2-500M-Video-Instruct-GGUF",
"DevQuasar/HuggingFaceTB.SmolVLM2-500M-Video-Instruct-GGUF",
"AXERA-TECH/SmolVLM2-500M-Video-Instruct"
],
"quantized_count": 7,
"merges": [],
"merges_count": 0,
"total_derivatives": 32,
"spaces": [],
"spaces_count": 0,
"parents": [
"HuggingFaceTB/SmolVLM-500M-Instruct"
],
"base_model": "HuggingFaceTB/SmolVLM2-500M-Video-Instruct",
"base_model_relation": "base"
},
{
"model_id": "moot20/SmolVLM-500M-Instruct-MLX-4bits",
"gated": "False",
"card": "---\nlibrary_name: transformers\nlicense: apache-2.0\ndatasets:\n- HuggingFaceM4/the_cauldron\n- HuggingFaceM4/Docmatix\npipeline_tag: image-text-to-text\nlanguage:\n- en\nbase_model:\n- HuggingFaceTB/SmolVLM-500M-Instruct\nbase_model_relation: quantized\ntags:\n- mlx\n---\n\n# moot20/SmolVLM-500M-Instruct-MLX-4bits\nThis model was converted to MLX format from [`HuggingFaceTB/SmolVLM-500M-Instruct`]() using mlx-vlm version **0.1.12**.\nRefer to the [original model card](https://huggingface.co/HuggingFaceTB/SmolVLM-500M-Instruct) for more details on the model.\n## Use with mlx\n\n```bash\npip install -U mlx-vlm\n```\n\n```bash\npython -m mlx_vlm.generate --model moot20/SmolVLM-500M-Instruct-MLX-4bits --max-tokens 100 --temp 0.0 --prompt \"Describe this image.\" --image <path_to_image>\n```\n",
"metadata": "\"N/A\"",
"depth": 1,
"children": [],
"children_count": 0,
"adapters": [],
"adapters_count": 0,
"quantized": [],
"quantized_count": 0,
"merges": [],
"merges_count": 0,
"total_derivatives": 0,
"spaces": [],
"spaces_count": 0,
"parents": [
"HuggingFaceTB/SmolVLM-500M-Instruct"
],
"base_model": "moot20/SmolVLM-500M-Instruct-MLX",
"base_model_relation": "finetune"
},
{
"model_id": "moot20/SmolVLM-500M-Instruct-MLX-6bits",
"gated": "False",
"card": "---\nlibrary_name: transformers\nlicense: apache-2.0\ndatasets:\n- HuggingFaceM4/the_cauldron\n- HuggingFaceM4/Docmatix\npipeline_tag: image-text-to-text\nlanguage:\n- en\nbase_model:\n- HuggingFaceTB/SmolVLM-500M-Instruct\nbase_model_relation: quantized\ntags:\n- mlx\n---\n\n# moot20/SmolVLM-500M-Instruct-MLX-6bits\nThis model was converted to MLX format from [`HuggingFaceTB/SmolVLM-500M-Instruct`]() using mlx-vlm version **0.1.12**.\nRefer to the [original model card](https://huggingface.co/HuggingFaceTB/SmolVLM-500M-Instruct) for more details on the model.\n## Use with mlx\n\n```bash\npip install -U mlx-vlm\n```\n\n```bash\npython -m mlx_vlm.generate --model moot20/SmolVLM-500M-Instruct-MLX-6bits --max-tokens 100 --temp 0.0 --prompt \"Describe this image.\" --image <path_to_image>\n```\n",
"metadata": "\"N/A\"",
"depth": 1,
"children": [],
"children_count": 0,
"adapters": [],
"adapters_count": 0,
"quantized": [],
"quantized_count": 0,
"merges": [],
"merges_count": 0,
"total_derivatives": 0,
"spaces": [],
"spaces_count": 0,
"parents": [
"HuggingFaceTB/SmolVLM-500M-Instruct"
],
"base_model": "moot20/SmolVLM-500M-Instruct-MLX",
"base_model_relation": "finetune"
},
{
"model_id": "moot20/SmolVLM-500M-Instruct-MLX-8bits",
"gated": "False",
"card": "---\nlibrary_name: transformers\nlicense: apache-2.0\ndatasets:\n- HuggingFaceM4/the_cauldron\n- HuggingFaceM4/Docmatix\npipeline_tag: image-text-to-text\nlanguage:\n- en\nbase_model:\n- HuggingFaceTB/SmolVLM-500M-Instruct\nbase_model_relation: quantized\ntags:\n- mlx\n---\n\n# moot20/SmolVLM-500M-Instruct-MLX-8bits\nThis model was converted to MLX format from [`HuggingFaceTB/SmolVLM-500M-Instruct`]() using mlx-vlm version **0.1.12**.\nRefer to the [original model card](https://huggingface.co/HuggingFaceTB/SmolVLM-500M-Instruct) for more details on the model.\n## Use with mlx\n\n```bash\npip install -U mlx-vlm\n```\n\n```bash\npython -m mlx_vlm.generate --model moot20/SmolVLM-500M-Instruct-MLX-8bits --max-tokens 100 --temp 0.0 --prompt \"Describe this image.\" --image <path_to_image>\n```\n",
"metadata": "\"N/A\"",
"depth": 1,
"children": [],
"children_count": 0,
"adapters": [],
"adapters_count": 0,
"quantized": [],
"quantized_count": 0,
"merges": [],
"merges_count": 0,
"total_derivatives": 0,
"spaces": [],
"spaces_count": 0,
"parents": [
"HuggingFaceTB/SmolVLM-500M-Instruct"
],
"base_model": "moot20/SmolVLM-500M-Instruct-MLX",
"base_model_relation": "finetune"
},
{
"model_id": "moot20/SmolVLM-500M-Instruct-MLX",
"gated": "unknown",
"card": "---\nlibrary_name: transformers\nlicense: apache-2.0\ndatasets:\n- HuggingFaceM4/the_cauldron\n- HuggingFaceM4/Docmatix\npipeline_tag: image-text-to-text\nlanguage:\n- en\nbase_model:\n- HuggingFaceTB/SmolVLM-500M-Instruct\nbase_model_relation: quantized\ntags:\n- mlx\n---\n\n# moot20/SmolVLM-500M-Instruct-MLX\nThis model was converted to MLX format from [`HuggingFaceTB/SmolVLM-500M-Instruct`]() using mlx-vlm version **0.1.12**.\nRefer to the [original model card](https://huggingface.co/HuggingFaceTB/SmolVLM-500M-Instruct) for more details on the model.\n## Use with mlx\n\n```bash\npip install -U mlx-vlm\n```\n\n```bash\npython -m mlx_vlm.generate --model moot20/SmolVLM-500M-Instruct-MLX --max-tokens 100 --temp 0.0 --prompt \"Describe this image.\" --image <path_to_image>\n```\n",
"metadata": "\"N/A\"",
"depth": 1,
"children": [],
"children_count": 0,
"adapters": [],
"adapters_count": 0,
"quantized": [],
"quantized_count": 0,
"merges": [],
"merges_count": 0,
"total_derivatives": 0,
"spaces": [],
"spaces_count": 0,
"parents": [
"HuggingFaceTB/SmolVLM-500M-Instruct"
],
"base_model": null,
"base_model_relation": null
},
{
"model_id": "ggml-org/SmolVLM-500M-Instruct-GGUF",
"gated": "False",
"card": "---\nlicense: apache-2.0\nbase_model: HuggingFaceTB/SmolVLM-500M-Instruct\n---\n\n# SmolVLM-500M-Instruct\n\nOriginal model: https://huggingface.co/HuggingFaceTB/SmolVLM-500M-Instruct\n\nFor more info, please refer to this PR: https://github.com/ggml-org/llama.cpp/pull/13050\n\n",
"metadata": "\"N/A\"",
"depth": 1,
"children": [],
"children_count": 0,
"adapters": [],
"adapters_count": 0,
"quantized": [],
"quantized_count": 0,
"merges": [],
"merges_count": 0,
"total_derivatives": 0,
"spaces": [],
"spaces_count": 0,
"parents": [
"HuggingFaceTB/SmolVLM-500M-Instruct"
],
"base_model": "ggml-org/SmolVLM-500M-Instruct-GGUF",
"base_model_relation": "base"
},
{
"model_id": "mradermacher/SmolVLM-500M-Instruct-GGUF",
"gated": "False",
"card": "---\nbase_model: HuggingFaceTB/SmolVLM-500M-Instruct\ndatasets:\n- HuggingFaceM4/the_cauldron\n- HuggingFaceM4/Docmatix\nlanguage:\n- en\nlibrary_name: transformers\nlicense: apache-2.0\nquantized_by: mradermacher\n---\n## About\n\n<!-- ### quantize_version: 2 -->\n<!-- ### output_tensor_quantised: 1 -->\n<!-- ### convert_type: hf -->\n<!-- ### vocab_type: -->\n<!-- ### tags: -->\nstatic quants of https://huggingface.co/HuggingFaceTB/SmolVLM-500M-Instruct\n\n<!-- provided-files -->\nweighted/imatrix quants are available at https://huggingface.co/mradermacher/SmolVLM-500M-Instruct-i1-GGUF\n## Usage\n\nIf you are unsure how to use GGUF files, refer to one of [TheBloke's\nREADMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for\nmore details, including on how to concatenate multi-part files.\n\n## Provided Quants\n\n(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)\n\n| Link | Type | Size/GB | Notes |\n|:-----|:-----|--------:|:------|\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM-500M-Instruct-GGUF/resolve/main/SmolVLM-500M-Instruct.Q2_K.gguf) | Q2_K | 0.3 | |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM-500M-Instruct-GGUF/resolve/main/SmolVLM-500M-Instruct.Q3_K_S.gguf) | Q3_K_S | 0.3 | |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM-500M-Instruct-GGUF/resolve/main/SmolVLM-500M-Instruct.IQ4_XS.gguf) | IQ4_XS | 0.4 | |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM-500M-Instruct-GGUF/resolve/main/SmolVLM-500M-Instruct.Q3_K_M.gguf) | Q3_K_M | 0.4 | lower quality |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM-500M-Instruct-GGUF/resolve/main/SmolVLM-500M-Instruct.Q3_K_L.gguf) | Q3_K_L | 0.4 | |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM-500M-Instruct-GGUF/resolve/main/SmolVLM-500M-Instruct.Q4_K_S.gguf) | Q4_K_S | 0.4 | fast, recommended |\n| 
[GGUF](https://huggingface.co/mradermacher/SmolVLM-500M-Instruct-GGUF/resolve/main/SmolVLM-500M-Instruct.Q4_K_M.gguf) | Q4_K_M | 0.4 | fast, recommended |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM-500M-Instruct-GGUF/resolve/main/SmolVLM-500M-Instruct.Q5_K_S.gguf) | Q5_K_S | 0.4 | |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM-500M-Instruct-GGUF/resolve/main/SmolVLM-500M-Instruct.Q5_K_M.gguf) | Q5_K_M | 0.4 | |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM-500M-Instruct-GGUF/resolve/main/SmolVLM-500M-Instruct.Q6_K.gguf) | Q6_K | 0.5 | very good quality |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM-500M-Instruct-GGUF/resolve/main/SmolVLM-500M-Instruct.Q8_0.gguf) | Q8_0 | 0.5 | fast, best quality |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM-500M-Instruct-GGUF/resolve/main/SmolVLM-500M-Instruct.f16.gguf) | f16 | 0.9 | 16 bpw, overkill |\n\nHere is a handy graph by ikawrakow comparing some lower-quality quant\ntypes (lower is better):\n\n\n\nAnd here are Artefact2's thoughts on the matter:\nhttps://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9\n\n## FAQ / Model Request\n\nSee https://huggingface.co/mradermacher/model_requests for some answers to\nquestions you might have and/or if you want some other model quantized.\n\n## Thanks\n\nI thank my company, [nethype GmbH](https://www.nethype.de/), for letting\nme use its servers and providing upgrades to my workstation to enable\nthis work in my free time.\n\n<!-- end -->\n",
"metadata": "\"N/A\"",
"depth": 1,
"children": [],
"children_count": 0,
"adapters": [],
"adapters_count": 0,
"quantized": [],
"quantized_count": 0,
"merges": [],
"merges_count": 0,
"total_derivatives": 0,
"spaces": [],
"spaces_count": 0,
"parents": [
"HuggingFaceTB/SmolVLM-500M-Instruct"
],
"base_model": "mradermacher/SmolVLM-500M-Instruct-GGUF",
"base_model_relation": "base"
},
{
"model_id": "mradermacher/SmolVLM-500M-Instruct-i1-GGUF",
"gated": "False",
"card": "---\nbase_model: HuggingFaceTB/SmolVLM-500M-Instruct\ndatasets:\n- HuggingFaceM4/the_cauldron\n- HuggingFaceM4/Docmatix\nlanguage:\n- en\nlibrary_name: transformers\nlicense: apache-2.0\nquantized_by: mradermacher\n---\n## About\n\n<!-- ### quantize_version: 2 -->\n<!-- ### output_tensor_quantised: 1 -->\n<!-- ### convert_type: hf -->\n<!-- ### vocab_type: -->\n<!-- ### tags: nicoboss -->\nweighted/imatrix quants of https://huggingface.co/HuggingFaceTB/SmolVLM-500M-Instruct\n\n<!-- provided-files -->\nstatic quants are available at https://huggingface.co/mradermacher/SmolVLM-500M-Instruct-GGUF\n## Usage\n\nIf you are unsure how to use GGUF files, refer to one of [TheBloke's\nREADMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for\nmore details, including on how to concatenate multi-part files.\n\n## Provided Quants\n\n(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)\n\n| Link | Type | Size/GB | Notes |\n|:-----|:-----|--------:|:------|\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM-500M-Instruct-i1-GGUF/resolve/main/SmolVLM-500M-Instruct.i1-IQ1_S.gguf) | i1-IQ1_S | 0.3 | for the desperate |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM-500M-Instruct-i1-GGUF/resolve/main/SmolVLM-500M-Instruct.i1-IQ1_M.gguf) | i1-IQ1_M | 0.3 | mostly desperate |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM-500M-Instruct-i1-GGUF/resolve/main/SmolVLM-500M-Instruct.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.3 | |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM-500M-Instruct-i1-GGUF/resolve/main/SmolVLM-500M-Instruct.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.3 | |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM-500M-Instruct-i1-GGUF/resolve/main/SmolVLM-500M-Instruct.i1-IQ2_S.gguf) | i1-IQ2_S | 0.3 | |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM-500M-Instruct-i1-GGUF/resolve/main/SmolVLM-500M-Instruct.i1-IQ2_M.gguf) | i1-IQ2_M | 0.3 | |\n| 
[GGUF](https://huggingface.co/mradermacher/SmolVLM-500M-Instruct-i1-GGUF/resolve/main/SmolVLM-500M-Instruct.i1-Q2_K_S.gguf) | i1-Q2_K_S | 0.3 | very low quality |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM-500M-Instruct-i1-GGUF/resolve/main/SmolVLM-500M-Instruct.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.3 | lower quality |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM-500M-Instruct-i1-GGUF/resolve/main/SmolVLM-500M-Instruct.i1-IQ3_S.gguf) | i1-IQ3_S | 0.3 | beats Q3_K* |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM-500M-Instruct-i1-GGUF/resolve/main/SmolVLM-500M-Instruct.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.3 | |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM-500M-Instruct-i1-GGUF/resolve/main/SmolVLM-500M-Instruct.i1-Q2_K.gguf) | i1-Q2_K | 0.3 | IQ3_XXS probably better |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM-500M-Instruct-i1-GGUF/resolve/main/SmolVLM-500M-Instruct.i1-Q3_K_S.gguf) | i1-Q3_K_S | 0.3 | IQ3_XS probably better |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM-500M-Instruct-i1-GGUF/resolve/main/SmolVLM-500M-Instruct.i1-IQ3_M.gguf) | i1-IQ3_M | 0.4 | |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM-500M-Instruct-i1-GGUF/resolve/main/SmolVLM-500M-Instruct.i1-IQ4_XS.gguf) | i1-IQ4_XS | 0.4 | |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM-500M-Instruct-i1-GGUF/resolve/main/SmolVLM-500M-Instruct.i1-IQ4_NL.gguf) | i1-IQ4_NL | 0.4 | prefer IQ4_XS |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM-500M-Instruct-i1-GGUF/resolve/main/SmolVLM-500M-Instruct.i1-Q4_0.gguf) | i1-Q4_0 | 0.4 | fast, low quality |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM-500M-Instruct-i1-GGUF/resolve/main/SmolVLM-500M-Instruct.i1-Q3_K_M.gguf) | i1-Q3_K_M | 0.4 | IQ3_S probably better |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM-500M-Instruct-i1-GGUF/resolve/main/SmolVLM-500M-Instruct.i1-Q3_K_L.gguf) | i1-Q3_K_L | 0.4 | IQ3_M probably better |\n| 
[GGUF](https://huggingface.co/mradermacher/SmolVLM-500M-Instruct-i1-GGUF/resolve/main/SmolVLM-500M-Instruct.i1-Q4_1.gguf) | i1-Q4_1 | 0.4 | |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM-500M-Instruct-i1-GGUF/resolve/main/SmolVLM-500M-Instruct.i1-Q4_K_S.gguf) | i1-Q4_K_S | 0.4 | optimal size/speed/quality |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM-500M-Instruct-i1-GGUF/resolve/main/SmolVLM-500M-Instruct.i1-Q4_K_M.gguf) | i1-Q4_K_M | 0.4 | fast, recommended |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM-500M-Instruct-i1-GGUF/resolve/main/SmolVLM-500M-Instruct.i1-Q5_K_S.gguf) | i1-Q5_K_S | 0.4 | |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM-500M-Instruct-i1-GGUF/resolve/main/SmolVLM-500M-Instruct.i1-Q5_K_M.gguf) | i1-Q5_K_M | 0.4 | |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM-500M-Instruct-i1-GGUF/resolve/main/SmolVLM-500M-Instruct.i1-Q6_K.gguf) | i1-Q6_K | 0.5 | practically like static Q6_K |\n\nHere is a handy graph by ikawrakow comparing some lower-quality quant\ntypes (lower is better):\n\n\n\nAnd here are Artefact2's thoughts on the matter:\nhttps://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9\n\n## FAQ / Model Request\n\nSee https://huggingface.co/mradermacher/model_requests for some answers to\nquestions you might have and/or if you want some other model quantized.\n\n## Thanks\n\nI thank my company, [nethype GmbH](https://www.nethype.de/), for letting\nme use its servers and providing upgrades to my workstation to enable\nthis work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.\n\n<!-- end -->\n",
"metadata": "\"N/A\"",
"depth": 1,
"children": [],
"children_count": 0,
"adapters": [],
"adapters_count": 0,
"quantized": [],
"quantized_count": 0,
"merges": [],
"merges_count": 0,
"total_derivatives": 0,
"spaces": [],
"spaces_count": 0,
"parents": [
"HuggingFaceTB/SmolVLM-500M-Instruct"
],
"base_model": "mradermacher/SmolVLM-500M-Instruct-i1-GGUF",
"base_model_relation": "base"
},
{
"model_id": "VyoJ/SmolVLM-500M-Instruct-be-GGUF",
"gated": "unknown",
"card": "---\nlicense: apache-2.0\nlanguage:\n- en\nbase_model:\n- ggml-org/SmolVLM-500M-Instruct-GGUF\n- HuggingFaceTB/SmolVLM-500M-Instruct\npipeline_tag: image-text-to-text\nlibrary_name: transformers\n---\n\n# Model Information\n\nSmolVLM-500M is a tiny multimodal model by HuggingFace. It was converted to the GGUF format by ggml-org.\n\nI converted it to a big-endian format and uploaded for use on IBM z/OS machines.\n\n**Model developer**: HuggingFace\n\n**Model Architecture**: Based on Idefics3\n\n**License**: Apache 2.0\n\nFor more details on the model, please go to Meta's original [model card](https://huggingface.co/HuggingFaceTB/SmolVLM-500M-Instruct)",
"metadata": "\"N/A\"",
"depth": 1,
"children": [],
"children_count": 0,
"adapters": [],
"adapters_count": 0,
"quantized": [],
"quantized_count": 0,
"merges": [],
"merges_count": 0,
"total_derivatives": 0,
"spaces": [],
"spaces_count": 0,
"parents": [
"HuggingFaceTB/SmolVLM-500M-Instruct"
],
"base_model": null,
"base_model_relation": null
},
{
"model_id": "vidore/colSmol-500M",
"gated": "False",
"card": "---\nlicense: mit\nlibrary_name: colpali\nbase_model: vidore/ColSmolVLM-Instruct-500M\nlanguage:\n- en\ntags:\n- colsmolvlm\n- vidore-experimental\n- vidore\npipeline_tag: visual-document-retrieval\n---\n# ColSmolVLM-Instruct-500M: Visual Retriever based on SmolVLM-Instruct-500M with ColBERT strategy\n\n### This is a version trained with batch_size 32 for 3 epochs\n\nColSmolVLM is a model based on a novel model architecture and training strategy based on Vision Language Models (VLMs) to efficiently index documents from their visual features.\nIt is a SmolVLM extension that generates [ColBERT](https://arxiv.org/abs/2004.12832)- style multi-vector representations of text and images. \nIt was introduced in the paper [ColPali: Efficient Document Retrieval with Vision Language Models](https://arxiv.org/abs/2407.01449) and first released in [this repository](https://github.com/ManuelFay/colpali)\n\n<p align=\"center\"><img width=800 src=\"https://github.com/illuin-tech/colpali/blob/main/assets/colpali_architecture.webp?raw=true\"/></p>\n\n## Version specificity\n\nThis version is trained with the commit b983e40 of the Colpali repository. (main branch from the repo)\n\nData is the same as the ColPali data described in the paper.\n\n\n## Model Training\n\n### Dataset\nOur training dataset of 127,460 query-page pairs is comprised of train sets of openly available academic datasets (63%) and a synthetic dataset made up of pages from web-crawled PDF documents and augmented with VLM-generated (Claude-3 Sonnet) pseudo-questions (37%). \nOur training set is fully English by design, enabling us to study zero-shot generalization to non-English languages. We explicitly verify no multi-page PDF document is used both [*ViDoRe*](https://huggingface.co/collections/vidore/vidore-benchmark-667173f98e70a1c0fa4db00d) and in the train set to prevent evaluation contamination. 
\nA validation set is created with 2% of the samples to tune hyperparameters.\n\n*Note: Multilingual data is present in the pretraining corpus of the language model and most probably in the multimodal training.*\n\n### Parameters\n\nUnless specified otherwise, we train models in `bfloat16` format, use low-rank adapters ([LoRA](https://arxiv.org/abs/2106.09685)) \nwith `alpha=32` and `r=32` on the transformer layers from the language model, \nas well as the final randomly initialized projection layer, and use a `paged_adamw_8bit` optimizer. \nWe train on a 4 GPU setup with data parallelism, a learning rate of 5e-4 with linear decay with 2.5% warmup steps, and a batch size of 8.\n\n## Usage\n\nMake sure `colpali-engine` is installed from source or with a version greater than 0.3.5 (currently the main branch of the repo).\nThe `transformers` version must be > 4.46.2.\n\n```bash\npip install git+https://github.com/illuin-tech/colpali\n```\n\n```python\nimport torch\nfrom PIL import Image\n\nfrom colpali_engine.models import ColIdefics3, ColIdefics3Processor\n\nmodel = ColIdefics3.from_pretrained(\n \"vidore/colSmol-500M\",\n torch_dtype=torch.bfloat16,\n device_map=\"cuda:0\",\n attn_implementation=\"flash_attention_2\" # or eager\n ).eval()\nprocessor = ColIdefics3Processor.from_pretrained(\"vidore/colSmol-500M\")\n\n# Your inputs\nimages = [\n Image.new(\"RGB\", (32, 32), color=\"white\"),\n Image.new(\"RGB\", (16, 16), color=\"black\"),\n]\nqueries = [\n \"Is attention really all you need?\",\n \"What is the amount of bananas farmed in Salvador?\",\n]\n\n# Process the inputs\nbatch_images = processor.process_images(images).to(model.device)\nbatch_queries = processor.process_queries(queries).to(model.device)\n\n# Forward pass\nwith torch.no_grad():\n image_embeddings = model(**batch_images)\n query_embeddings = model(**batch_queries)\n\nscores = processor.score_multi_vector(query_embeddings, image_embeddings)\n```\n\n\n## Limitations\n\n - **Focus**: The model primarily 
focuses on PDF-type documents and high-resource languages, potentially limiting its generalization to other document types or less represented languages.\n - **Support**: The model relies on multi-vector retrieval derived from the ColBERT late interaction mechanism, which may require engineering efforts to adapt to widely used vector retrieval frameworks that lack native multi-vector support.\n\n## License\n\nColSmolVLM's vision language backbone model (SmolVLM) is under the `apache-2.0` license. The adapters attached to the model are under the MIT license.\n\n## Contact\n\n- Manuel Faysse: manuel.faysse@illuin.tech\n- Hugues Sibille: hugues.sibille@illuin.tech\n- Tony Wu: tony.wu@illuin.tech\n\n## Citation\n\nIf you use any datasets or models from this organization in your research, please cite the original dataset as follows:\n\n```bibtex\n@misc{faysse2024colpaliefficientdocumentretrieval,\n title={ColPali: Efficient Document Retrieval with Vision Language Models}, \n author={Manuel Faysse and Hugues Sibille and Tony Wu and Bilel Omrani and Gautier Viaud and C\u00e9line Hudelot and Pierre Colombo},\n year={2024},\n eprint={2407.01449},\n archivePrefix={arXiv},\n primaryClass={cs.IR},\n url={https://arxiv.org/abs/2407.01449}, \n}\n```",
"metadata": "\"N/A\"",
"depth": 2,
"children": [],
"children_count": 0,
"adapters": [],
"adapters_count": 0,
"quantized": [],
"quantized_count": 0,
"merges": [],
"merges_count": 0,
"total_derivatives": 0,
"spaces": [],
"spaces_count": 0,
"parents": [
"vidore/ColSmolVLM-Instruct-500M-base"
],
"base_model": "vidore/colSmol",
"base_model_relation": "finetune"
},
{
"model_id": "thoddnn/colSmol-500M",
"gated": "False",
"card": "---\nlicense: mit\nlibrary_name: colpali\nbase_model: vidore/ColSmolVLM-Instruct-500M\nlanguage:\n- en\ntags:\n- colsmolvlm\n- vidore-experimental\n- vidore\npipeline_tag: visual-document-retrieval\n---\n# ColSmolVLM-Instruct-500M: Visual Retriever based on SmolVLM-Instruct-500M with ColBERT strategy\n\n### This is a version trained with batch_size 32 for 3 epochs\n\nColSmolVLM is a model based on a novel model architecture and training strategy based on Vision Language Models (VLMs) to efficiently index documents from their visual features.\nIt is a SmolVLM extension that generates [ColBERT](https://arxiv.org/abs/2004.12832)- style multi-vector representations of text and images. \nIt was introduced in the paper [ColPali: Efficient Document Retrieval with Vision Language Models](https://arxiv.org/abs/2407.01449) and first released in [this repository](https://github.com/ManuelFay/colpali)\n\n<p align=\"center\"><img width=800 src=\"https://github.com/illuin-tech/colpali/blob/main/assets/colpali_architecture.webp?raw=true\"/></p>\n\n## Version specificity\n\nThis version is trained with the commit b983e40 of the Colpali repository. (main branch from the repo)\n\nData is the same as the ColPali data described in the paper.\n\n\n## Model Training\n\n### Dataset\nOur training dataset of 127,460 query-page pairs is comprised of train sets of openly available academic datasets (63%) and a synthetic dataset made up of pages from web-crawled PDF documents and augmented with VLM-generated (Claude-3 Sonnet) pseudo-questions (37%). \nOur training set is fully English by design, enabling us to study zero-shot generalization to non-English languages. We explicitly verify no multi-page PDF document is used both [*ViDoRe*](https://huggingface.co/collections/vidore/vidore-benchmark-667173f98e70a1c0fa4db00d) and in the train set to prevent evaluation contamination. 
\nA validation set is created with 2% of the samples to tune hyperparameters.\n\n*Note: Multilingual data is present in the pretraining corpus of the language model and most probably in the multimodal training.*\n\n### Parameters\n\nUnless specified otherwise, we train models in `bfloat16` format, use low-rank adapters ([LoRA](https://arxiv.org/abs/2106.09685)) \nwith `alpha=32` and `r=32` on the transformer layers from the language model, \nas well as the final randomly initialized projection layer, and use a `paged_adamw_8bit` optimizer. \nWe train on a 4 GPU setup with data parallelism, a learning rate of 5e-4 with linear decay with 2.5% warmup steps, and a batch size of 8.\n\n## Usage\n\nMake sure `colpali-engine` is installed from source or with a version greater than 0.3.5 (currently the main branch of the repo).\nThe `transformers` version must be > 4.46.2.\n\n```bash\npip install git+https://github.com/illuin-tech/colpali\n```\n\n```python\nimport torch\nfrom PIL import Image\n\nfrom colpali_engine.models import ColIdefics3, ColIdefics3Processor\n\nmodel = ColIdefics3.from_pretrained(\n \"vidore/colSmol-500M\",\n torch_dtype=torch.bfloat16,\n device_map=\"cuda:0\",\n attn_implementation=\"flash_attention_2\" # or eager\n ).eval()\nprocessor = ColIdefics3Processor.from_pretrained(\"vidore/colSmol-500M\")\n\n# Your inputs\nimages = [\n Image.new(\"RGB\", (32, 32), color=\"white\"),\n Image.new(\"RGB\", (16, 16), color=\"black\"),\n]\nqueries = [\n \"Is attention really all you need?\",\n \"What is the amount of bananas farmed in Salvador?\",\n]\n\n# Process the inputs\nbatch_images = processor.process_images(images).to(model.device)\nbatch_queries = processor.process_queries(queries).to(model.device)\n\n# Forward pass\nwith torch.no_grad():\n image_embeddings = model(**batch_images)\n query_embeddings = model(**batch_queries)\n\nscores = processor.score_multi_vector(query_embeddings, image_embeddings)\n```\n\n\n## Limitations\n\n - **Focus**: The model primarily 
focuses on PDF-type documents and high-resource languages, potentially limiting its generalization to other document types or less represented languages.\n - **Support**: The model relies on multi-vector retrieval derived from the ColBERT late interaction mechanism, which may require engineering efforts to adapt to widely used vector retrieval frameworks that lack native multi-vector support.\n\n## License\n\nColSmolVLM's vision language backbone model (SmolVLM) is under the `apache-2.0` license. The adapters attached to the model are under the MIT license.\n\n## Contact\n\n- Manuel Faysse: manuel.faysse@illuin.tech\n- Hugues Sibille: hugues.sibille@illuin.tech\n- Tony Wu: tony.wu@illuin.tech\n\n## Citation\n\nIf you use any datasets or models from this organization in your research, please cite the original dataset as follows:\n\n```bibtex\n@misc{faysse2024colpaliefficientdocumentretrieval,\n title={ColPali: Efficient Document Retrieval with Vision Language Models}, \n author={Manuel Faysse and Hugues Sibille and Tony Wu and Bilel Omrani and Gautier Viaud and C\u00e9line Hudelot and Pierre Colombo},\n year={2024},\n eprint={2407.01449},\n archivePrefix={arXiv},\n primaryClass={cs.IR},\n url={https://arxiv.org/abs/2407.01449}, \n}\n```",
"metadata": "\"N/A\"",
"depth": 2,
"children": [],
"children_count": 0,
"adapters": [],
"adapters_count": 0,
"quantized": [],
"quantized_count": 0,
"merges": [],
"merges_count": 0,
"total_derivatives": 0,
"spaces": [],
"spaces_count": 0,
"parents": [
"vidore/ColSmolVLM-Instruct-500M-base"
],
"base_model": "thoddnn/colSmol",
"base_model_relation": "finetune"
},
{
"model_id": "ingenio/IndoColSmol-500M",
"gated": "False",
"card": "---\nlibrary_name: transformers\nlicense: mit\nbase_model: vidore/ColSmolVLM-Instruct-500M-base\ntags:\n- colpali\n- generated_from_trainer\nmodel-index:\n- name: IndoColSmol-500M\n results: []\n---\n\n<!-- This model card has been generated automatically according to the information the Trainer had access to. You\nshould probably proofread and complete it, then remove this comment. -->\n\n# IndoColSmol-500M\n\nThis model is a fine-tuned version of [vidore/ColSmolVLM-Instruct-500M-base](https://huggingface.co/vidore/ColSmolVLM-Instruct-500M-base) on the ingenio/indodvqa_dataset dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.3641\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 32\n- eval_batch_size: 32\n- seed: 42\n- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 2\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss |\n|:-------------:|:------:|:----:|:---------------:|\n| No log | 0.0099 | 1 | 0.4474 |\n| 0.4523 | 0.3960 | 40 | 0.4055 |\n| 0.3996 | 0.7921 | 80 | 0.3804 |\n| 0.3637 | 1.1881 | 120 | 0.3687 |\n| 0.345 | 1.5842 | 160 | 0.3627 |\n| 0.3466 | 1.9802 | 200 | 0.3630 |\n\n\n### Framework versions\n\n- Transformers 4.51.3\n- Pytorch 2.6.0+cu124\n- Datasets 3.5.1\n- Tokenizers 0.21.1\n",
"metadata": "\"N/A\"",
"depth": 2,
"children": [],
"children_count": 0,
"adapters": [],
"adapters_count": 0,
"quantized": [],
"quantized_count": 0,
"merges": [],
"merges_count": 0,
"total_derivatives": 0,
"spaces": [],
"spaces_count": 0,
"parents": [
"vidore/ColSmolVLM-Instruct-500M-base"
],
"base_model": "ingenio/IndoColSmol",
"base_model_relation": "finetune"
},
{
"model_id": "Oysiyl/colSmol-500M_ufo",
"gated": "False",
"card": "---\nlibrary_name: peft\nlicense: mit\nbase_model: vidore/ColSmolVLM-Instruct-500M-base\ntags:\n- generated_from_trainer\nmodel-index:\n- name: colSmol-500M_ufo\n results: []\n---\n\n<!-- This model card has been generated automatically according to the information the Trainer had access to. You\nshould probably proofread and complete it, then remove this comment. -->\n\n# colSmol-500M_ufo\n\nThis model is a fine-tuned version of [vidore/ColSmolVLM-Instruct-500M-base](https://huggingface.co/vidore/ColSmolVLM-Instruct-500M-base) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.0878\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 100\n- num_epochs: 1\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss |\n|:-------------:|:------:|:----:|:---------------:|\n| 0.1306 | 0.1636 | 80 | 0.1418 |\n| 0.0751 | 0.3272 | 160 | 0.1086 |\n| 0.0823 | 0.4908 | 240 | 0.0912 |\n| 0.0513 | 0.6544 | 320 | 0.0887 |\n| 0.0475 | 0.8180 | 400 | 0.0865 |\n| 0.0572 | 0.9816 | 480 | 0.0878 |\n\n\n### Framework versions\n\n- PEFT 0.15.2\n- Transformers 4.51.3\n- Pytorch 2.6.0+cu124\n- Datasets 3.3.1\n- Tokenizers 0.21.0",
"metadata": "\"N/A\"",
"depth": 2,
"children": [],
"children_count": 0,
"adapters": [],
"adapters_count": 0,
"quantized": [],
"quantized_count": 0,
"merges": [],
"merges_count": 0,
"total_derivatives": 0,
"spaces": [],
"spaces_count": 0,
"parents": [
"vidore/ColSmolVLM-Instruct-500M-base"
],
"base_model": "Oysiyl/colSmol-500M_ufo",
"base_model_relation": "base"
},
{
"model_id": "mfarre/SmolVLM2-500M-Video-Instruct-emotions",
"gated": "False",
"card": "---\nlibrary_name: transformers\nlicense: apache-2.0\nbase_model: HuggingFaceTB/SmolVLM2-500M-Video-Instruct\ntags:\n- generated_from_trainer\nmodel-index:\n- name: SmolVLM2-500M-Video-Instruct-emotions\n results: []\n---\n\n<!-- This model card has been generated automatically according to the information the Trainer had access to. You\nshould probably proofread and complete it, then remove this comment. -->\n\n# SmolVLM2-500M-Video-Instruct-emotions\n\nThis model is a fine-tuned version of [HuggingFaceTB/SmolVLM2-500M-Video-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM2-500M-Video-Instruct) on an unknown dataset.\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Use adamw_hf with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 50\n- num_epochs: 1\n\n### Training results\n\n\n\n### Framework versions\n\n- Transformers 4.49.0.dev0\n- Pytorch 2.6.0+cu124\n- Tokenizers 0.21.0\n",
"metadata": "\"N/A\"",
"depth": 2,
"children": [],
"children_count": 0,
"adapters": [],
"adapters_count": 0,
"quantized": [],
"quantized_count": 0,
"merges": [],
"merges_count": 0,
"total_derivatives": 0,
"spaces": [],
"spaces_count": 0,
"parents": [
"HuggingFaceTB/SmolVLM2-500M-Video-Instruct"
],
"base_model": "mfarre/SmolVLM2-500M-Video-Instruct-emotions",
"base_model_relation": "base"
},
{
"model_id": "merve/SmolVLM2-500M-Video-Instruct-emotions",
"gated": "False",
"card": "---\nlibrary_name: transformers\nlicense: apache-2.0\nbase_model: HuggingFaceTB/SmolVLM2-500M-Video-Instruct\ntags:\n- generated_from_trainer\nmodel-index:\n- name: SmolVLM2-500M-Video-Instruct-emotions\n results: []\n---\n\n<!-- This model card has been generated automatically according to the information the Trainer had access to. You\nshould probably proofread and complete it, then remove this comment. -->\n\n# SmolVLM2-500M-Video-Instruct-emotions\n\nThis model is a fine-tuned version of [HuggingFaceTB/SmolVLM2-500M-Video-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM2-500M-Video-Instruct) on an unknown dataset.\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Use OptimizerNames.ADAMW_HF with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 50\n- num_epochs: 1\n\n### Training results\n\n\n\n### Framework versions\n\n- Transformers 4.50.0.dev0\n- Pytorch 2.5.1+cu124\n- Datasets 3.3.1\n- Tokenizers 0.21.0\n",
"metadata": "\"N/A\"",
"depth": 2,
"children": [],
"children_count": 0,
"adapters": [],
"adapters_count": 0,
"quantized": [],
"quantized_count": 0,
"merges": [],
"merges_count": 0,
"total_derivatives": 0,
"spaces": [],
"spaces_count": 0,
"parents": [
"HuggingFaceTB/SmolVLM2-500M-Video-Instruct"
],
"base_model": "merve/SmolVLM2-500M-Video-Instruct-emotions",
"base_model_relation": "base"
},
{
"model_id": "merve/SmolVLM2-500M-Video-Instruct-videofeedback",
"gated": "False",
"card": "---\nlibrary_name: transformers\nlicense: apache-2.0\nbase_model: HuggingFaceTB/SmolVLM2-500M-Video-Instruct\ntags:\n- generated_from_trainer\nmodel-index:\n- name: SmolVLM2-500M-Video-Instruct-videofeedback\n results: []\n---\n\n<!-- This model card has been generated automatically according to the information the Trainer had access to. You\nshould probably proofread and complete it, then remove this comment. -->\n\n# SmolVLM2-500M-Video-Instruct-videofeedback\n\nThis model is a fine-tuned version of [HuggingFaceTB/SmolVLM2-500M-Video-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM2-500M-Video-Instruct) on an unknown dataset.\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Use OptimizerNames.ADAMW_HF with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 50\n- num_epochs: 1\n\n### Training results\n\n\n\n### Framework versions\n\n- Transformers 4.50.0.dev0\n- Pytorch 2.5.1+cu124\n- Datasets 3.3.1\n- Tokenizers 0.21.0\n",
"metadata": "\"N/A\"",
"depth": 2,
"children": [],
"children_count": 0,
"adapters": [],
"adapters_count": 0,
"quantized": [],
"quantized_count": 0,
"merges": [],
"merges_count": 0,
"total_derivatives": 0,
"spaces": [],
"spaces_count": 0,
"parents": [
"HuggingFaceTB/SmolVLM2-500M-Video-Instruct"
],
"base_model": "merve/SmolVLM2-500M-Video-Instruct-videofeedback",
"base_model_relation": "base"
},
{
"model_id": "merve/SmolVLM2-500M-Video-Instruct-video-feedback",
"gated": "False",
"card": "---\nlibrary_name: transformers\nlicense: apache-2.0\nbase_model: HuggingFaceTB/SmolVLM2-500M-Video-Instruct\ntags:\n- generated_from_trainer\nmodel-index:\n- name: SmolVLM2-500M-Video-Instruct-video-feedback\n results: []\n---\n\n<!-- This model card has been generated automatically according to the information the Trainer had access to. You\nshould probably proofread and complete it, then remove this comment. -->\n\n# SmolVLM2-500M-Video-Instruct-video-feedback\n\nThis model is a fine-tuned version of [HuggingFaceTB/SmolVLM2-500M-Video-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM2-500M-Video-Instruct) on an unknown dataset.\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Use OptimizerNames.ADAMW_HF with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 50\n- num_epochs: 1\n\n### Training results\n\n\n\n### Framework versions\n\n- Transformers 4.50.0.dev0\n- Pytorch 2.5.1+cu124\n- Datasets 3.3.1\n- Tokenizers 0.21.0\n",
"metadata": "\"N/A\"",
"depth": 2,
"children": [],
"children_count": 0,
"adapters": [],
"adapters_count": 0,
"quantized": [],
"quantized_count": 0,
"merges": [],
"merges_count": 0,
"total_derivatives": 0,
"spaces": [],
"spaces_count": 0,
"parents": [
"HuggingFaceTB/SmolVLM2-500M-Video-Instruct"
],
"base_model": "merve/SmolVLM2-500M-Video-Instruct-video-feedback",
"base_model_relation": "base"
},
{
"model_id": "AeonOmniverse/SmolVLM2-500M-Video-Instruct-video-feedback",
"gated": "False",
"card": "---\nlibrary_name: transformers\nlicense: apache-2.0\nbase_model: HuggingFaceTB/SmolVLM2-500M-Video-Instruct\ntags:\n- generated_from_trainer\nmodel-index:\n- name: SmolVLM2-500M-Video-Instruct-video-feedback\n results: []\n---\n\n<!-- This model card has been generated automatically according to the information the Trainer had access to. You\nshould probably proofread and complete it, then remove this comment. -->\n\n# SmolVLM2-500M-Video-Instruct-video-feedback\n\nThis model is a fine-tuned version of [HuggingFaceTB/SmolVLM2-500M-Video-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM2-500M-Video-Instruct) on an unknown dataset.\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 50\n- num_epochs: 1\n\n### Training results\n\n\n\n### Framework versions\n\n- Transformers 4.51.0.dev0\n- Pytorch 2.6.0+cu124\n- Datasets 3.4.1\n- Tokenizers 0.21.1\n",
"metadata": "\"N/A\"",
"depth": 2,
"children": [],
"children_count": 0,
"adapters": [],
"adapters_count": 0,
"quantized": [],
"quantized_count": 0,
"merges": [],
"merges_count": 0,
"total_derivatives": 0,
"spaces": [],
"spaces_count": 0,
"parents": [
"HuggingFaceTB/SmolVLM2-500M-Video-Instruct"
],
"base_model": "AeonOmniverse/SmolVLM2-500M-Video-Instruct-video-feedback",
"base_model_relation": "base"
},
{
"model_id": "mpnikhil/SmolVLM2-500M-Video-Instruct-mpnikhil1",
"gated": "False",
"card": "---\nlibrary_name: transformers\nlicense: apache-2.0\nbase_model: HuggingFaceTB/SmolVLM2-500M-Video-Instruct\ntags:\n- generated_from_trainer\nmodel-index:\n- name: SmolVLM2-500M-Video-Instruct-mpnikhil1\n results: []\n---\n\n<!-- This model card has been generated automatically according to the information the Trainer had access to. You\nshould probably proofread and complete it, then remove this comment. -->\n\n# SmolVLM2-500M-Video-Instruct-mpnikhil1\n\nThis model is a fine-tuned version of [HuggingFaceTB/SmolVLM2-500M-Video-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM2-500M-Video-Instruct) on an unknown dataset.\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 16\n- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 50\n- num_epochs: 1\n\n### Training results\n\n\n\n### Framework versions\n\n- Transformers 4.50.0.dev0\n- Pytorch 2.6.0\n- Datasets 3.3.2\n- Tokenizers 0.21.0\n",
"metadata": "\"N/A\"",
"depth": 2,
"children": [],
"children_count": 0,
"adapters": [],
"adapters_count": 0,
"quantized": [],
"quantized_count": 0,
"merges": [],
"merges_count": 0,
"total_derivatives": 0,
"spaces": [],
"spaces_count": 0,
"parents": [
"HuggingFaceTB/SmolVLM2-500M-Video-Instruct"
],
"base_model": "mpnikhil/SmolVLM2-500M-Video-Instruct-mpnikhil1",
"base_model_relation": "base"
},
{
"model_id": "Karthick2020/SmolVLM2-500M-Video-Instruct-video-feedback",
"gated": "False",
"card": "---\nlibrary_name: transformers\nlicense: apache-2.0\nbase_model: HuggingFaceTB/SmolVLM2-500M-Video-Instruct\ntags:\n- generated_from_trainer\nmodel-index:\n- name: SmolVLM2-500M-Video-Instruct-video-feedback\n results: []\n---\n\n<!-- This model card has been generated automatically according to the information the Trainer had access to. You\nshould probably proofread and complete it, then remove this comment. -->\n\n# SmolVLM2-500M-Video-Instruct-video-feedback\n\nThis model is a fine-tuned version of [HuggingFaceTB/SmolVLM2-500M-Video-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM2-500M-Video-Instruct) on an unknown dataset.\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Use OptimizerNames.ADAMW_HF with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 50\n- num_epochs: 1\n\n### Framework versions\n\n- Transformers 4.50.0.dev0\n- Pytorch 2.5.1+cu124\n- Datasets 3.3.2\n- Tokenizers 0.21.0\n",
"metadata": "\"N/A\"",
"depth": 2,
"children": [],
"children_count": 0,
"adapters": [],
"adapters_count": 0,
"quantized": [],
"quantized_count": 0,
"merges": [],
"merges_count": 0,
"total_derivatives": 0,
"spaces": [],
"spaces_count": 0,
"parents": [
"HuggingFaceTB/SmolVLM2-500M-Video-Instruct"
],
"base_model": "Karthick2020/SmolVLM2-500M-Video-Instruct-video-feedback",
"base_model_relation": "base"
},
{
"model_id": "unreservedusername/SmolVLM2-500M-Video-Instruct-video-feedback",
"gated": "False",
"card": "---\nlibrary_name: transformers\nlicense: apache-2.0\nbase_model: HuggingFaceTB/SmolVLM2-500M-Video-Instruct\ntags:\n- generated_from_trainer\nmodel-index:\n- name: SmolVLM2-500M-Video-Instruct-video-feedback\n results: []\n---\n\n<!-- This model card has been generated automatically according to the information the Trainer had access to. You\nshould probably proofread and complete it, then remove this comment. -->\n\n# SmolVLM2-500M-Video-Instruct-video-feedback\n\nThis model is a fine-tuned version of [HuggingFaceTB/SmolVLM2-500M-Video-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM2-500M-Video-Instruct) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.0133\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Use OptimizerNames.ADAMW_HF with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 50\n- num_epochs: 5\n\n### Training results\n\n\n\n### Framework versions\n\n- Transformers 4.50.0.dev0\n- Pytorch 2.6.0+cu124\n- Datasets 3.2.0\n- Tokenizers 0.21.0\n",
"metadata": "\"N/A\"",
"depth": 2,
"children": [],
"children_count": 0,
"adapters": [],
"adapters_count": 0,
"quantized": [],
"quantized_count": 0,
"merges": [],
"merges_count": 0,
"total_derivatives": 0,
"spaces": [],
"spaces_count": 0,
"parents": [
"HuggingFaceTB/SmolVLM2-500M-Video-Instruct"
],
"base_model": "unreservedusername/SmolVLM2-500M-Video-Instruct-video-feedback",
"base_model_relation": "base"
},
{
"model_id": "badger-lord/SmolVLM2-500M-Video-Instruct-video-feedback",
"gated": "False",
"card": "---\nlibrary_name: transformers\nlicense: apache-2.0\nbase_model: HuggingFaceTB/SmolVLM2-500M-Video-Instruct\ntags:\n- generated_from_trainer\nmodel-index:\n- name: SmolVLM2-500M-Video-Instruct-video-feedback\n results: []\n---\n\n<!-- This model card has been generated automatically according to the information the Trainer had access to. You\nshould probably proofread and complete it, then remove this comment. -->\n\n# SmolVLM2-500M-Video-Instruct-video-feedback\n\nThis model is a fine-tuned version of [HuggingFaceTB/SmolVLM2-500M-Video-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM2-500M-Video-Instruct) on an unknown dataset.\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Use OptimizerNames.ADAMW_HF with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 50\n- num_epochs: 1\n\n### Framework versions\n\n- Transformers 4.50.0.dev0\n- Pytorch 2.5.1+cu124\n- Datasets 3.3.2\n- Tokenizers 0.21.0\n",
"metadata": "\"N/A\"",
"depth": 2,
"children": [],
"children_count": 0,
"adapters": [],
"adapters_count": 0,
"quantized": [],
"quantized_count": 0,
"merges": [],
"merges_count": 0,
"total_derivatives": 0,
"spaces": [],
"spaces_count": 0,
"parents": [
"HuggingFaceTB/SmolVLM2-500M-Video-Instruct"
],
"base_model": "badger-lord/SmolVLM2-500M-Video-Instruct-video-feedback",
"base_model_relation": "base"
},
{
"model_id": "sevimcengiz/SmolVLM2-500M-Video-Instruct-video-feedback",
"gated": "False",
"card": "---\nlibrary_name: transformers\nlicense: apache-2.0\nbase_model: HuggingFaceTB/SmolVLM2-500M-Video-Instruct\ntags:\n- generated_from_trainer\nmodel-index:\n- name: SmolVLM2-500M-Video-Instruct-video-feedback\n results: []\n---\n\n<!-- This model card has been generated automatically according to the information the Trainer had access to. You\nshould probably proofread and complete it, then remove this comment. -->\n\n# SmolVLM2-500M-Video-Instruct-video-feedback\n\nThis model is a fine-tuned version of [HuggingFaceTB/SmolVLM2-500M-Video-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM2-500M-Video-Instruct) on an unknown dataset.\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Use OptimizerNames.ADAMW_8BIT with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 50\n- num_epochs: 1\n\n### Training results\n\n\n\n### Framework versions\n\n- Transformers 4.51.0.dev0\n- Pytorch 2.6.0+cu124\n- Datasets 3.5.0\n- Tokenizers 0.21.1\n",
"metadata": "\"N/A\"",
"depth": 2,
"children": [],
"children_count": 0,
"adapters": [],
"adapters_count": 0,
"quantized": [],
"quantized_count": 0,
"merges": [],
"merges_count": 0,
"total_derivatives": 0,
"spaces": [],
"spaces_count": 0,
"parents": [
"HuggingFaceTB/SmolVLM2-500M-Video-Instruct"
],
"base_model": "sevimcengiz/SmolVLM2-500M-Video-Instruct-video-feedback",
"base_model_relation": "base"
},
{
"model_id": "Arnav0400/SmolVLM2-500M-Video-Instruct-video-feedback",
"gated": "False",
"card": "---\nlibrary_name: transformers\nlicense: apache-2.0\nbase_model: HuggingFaceTB/SmolVLM2-500M-Video-Instruct\ntags:\n- generated_from_trainer\nmodel-index:\n- name: SmolVLM2-500M-Video-Instruct-video-feedback\n results: []\n---\n\n<!-- This model card has been generated automatically according to the information the Trainer had access to. You\nshould probably proofread and complete it, then remove this comment. -->\n\n# SmolVLM2-500M-Video-Instruct-video-feedback\n\nThis model is a fine-tuned version of [HuggingFaceTB/SmolVLM2-500M-Video-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM2-500M-Video-Instruct) on an unknown dataset.\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 16\n- eval_batch_size: 64\n- seed: 42\n- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 50\n- num_epochs: 1\n\n### Training results\n\n\n\n### Framework versions\n\n- Transformers 4.52.0.dev0\n- Pytorch 2.6.0+cu126\n- Datasets 3.5.0\n- Tokenizers 0.21.1\n",
"metadata": "\"N/A\"",
"depth": 2,
"children": [],
"children_count": 0,
"adapters": [],
"adapters_count": 0,
"quantized": [],
"quantized_count": 0,
"merges": [],
"merges_count": 0,
"total_derivatives": 0,
"spaces": [],
"spaces_count": 0,
"parents": [
"HuggingFaceTB/SmolVLM2-500M-Video-Instruct"
],
"base_model": "Arnav0400/SmolVLM2-500M-Video-Instruct-video-feedback",
"base_model_relation": "base"
},
{
"model_id": "superenghb/SmolVLM2-500M-Video-Instruct-video-feedback",
"gated": "False",
"card": "---\nlibrary_name: transformers\nlicense: apache-2.0\nbase_model: HuggingFaceTB/SmolVLM2-500M-Video-Instruct\ntags:\n- generated_from_trainer\nmodel-index:\n- name: SmolVLM2-500M-Video-Instruct-video-feedback\n results: []\n---\n\n<!-- This model card has been generated automatically according to the information the Trainer had access to. You\nshould probably proofread and complete it, then remove this comment. -->\n\n# SmolVLM2-500M-Video-Instruct-video-feedback\n\nThis model is a fine-tuned version of [HuggingFaceTB/SmolVLM2-500M-Video-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM2-500M-Video-Instruct) on an unknown dataset.\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Use adamw_hf with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 50\n- num_epochs: 1\n\n### Framework versions\n\n- Transformers 4.50.0.dev0\n- Pytorch 2.4.0a0+f70bd71a48.nv24.06\n- Datasets 3.5.0\n- Tokenizers 0.21.1\n",
"metadata": "\"N/A\"",
"depth": 2,
"children": [],
"children_count": 0,
"adapters": [],
"adapters_count": 0,
"quantized": [],
"quantized_count": 0,
"merges": [],
"merges_count": 0,
"total_derivatives": 0,
"spaces": [],
"spaces_count": 0,
"parents": [
"HuggingFaceTB/SmolVLM2-500M-Video-Instruct"
],
"base_model": "superenghb/SmolVLM2-500M-Video-Instruct-video-feedback",
"base_model_relation": "base"
},
{
"model_id": "mosherosen/SmolVLM2-500M-Video-Instruct-video-feedback",
"gated": "False",
"card": "---\nlibrary_name: transformers\nlicense: apache-2.0\nbase_model: HuggingFaceTB/SmolVLM2-500M-Video-Instruct\ntags:\n- generated_from_trainer\nmodel-index:\n- name: SmolVLM2-500M-Video-Instruct-video-feedback\n results: []\n---\n\n<!-- This model card has been generated automatically according to the information the Trainer had access to. You\nshould probably proofread and complete it, then remove this comment. -->\n\n# SmolVLM2-500M-Video-Instruct-video-feedback\n\nThis model is a fine-tuned version of [HuggingFaceTB/SmolVLM2-500M-Video-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM2-500M-Video-Instruct) on an unknown dataset.\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Use OptimizerNames.PAGED_ADAMW_8BIT with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 50\n- num_epochs: 1\n\n### Training results\n\n\n\n### Framework versions\n\n- Transformers 4.52.0.dev0\n- Pytorch 2.6.0+cu124\n- Datasets 3.5.0\n- Tokenizers 0.21.1\n",
"metadata": "\"N/A\"",
"depth": 2,
"children": [],
"children_count": 0,
"adapters": [],
"adapters_count": 0,
"quantized": [],
"quantized_count": 0,
"merges": [],
"merges_count": 0,
"total_derivatives": 0,
"spaces": [],
"spaces_count": 0,
"parents": [
"HuggingFaceTB/SmolVLM2-500M-Video-Instruct"
],
"base_model": "mosherosen/SmolVLM2-500M-Video-Instruct-video-feedback",
"base_model_relation": "base"
},
{
"model_id": "lukesutor/SmolVLM-500M-ActivityTracking",
"gated": "False",
"card": "---\nbase_model: HuggingFaceTB/SmolVLM2-500M-Video-Instruct\nlibrary_name: transformers\nmodel_name: SmolVLM-500M-ActivityTracking\ntags:\n- generated_from_trainer\n- trl\n- sft\nlicence: license\n---\n\n# Model Card for SmolVLM-500M-ActivityTracking\n\nThis model is a fine-tuned version of [HuggingFaceTB/SmolVLM2-500M-Video-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM2-500M-Video-Instruct).\nIt has been trained using [TRL](https://github.com/huggingface/trl).\n\n## Quick start\n\n```python\nfrom transformers import pipeline\n\nquestion = \"If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?\"\ngenerator = pipeline(\"text-generation\", model=\"lukesutor/SmolVLM-500M-ActivityTracking\", device=\"cuda\")\noutput = generator([{\"role\": \"user\", \"content\": question}], max_new_tokens=128, return_full_text=False)[0]\nprint(output[\"generated_text\"])\n```\n\n## Training procedure\n\n \n\n\nThis model was trained with SFT.\n\n### Framework versions\n\n- TRL: 0.17.0\n- Transformers: 4.52.0.dev0\n- Pytorch: 2.7.0\n- Datasets: 3.6.0\n- Tokenizers: 0.21.1\n\n## Citations\n\n\n\nCite TRL as:\n \n```bibtex\n@misc{vonwerra2022trl,\n\ttitle = {{TRL: Transformer Reinforcement Learning}},\n\tauthor = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\\'e}dec},\n\tyear = 2020,\n\tjournal = {GitHub repository},\n\tpublisher = {GitHub},\n\thowpublished = {\\url{https://github.com/huggingface/trl}}\n}\n```",
"metadata": "\"N/A\"",
"depth": 2,
"children": [],
"children_count": 0,
"adapters": [],
"adapters_count": 0,
"quantized": [],
"quantized_count": 0,
"merges": [],
"merges_count": 0,
"total_derivatives": 0,
"spaces": [],
"spaces_count": 0,
"parents": [
"HuggingFaceTB/SmolVLM2-500M-Video-Instruct"
],
"base_model": "lukesutor/SmolVLM-500M-ActivityTracking",
"base_model_relation": "base"
},
{
"model_id": "mlevytskyi/SmolVLM2-500M-Video-Instruct-video-feedback",
"gated": "False",
"card": "---\nlibrary_name: transformers\nlicense: apache-2.0\nbase_model: HuggingFaceTB/SmolVLM2-500M-Video-Instruct\ntags:\n- generated_from_trainer\nmodel-index:\n- name: SmolVLM2-500M-Video-Instruct-video-feedback\n results: []\n---\n\n<!-- This model card has been generated automatically according to the information the Trainer had access to. You\nshould probably proofread and complete it, then remove this comment. -->\n\n# SmolVLM2-500M-Video-Instruct-video-feedback\n\nThis model is a fine-tuned version of [HuggingFaceTB/SmolVLM2-500M-Video-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM2-500M-Video-Instruct) on an unknown dataset.\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 50\n- num_epochs: 1\n\n### Training results\n\n\n\n### Framework versions\n\n- Transformers 4.52.0.dev0\n- Pytorch 2.7.0+cu126\n- Datasets 3.6.0\n- Tokenizers 0.21.1\n",
"metadata": "\"N/A\"",
"depth": 2,
"children": [],
"children_count": 0,
"adapters": [],
"adapters_count": 0,
"quantized": [],
"quantized_count": 0,
"merges": [],
"merges_count": 0,
"total_derivatives": 0,
"spaces": [],
"spaces_count": 0,
"parents": [
"HuggingFaceTB/SmolVLM2-500M-Video-Instruct"
],
"base_model": "mlevytskyi/SmolVLM2-500M-Video-Instruct-video-feedback",
"base_model_relation": "base"
},
{
"model_id": "AFZAL0008/SmolVLM2-500M-Video-Instruct-video-feedback",
"gated": "False",
"card": "---\nlibrary_name: transformers\nlicense: apache-2.0\nbase_model: HuggingFaceTB/SmolVLM2-500M-Video-Instruct\ntags:\n- generated_from_trainer\nmodel-index:\n- name: SmolVLM2-500M-Video-Instruct-video-feedback\n results: []\n---\n\n<!-- This model card has been generated automatically according to the information the Trainer had access to. You\nshould probably proofread and complete it, then remove this comment. -->\n\n# SmolVLM2-500M-Video-Instruct-video-feedback\n\nThis model is a fine-tuned version of [HuggingFaceTB/SmolVLM2-500M-Video-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM2-500M-Video-Instruct) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.0104\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 4\n- eval_batch_size: 4\n- seed: 42\n- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 50\n- num_epochs: 1\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss |\n|:-------------:|:-----:|:----:|:---------------:|\n| 0.0058 | 0.05 | 50 | 0.0106 |\n| 0.0056 | 0.1 | 100 | 0.0105 |\n| 0.0052 | 0.15 | 150 | 0.0123 |\n| 0.0077 | 0.2 | 200 | 0.0108 |\n| 0.0053 | 0.25 | 250 | 0.0107 |\n| 0.0062 | 0.3 | 300 | 0.0109 |\n| 0.0058 | 0.35 | 350 | 0.0104 |\n| 0.006 | 0.4 | 400 | 0.0119 |\n| 0.0053 | 0.45 | 450 | 0.0104 |\n| 0.0066 | 0.5 | 500 | 0.0111 |\n| 0.0057 | 0.55 | 550 | 0.0104 |\n| 0.0059 | 0.6 | 600 | 0.0108 |\n| 0.0053 | 0.65 | 650 | 0.0104 |\n| 0.0052 | 0.7 | 700 | 0.0103 |\n| 0.0054 | 0.75 | 750 | 0.0106 |\n| 0.0064 | 0.8 | 800 | 0.0104 |\n| 0.0056 | 0.85 
| 850 | 0.0104 |\n| 0.0069 | 0.9 | 900 | 0.0104 |\n| 0.0052 | 0.95 | 950 | 0.0104 |\n| 0.0053 | 1.0 | 1000 | 0.0104 |\n\n\n### Framework versions\n\n- Transformers 4.53.0.dev0\n- Pytorch 2.6.0+cu124\n- Datasets 3.6.0\n- Tokenizers 0.21.1\n",
"metadata": "\"N/A\"",
"depth": 2,
"children": [],
"children_count": 0,
"adapters": [],
"adapters_count": 0,
"quantized": [],
"quantized_count": 0,
"merges": [],
"merges_count": 0,
"total_derivatives": 0,
"spaces": [],
"spaces_count": 0,
"parents": [
"HuggingFaceTB/SmolVLM2-500M-Video-Instruct"
],
"base_model": "AFZAL0008/SmolVLM2-500M-Video-Instruct-video-feedback",
"base_model_relation": "base"
},
{
"model_id": "mlevytskyi/SmolVLM2-500M-Video-Instruct-coco-kaggle",
"gated": "False",
"card": "---\nlibrary_name: transformers\nlicense: apache-2.0\nbase_model: HuggingFaceTB/SmolVLM2-500M-Video-Instruct\ntags:\n- generated_from_trainer\nmodel-index:\n- name: SmolVLM2-500M-Video-Instruct-coco-kaggle\n results: []\n---\n\n<!-- This model card has been generated automatically according to the information the Trainer had access to. You\nshould probably proofread and complete it, then remove this comment. -->\n\n# SmolVLM2-500M-Video-Instruct-coco-kaggle\n\nThis model is a fine-tuned version of [HuggingFaceTB/SmolVLM2-500M-Video-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM2-500M-Video-Instruct) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.3318\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 16\n- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 50\n- num_epochs: 1\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss |\n|:-------------:|:------:|:----:|:---------------:|\n| 0.397 | 0.1390 | 50 | 0.3987 |\n| 0.341 | 0.2780 | 100 | 0.3579 |\n| 0.3324 | 0.4170 | 150 | 0.3434 |\n| 0.3503 | 0.5559 | 200 | 0.3383 |\n| 0.3481 | 0.6949 | 250 | 0.3340 |\n| 0.3298 | 0.8339 | 300 | 0.3320 |\n| 0.3248 | 0.9729 | 350 | 0.3318 |\n\n\n### Framework versions\n\n- Transformers 4.52.0.dev0\n- Pytorch 2.7.0+cu126\n- Datasets 3.6.0\n- Tokenizers 0.21.1\n",
"metadata": "\"N/A\"",
"depth": 2,
"children": [],
"children_count": 0,
"adapters": [],
"adapters_count": 0,
"quantized": [],
"quantized_count": 0,
"merges": [],
"merges_count": 0,
"total_derivatives": 0,
"spaces": [],
"spaces_count": 0,
"parents": [
"HuggingFaceTB/SmolVLM2-500M-Video-Instruct"
],
"base_model": "mlevytskyi/SmolVLM2-500M-Video-Instruct-coco-kaggle",
"base_model_relation": "base"
},
{
"model_id": "liuhuanjim013/SmolVLM2-500M-Video-Instruct-video-feedback",
"gated": "unknown",
"card": "---\nlibrary_name: transformers\nlicense: apache-2.0\nbase_model: HuggingFaceTB/SmolVLM2-500M-Video-Instruct\ntags:\n- generated_from_trainer\nmodel-index:\n- name: SmolVLM2-500M-Video-Instruct-video-feedback\n results: []\n---\n\n<!-- This model card has been generated automatically according to the information the Trainer had access to. You\nshould probably proofread and complete it, then remove this comment. -->\n\n# SmolVLM2-500M-Video-Instruct-video-feedback\n\nThis model is a fine-tuned version of [HuggingFaceTB/SmolVLM2-500M-Video-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM2-500M-Video-Instruct) on an unknown dataset.\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Use OptimizerNames.PAGED_ADAMW_8BIT with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 50\n- num_epochs: 1\n\n### Training results\n\n\n\n### Framework versions\n\n- Transformers 4.53.0.dev0\n- Pytorch 2.6.0+cu124\n- Datasets 3.6.0\n- Tokenizers 0.21.1\n",
"metadata": "\"N/A\"",
"depth": 2,
"children": [],
"children_count": 0,
"adapters": [],
"adapters_count": 0,
"quantized": [],
"quantized_count": 0,
"merges": [],
"merges_count": 0,
"total_derivatives": 0,
"spaces": [],
"spaces_count": 0,
"parents": [
"HuggingFaceTB/SmolVLM2-500M-Video-Instruct"
],
"base_model": null,
"base_model_relation": null
},
{
"model_id": "MRIII0917/SmolVLM2-500M-Video-Instruct-video-feedback",
"gated": "False",
"card": "---\nlibrary_name: transformers\nlicense: apache-2.0\nbase_model: HuggingFaceTB/SmolVLM2-500M-Video-Instruct\ntags:\n- generated_from_trainer\nmodel-index:\n- name: SmolVLM2-500M-Video-Instruct-video-feedback\n results: []\n---\n\n<!-- This model card has been generated automatically according to the information the Trainer had access to. You\nshould probably proofread and complete it, then remove this comment. -->\n\n# SmolVLM2-500M-Video-Instruct-video-feedback\n\nThis model is a fine-tuned version of [HuggingFaceTB/SmolVLM2-500M-Video-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM2-500M-Video-Instruct) on an unknown dataset.\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 50\n- num_epochs: 1\n\n### Training results\n\n\n\n### Framework versions\n\n- Transformers 4.50.0.dev0\n- Pytorch 2.7.0+cu126\n- Datasets 3.6.0\n- Tokenizers 0.21.1\n",
"metadata": "\"N/A\"",
"depth": 2,
"children": [],
"children_count": 0,
"adapters": [],
"adapters_count": 0,
"quantized": [],
"quantized_count": 0,
"merges": [],
"merges_count": 0,
"total_derivatives": 0,
"spaces": [],
"spaces_count": 0,
"parents": [
"HuggingFaceTB/SmolVLM2-500M-Video-Instruct"
],
"base_model": "MRIII0917/SmolVLM2-500M-Video-Instruct-video-feedback",
"base_model_relation": "base"
},
{
"model_id": "huggingFaceOfNabil/SmolVLM2-500M-Video-Instruct-dense",
"gated": "unknown",
"card": "---\nlibrary_name: transformers\nlicense: apache-2.0\nbase_model: HuggingFaceTB/SmolVLM2-500M-Video-Instruct\ntags:\n- generated_from_trainer\nmodel-index:\n- name: SmolVLM2-500M-Video-Instruct-dense\n results: []\n---\n\n<!-- This model card has been generated automatically according to the information the Trainer had access to. You\nshould probably proofread and complete it, then remove this comment. -->\n\n# SmolVLM2-500M-Video-Instruct-dense\n\nThis model is a fine-tuned version of [HuggingFaceTB/SmolVLM2-500M-Video-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM2-500M-Video-Instruct) on an unknown dataset.\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 50\n- num_epochs: 1\n\n### Training results\n\n\n\n### Framework versions\n\n- Transformers 4.52.4\n- Pytorch 2.7.1+cu126\n- Datasets 3.6.0\n- Tokenizers 0.21.1\n",
"metadata": "\"N/A\"",
"depth": 2,
"children": [],
"children_count": 0,
"adapters": [],
"adapters_count": 0,
"quantized": [],
"quantized_count": 0,
"merges": [],
"merges_count": 0,
"total_derivatives": 0,
"spaces": [],
"spaces_count": 0,
"parents": [
"HuggingFaceTB/SmolVLM2-500M-Video-Instruct"
],
"base_model": null,
"base_model_relation": null
},
{
"model_id": "rainorangelemon2/smolvlm-instruct-trl-sft-ChartQA",
"gated": "unknown",
"card": "---\nbase_model: HuggingFaceTB/SmolVLM2-500M-Video-Instruct\nlibrary_name: transformers\nmodel_name: smolvlm-instruct-trl-sft-ChartQA\ntags:\n- generated_from_trainer\n- trl\n- sft\nlicence: license\n---\n\n# Model Card for smolvlm-instruct-trl-sft-ChartQA\n\nThis model is a fine-tuned version of [HuggingFaceTB/SmolVLM2-500M-Video-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM2-500M-Video-Instruct).\nIt has been trained using [TRL](https://github.com/huggingface/trl).\n\n## Quick start\n\n```python\nfrom transformers import pipeline\n\nquestion = \"If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?\"\ngenerator = pipeline(\"text-generation\", model=\"rainorangelemon2/smolvlm-instruct-trl-sft-ChartQA\", device=\"cuda\")\noutput = generator([{\"role\": \"user\", \"content\": question}], max_new_tokens=128, return_full_text=False)[0]\nprint(output[\"generated_text\"])\n```\n\n## Training procedure\n\n[<img src=\"https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg\" alt=\"Visualize in Weights & Biases\" width=\"150\" height=\"24\"/>](https://wandb.ai/rainorangelemon/huggingface/runs/d611vuql) \n\n\nThis model was trained with SFT.\n\n### Framework versions\n\n- TRL: 0.19.0\n- Transformers: 4.52.4\n- Pytorch: 2.7.1\n- Datasets: 3.6.0\n- Tokenizers: 0.21.1\n\n## Citations\n\n\n\nCite TRL as:\n \n```bibtex\n@misc{vonwerra2022trl,\n\ttitle = {{TRL: Transformer Reinforcement Learning}},\n\tauthor = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\\'e}dec},\n\tyear = 2020,\n\tjournal = {GitHub repository},\n\tpublisher = {GitHub},\n\thowpublished = {\\url{https://github.com/huggingface/trl}}\n}\n```",
"metadata": "\"N/A\"",
"depth": 2,
"children": [],
"children_count": 0,
"adapters": [],
"adapters_count": 0,
"quantized": [],
"quantized_count": 0,
"merges": [],
"merges_count": 0,
"total_derivatives": 0,
"spaces": [],
"spaces_count": 0,
"parents": [
"HuggingFaceTB/SmolVLM2-500M-Video-Instruct"
],
"base_model": null,
"base_model_relation": null
},
{
"model_id": "rainorangelemon2/smolgemma-waymo-stage-1",
"gated": "unknown",
"card": "---\nbase_model: HuggingFaceTB/SmolVLM2-500M-Video-Instruct\nlibrary_name: transformers\nmodel_name: smolgemma-waymo-stage-1\ntags:\n- generated_from_trainer\n- sft\n- trl\nlicence: license\n---\n\n# Model Card for smolgemma-waymo-stage-1\n\nThis model is a fine-tuned version of [HuggingFaceTB/SmolVLM2-500M-Video-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM2-500M-Video-Instruct).\nIt has been trained using [TRL](https://github.com/huggingface/trl).\n\n## Quick start\n\n```python\nfrom transformers import pipeline\n\nquestion = \"If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?\"\ngenerator = pipeline(\"text-generation\", model=\"rainorangelemon2/smolgemma-waymo-stage-1\", device=\"cuda\")\noutput = generator([{\"role\": \"user\", \"content\": question}], max_new_tokens=128, return_full_text=False)[0]\nprint(output[\"generated_text\"])\n```\n\n## Training procedure\n\n[<img src=\"https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg\" alt=\"Visualize in Weights & Biases\" width=\"150\" height=\"24\"/>](https://wandb.ai/rainorangelemon/huggingface/runs/dyqdeiba) \n\n\nThis model was trained with SFT.\n\n### Framework versions\n\n- TRL: 0.19.0\n- Transformers: 4.52.4\n- Pytorch: 2.7.1\n- Datasets: 3.6.0\n- Tokenizers: 0.21.1\n\n## Citations\n\n\n\nCite TRL as:\n \n```bibtex\n@misc{vonwerra2022trl,\n\ttitle = {{TRL: Transformer Reinforcement Learning}},\n\tauthor = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\\'e}dec},\n\tyear = 2020,\n\tjournal = {GitHub repository},\n\tpublisher = {GitHub},\n\thowpublished = {\\url{https://github.com/huggingface/trl}}\n}\n```",
"metadata": "\"N/A\"",
"depth": 2,
"children": [],
"children_count": 0,
"adapters": [],
"adapters_count": 0,
"quantized": [],
"quantized_count": 0,
"merges": [],
"merges_count": 0,
"total_derivatives": 0,
"spaces": [],
"spaces_count": 0,
"parents": [
"HuggingFaceTB/SmolVLM2-500M-Video-Instruct"
],
"base_model": null,
"base_model_relation": null
},
{
"model_id": "rainorangelemon2/smolgemma-waymo-stage-2",
"gated": "unknown",
"card": "---\nbase_model: HuggingFaceTB/SmolVLM2-500M-Video-Instruct\nlibrary_name: transformers\nmodel_name: smolgemma-waymo-stage-2\ntags:\n- generated_from_trainer\n- trl\n- sft\nlicence: license\n---\n\n# Model Card for smolgemma-waymo-stage-2\n\nThis model is a fine-tuned version of [HuggingFaceTB/SmolVLM2-500M-Video-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM2-500M-Video-Instruct).\nIt has been trained using [TRL](https://github.com/huggingface/trl).\n\n## Quick start\n\n```python\nfrom transformers import pipeline\n\nquestion = \"If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?\"\ngenerator = pipeline(\"text-generation\", model=\"rainorangelemon2/smolgemma-waymo-stage-2\", device=\"cuda\")\noutput = generator([{\"role\": \"user\", \"content\": question}], max_new_tokens=128, return_full_text=False)[0]\nprint(output[\"generated_text\"])\n```\n\n## Training procedure\n\n[<img src=\"https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg\" alt=\"Visualize in Weights & Biases\" width=\"150\" height=\"24\"/>](https://wandb.ai/rainorangelemon/huggingface/runs/2fs9xc0v) \n\n\nThis model was trained with SFT.\n\n### Framework versions\n\n- TRL: 0.19.0\n- Transformers: 4.52.4\n- Pytorch: 2.7.1\n- Datasets: 3.6.0\n- Tokenizers: 0.21.1\n\n## Citations\n\n\n\nCite TRL as:\n \n```bibtex\n@misc{vonwerra2022trl,\n\ttitle = {{TRL: Transformer Reinforcement Learning}},\n\tauthor = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\\'e}dec},\n\tyear = 2020,\n\tjournal = {GitHub repository},\n\tpublisher = {GitHub},\n\thowpublished = {\\url{https://github.com/huggingface/trl}}\n}\n```",
"metadata": "\"N/A\"",
"depth": 2,
"children": [],
"children_count": 0,
"adapters": [],
"adapters_count": 0,
"quantized": [],
"quantized_count": 0,
"merges": [],
"merges_count": 0,
"total_derivatives": 0,
"spaces": [],
"spaces_count": 0,
"parents": [
"HuggingFaceTB/SmolVLM2-500M-Video-Instruct"
],
"base_model": null,
"base_model_relation": null
},
{
"model_id": "GKC96/SmolVLM2-500M-Video-Instruct-video-qna",
"gated": "unknown",
"card": "---\nlibrary_name: peft\nlicense: apache-2.0\nbase_model: HuggingFaceTB/SmolVLM2-500M-Video-Instruct\ntags:\n- generated_from_trainer\nmodel-index:\n- name: SmolVLM2-500M-Video-Instruct-video-qna\n results: []\n---\n\n<!-- This model card has been generated automatically according to the information the Trainer had access to. You\nshould probably proofread and complete it, then remove this comment. -->\n\n# SmolVLM2-500M-Video-Instruct-video-qna\n\nThis model is a fine-tuned version of [HuggingFaceTB/SmolVLM2-500M-Video-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM2-500M-Video-Instruct) on the None dataset.\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 50\n- num_epochs: 1\n\n### Training results\n\n\n\n### Framework versions\n\n- PEFT 0.15.2\n- Transformers 4.53.0.dev0\n- Pytorch 2.7.0+cu118\n- Datasets 3.6.0\n- Tokenizers 0.21.1",
"metadata": "\"N/A\"",
"depth": 2,
"children": [],
"children_count": 0,
"adapters": [],
"adapters_count": 0,
"quantized": [],
"quantized_count": 0,
"merges": [],
"merges_count": 0,
"total_derivatives": 0,
"spaces": [],
"spaces_count": 0,
"parents": [
"HuggingFaceTB/SmolVLM2-500M-Video-Instruct"
],
"base_model": null,
"base_model_relation": null
},
{
"model_id": "xco2/smolvlm2-500M-illustration-description",
"gated": "unknown",
"card": "---\nbase_model: HuggingFaceTB/SmolVLM2-500M-Video-Instruct\nlibrary_name: peft\nlicense: apache-2.0\nlanguage:\n- en\npipeline_tag: image-text-to-text\n---\n\n# smolvlm2-500M-illustration-description\n\nAn illustration description generation model that provides richer image descriptions \nFine-tuned based on HuggingFaceTB/SmolVLM2-500M-Video-Instruct\n\n## Uses\nThis model can be used to generate descriptions of illustrations and engage in some simple Q&A related to illustration content\n\nSuggested prompts: \n- Write a descriptive caption for this image in a formal tone. \n- Write a descriptive caption for this image in a casual tone. \n- Analyze this image like an art critic would with information about its composition, style, symbolism, the use of color, light, any artistic movement it might belong to, etc. \n- What color is the hair of the character? \n- What are the characters wearing?\n\n## How to Get Started with the Model\n\n```python\nfrom transformers import AutoModelForImageTextToText, AutoProcessor\nfrom peft import PeftModel\nimport torch\n\nmodel_name = \"HuggingFaceTB/SmolVLM2-500M-Video-Instruct\"\nadapter_name = \"xco2/smolvlm2-500M-illustration-description\"\n\nmodel = AutoModelForImageTextToText.from_pretrained(\n model_name,\n torch_dtype=torch.bfloat16,\n _attn_implementation=\"flash_attention_2\"\n)\nmodel = PeftModel.from_pretrained(model, adapter_name)\n\nprocessor = AutoProcessor.from_pretrained(model_name)\n\nmodel = model.to('cuda').to(torch.bfloat16)\nmodel = model.merge_and_unload().eval()\n\nmessages = [\n {\n \"role\": \"user\",\n \"content\": [\n {\"type\": \"image\",\n \"url\": \"https://cdn.donmai.us/sample/63/e7/__castorice_honkai_and_1_more_drawn_by_yolanda__sample-63e73017612352d472b24056e501656d.jpg\"},\n {\"type\": \"text\",\n \"text\": \"Write a descriptive caption for this image in a formal tone.\"},\n ]\n },\n]\n\ninputs = processor.apply_chat_template(\n messages,\n add_generation_prompt=True,\n tokenize=True,\n 
return_dict=True,\n return_tensors=\"pt\",\n).to(model.device, dtype=model.dtype)\n\ngenerated_ids = model.generate(**inputs, do_sample=True, max_new_tokens=2048)\ngenerated_texts = processor.batch_decode(\n generated_ids,\n skip_special_tokens=True,\n)\nprint(\"Assistant:\", generated_texts[0].split(\"Assistant:\")[-1])\n```\n\n## Training Details\n\n### Training Data\n\nImage description data: \n1. Utilized the quantized fancyfeast/joy-caption-pre-alpha model to describe approximately 100,000 illustrations with multiple prompts. \n2. Filtered out meaningless descriptions with repetitive phrases generated by the model. \n3. Generated Q&A data related to the content of the illustrations based on the generated descriptions using qwen3-12B. \nA total of about 240,000 training data entries were obtained in the end.\n\n### Framework versions\n\n- PEFT 0.15.2",
"metadata": "\"N/A\"",
"depth": 2,
"children": [],
"children_count": 0,
"adapters": [],
"adapters_count": 0,
"quantized": [],
"quantized_count": 0,
"merges": [],
"merges_count": 0,
"total_derivatives": 0,
"spaces": [],
"spaces_count": 0,
"parents": [
"HuggingFaceTB/SmolVLM2-500M-Video-Instruct"
],
"base_model": null,
"base_model_relation": null
},
{
"model_id": "ggml-org/SmolVLM2-500M-Video-Instruct-GGUF",
"gated": "False",
"card": "---\nlicense: apache-2.0\nbase_model: HuggingFaceTB/SmolVLM2-500M-Video-Instruct\n---\n\n# SmolVLM2-500M-Video-Instruct\n\nOriginal model: https://huggingface.co/HuggingFaceTB/SmolVLM2-500M-Video-Instruct\n\nFor more info, please refer to this PR: https://github.com/ggml-org/llama.cpp/pull/13050\n\n",
"metadata": "\"N/A\"",
"depth": 2,
"children": [],
"children_count": 0,
"adapters": [],
"adapters_count": 0,
"quantized": [],
"quantized_count": 0,
"merges": [],
"merges_count": 0,
"total_derivatives": 0,
"spaces": [],
"spaces_count": 0,
"parents": [
"HuggingFaceTB/SmolVLM2-500M-Video-Instruct"
],
"base_model": "ggml-org/SmolVLM2-500M-Video-Instruct-GGUF",
"base_model_relation": "base"
},
{
"model_id": "mradermacher/SmolVLM2-500M-Video-Instruct-GGUF",
"gated": "False",
"card": "---\nbase_model: HuggingFaceTB/SmolVLM2-500M-Video-Instruct\ndatasets:\n- HuggingFaceM4/the_cauldron\n- HuggingFaceM4/Docmatix\n- lmms-lab/LLaVA-OneVision-Data\n- lmms-lab/M4-Instruct-Data\n- HuggingFaceFV/finevideo\n- MAmmoTH-VL/MAmmoTH-VL-Instruct-12M\n- lmms-lab/LLaVA-Video-178K\n- orrzohar/Video-STaR\n- Mutonix/Vript\n- TIGER-Lab/VISTA-400K\n- Enxin/MovieChat-1K_train\n- ShareGPT4Video/ShareGPT4Video\nlanguage:\n- en\nlibrary_name: transformers\nlicense: apache-2.0\nquantized_by: mradermacher\n---\n## About\n\n<!-- ### quantize_version: 2 -->\n<!-- ### output_tensor_quantised: 1 -->\n<!-- ### convert_type: hf -->\n<!-- ### vocab_type: -->\n<!-- ### tags: -->\nstatic quants of https://huggingface.co/HuggingFaceTB/SmolVLM2-500M-Video-Instruct\n\n<!-- provided-files -->\nweighted/imatrix quants are available at https://huggingface.co/mradermacher/SmolVLM2-500M-Video-Instruct-i1-GGUF\n## Usage\n\nIf you are unsure how to use GGUF files, refer to one of [TheBloke's\nREADMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for\nmore details, including on how to concatenate multi-part files.\n\n## Provided Quants\n\n(sorted by size, not necessarily quality. 
IQ-quants are often preferable over similar sized non-IQ quants)\n\n| Link | Type | Size/GB | Notes |\n|:-----|:-----|--------:|:------|\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM2-500M-Video-Instruct-GGUF/resolve/main/SmolVLM2-500M-Video-Instruct.Q2_K.gguf) | Q2_K | 0.3 | |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM2-500M-Video-Instruct-GGUF/resolve/main/SmolVLM2-500M-Video-Instruct.Q3_K_S.gguf) | Q3_K_S | 0.3 | |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM2-500M-Video-Instruct-GGUF/resolve/main/SmolVLM2-500M-Video-Instruct.IQ4_XS.gguf) | IQ4_XS | 0.4 | |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM2-500M-Video-Instruct-GGUF/resolve/main/SmolVLM2-500M-Video-Instruct.Q3_K_M.gguf) | Q3_K_M | 0.4 | lower quality |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM2-500M-Video-Instruct-GGUF/resolve/main/SmolVLM2-500M-Video-Instruct.Q3_K_L.gguf) | Q3_K_L | 0.4 | |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM2-500M-Video-Instruct-GGUF/resolve/main/SmolVLM2-500M-Video-Instruct.Q4_K_S.gguf) | Q4_K_S | 0.4 | fast, recommended |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM2-500M-Video-Instruct-GGUF/resolve/main/SmolVLM2-500M-Video-Instruct.Q4_K_M.gguf) | Q4_K_M | 0.4 | fast, recommended |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM2-500M-Video-Instruct-GGUF/resolve/main/SmolVLM2-500M-Video-Instruct.Q5_K_S.gguf) | Q5_K_S | 0.4 | |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM2-500M-Video-Instruct-GGUF/resolve/main/SmolVLM2-500M-Video-Instruct.Q5_K_M.gguf) | Q5_K_M | 0.4 | |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM2-500M-Video-Instruct-GGUF/resolve/main/SmolVLM2-500M-Video-Instruct.Q6_K.gguf) | Q6_K | 0.5 | very good quality |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM2-500M-Video-Instruct-GGUF/resolve/main/SmolVLM2-500M-Video-Instruct.Q8_0.gguf) | Q8_0 | 0.5 | fast, best quality |\n| 
[GGUF](https://huggingface.co/mradermacher/SmolVLM2-500M-Video-Instruct-GGUF/resolve/main/SmolVLM2-500M-Video-Instruct.f16.gguf) | f16 | 0.9 | 16 bpw, overkill |\n\nHere is a handy graph by ikawrakow comparing some lower-quality quant\ntypes (lower is better):\n\n\n\nAnd here are Artefact2's thoughts on the matter:\nhttps://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9\n\n## FAQ / Model Request\n\nSee https://huggingface.co/mradermacher/model_requests for some answers to\nquestions you might have and/or if you want some other model quantized.\n\n## Thanks\n\nI thank my company, [nethype GmbH](https://www.nethype.de/), for letting\nme use its servers and providing upgrades to my workstation to enable\nthis work in my free time.\n\n<!-- end -->\n",
"metadata": "\"N/A\"",
"depth": 2,
"children": [],
"children_count": 0,
"adapters": [],
"adapters_count": 0,
"quantized": [],
"quantized_count": 0,
"merges": [],
"merges_count": 0,
"total_derivatives": 0,
"spaces": [],
"spaces_count": 0,
"parents": [
"HuggingFaceTB/SmolVLM2-500M-Video-Instruct"
],
"base_model": "mradermacher/SmolVLM2-500M-Video-Instruct-GGUF",
"base_model_relation": "base"
},
{
"model_id": "mradermacher/SmolVLM2-500M-Video-Instruct-i1-GGUF",
"gated": "False",
"card": "---\nbase_model: HuggingFaceTB/SmolVLM2-500M-Video-Instruct\ndatasets:\n- HuggingFaceM4/the_cauldron\n- HuggingFaceM4/Docmatix\n- lmms-lab/LLaVA-OneVision-Data\n- lmms-lab/M4-Instruct-Data\n- HuggingFaceFV/finevideo\n- MAmmoTH-VL/MAmmoTH-VL-Instruct-12M\n- lmms-lab/LLaVA-Video-178K\n- orrzohar/Video-STaR\n- Mutonix/Vript\n- TIGER-Lab/VISTA-400K\n- Enxin/MovieChat-1K_train\n- ShareGPT4Video/ShareGPT4Video\nlanguage:\n- en\nlibrary_name: transformers\nlicense: apache-2.0\nquantized_by: mradermacher\n---\n## About\n\n<!-- ### quantize_version: 2 -->\n<!-- ### output_tensor_quantised: 1 -->\n<!-- ### convert_type: hf -->\n<!-- ### vocab_type: -->\n<!-- ### tags: nicoboss -->\nweighted/imatrix quants of https://huggingface.co/HuggingFaceTB/SmolVLM2-500M-Video-Instruct\n\n<!-- provided-files -->\nstatic quants are available at https://huggingface.co/mradermacher/SmolVLM2-500M-Video-Instruct-GGUF\n## Usage\n\nIf you are unsure how to use GGUF files, refer to one of [TheBloke's\nREADMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for\nmore details, including on how to concatenate multi-part files.\n\n## Provided Quants\n\n(sorted by size, not necessarily quality. 
IQ-quants are often preferable over similar sized non-IQ quants)\n\n| Link | Type | Size/GB | Notes |\n|:-----|:-----|--------:|:------|\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM2-500M-Video-Instruct-i1-GGUF/resolve/main/SmolVLM2-500M-Video-Instruct.i1-IQ1_S.gguf) | i1-IQ1_S | 0.3 | for the desperate |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM2-500M-Video-Instruct-i1-GGUF/resolve/main/SmolVLM2-500M-Video-Instruct.i1-IQ1_M.gguf) | i1-IQ1_M | 0.3 | mostly desperate |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM2-500M-Video-Instruct-i1-GGUF/resolve/main/SmolVLM2-500M-Video-Instruct.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.3 | |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM2-500M-Video-Instruct-i1-GGUF/resolve/main/SmolVLM2-500M-Video-Instruct.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.3 | |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM2-500M-Video-Instruct-i1-GGUF/resolve/main/SmolVLM2-500M-Video-Instruct.i1-IQ2_S.gguf) | i1-IQ2_S | 0.3 | |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM2-500M-Video-Instruct-i1-GGUF/resolve/main/SmolVLM2-500M-Video-Instruct.i1-IQ2_M.gguf) | i1-IQ2_M | 0.3 | |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM2-500M-Video-Instruct-i1-GGUF/resolve/main/SmolVLM2-500M-Video-Instruct.i1-Q2_K_S.gguf) | i1-Q2_K_S | 0.3 | very low quality |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM2-500M-Video-Instruct-i1-GGUF/resolve/main/SmolVLM2-500M-Video-Instruct.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.3 | lower quality |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM2-500M-Video-Instruct-i1-GGUF/resolve/main/SmolVLM2-500M-Video-Instruct.i1-IQ3_S.gguf) | i1-IQ3_S | 0.3 | beats Q3_K* |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM2-500M-Video-Instruct-i1-GGUF/resolve/main/SmolVLM2-500M-Video-Instruct.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.3 | |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM2-500M-Video-Instruct-i1-GGUF/resolve/main/SmolVLM2-500M-Video-Instruct.i1-Q2_K.gguf) | i1-Q2_K | 0.3 | 
IQ3_XXS probably better |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM2-500M-Video-Instruct-i1-GGUF/resolve/main/SmolVLM2-500M-Video-Instruct.i1-Q3_K_S.gguf) | i1-Q3_K_S | 0.3 | IQ3_XS probably better |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM2-500M-Video-Instruct-i1-GGUF/resolve/main/SmolVLM2-500M-Video-Instruct.i1-IQ3_M.gguf) | i1-IQ3_M | 0.4 | |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM2-500M-Video-Instruct-i1-GGUF/resolve/main/SmolVLM2-500M-Video-Instruct.i1-IQ4_XS.gguf) | i1-IQ4_XS | 0.4 | |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM2-500M-Video-Instruct-i1-GGUF/resolve/main/SmolVLM2-500M-Video-Instruct.i1-IQ4_NL.gguf) | i1-IQ4_NL | 0.4 | prefer IQ4_XS |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM2-500M-Video-Instruct-i1-GGUF/resolve/main/SmolVLM2-500M-Video-Instruct.i1-Q4_0.gguf) | i1-Q4_0 | 0.4 | fast, low quality |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM2-500M-Video-Instruct-i1-GGUF/resolve/main/SmolVLM2-500M-Video-Instruct.i1-Q3_K_M.gguf) | i1-Q3_K_M | 0.4 | IQ3_S probably better |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM2-500M-Video-Instruct-i1-GGUF/resolve/main/SmolVLM2-500M-Video-Instruct.i1-Q3_K_L.gguf) | i1-Q3_K_L | 0.4 | IQ3_M probably better |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM2-500M-Video-Instruct-i1-GGUF/resolve/main/SmolVLM2-500M-Video-Instruct.i1-Q4_1.gguf) | i1-Q4_1 | 0.4 | |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM2-500M-Video-Instruct-i1-GGUF/resolve/main/SmolVLM2-500M-Video-Instruct.i1-Q4_K_S.gguf) | i1-Q4_K_S | 0.4 | optimal size/speed/quality |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM2-500M-Video-Instruct-i1-GGUF/resolve/main/SmolVLM2-500M-Video-Instruct.i1-Q4_K_M.gguf) | i1-Q4_K_M | 0.4 | fast, recommended |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM2-500M-Video-Instruct-i1-GGUF/resolve/main/SmolVLM2-500M-Video-Instruct.i1-Q5_K_S.gguf) | i1-Q5_K_S | 0.4 | |\n| 
[GGUF](https://huggingface.co/mradermacher/SmolVLM2-500M-Video-Instruct-i1-GGUF/resolve/main/SmolVLM2-500M-Video-Instruct.i1-Q5_K_M.gguf) | i1-Q5_K_M | 0.4 | |\n| [GGUF](https://huggingface.co/mradermacher/SmolVLM2-500M-Video-Instruct-i1-GGUF/resolve/main/SmolVLM2-500M-Video-Instruct.i1-Q6_K.gguf) | i1-Q6_K | 0.5 | practically like static Q6_K |\n\nHere is a handy graph by ikawrakow comparing some lower-quality quant\ntypes (lower is better):\n\n\n\nAnd here are Artefact2's thoughts on the matter:\nhttps://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9\n\n## FAQ / Model Request\n\nSee https://huggingface.co/mradermacher/model_requests for some answers to\nquestions you might have and/or if you want some other model quantized.\n\n## Thanks\n\nI thank my company, [nethype GmbH](https://www.nethype.de/), for letting\nme use its servers and providing upgrades to my workstation to enable\nthis work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.\n\n<!-- end -->\n",
"metadata": "\"N/A\"",
"depth": 2,
"children": [],
"children_count": 0,
"adapters": [],
"adapters_count": 0,
"quantized": [],
"quantized_count": 0,
"merges": [],
"merges_count": 0,
"total_derivatives": 0,
"spaces": [],
"spaces_count": 0,
"parents": [
"HuggingFaceTB/SmolVLM2-500M-Video-Instruct"
],
"base_model": "mradermacher/SmolVLM2-500M-Video-Instruct-i1-GGUF",
"base_model_relation": "base"
},
{
"model_id": "second-state/SmolVLM2-500M-Video-Instruct-GGUF",
"gated": "unknown",
"card": "---\nbase_model: HuggingFaceTB/SmolVLM2-500M-Video-Instruct\nlibrary_name: transformers\nlicense: apache-2.0\nmodel_creator: HuggingFaceTB\nmodel_name: SmolVLM2-500M-Video-Instruct\nquantized_by: Second State Inc.\npipeline_tag: image-text-to-text\nlanguage:\n- en\n---\n\n<!-- header start -->\n<!-- 200823 -->\n<div style=\"width: auto; margin-left: auto; margin-right: auto\">\n<img src=\"https://github.com/LlamaEdge/LlamaEdge/raw/dev/assets/logo.svg\" style=\"width: 100%; min-width: 400px; display: block; margin: auto;\">\n</div>\n<hr style=\"margin-top: 1.0em; margin-bottom: 1.0em;\">\n<!-- header end -->\n\n# SmolVLM2-500M-Video-Instruct-GGUF\n\n## Original Model\n\n[HuggingFaceTB/SmolVLM2-500M-Video-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM2-500M-Video-Instruct)\n\n## Run with LlamaEdge\n\n- LlamaEdge version: [v0.21.0](https://github.com/LlamaEdge/LlamaEdge/releases/tag/0.21.0) and above\n\n- Prompt template\n\n - Prompt type: `smol-vision`\n\n - Prompt string\n\n ```text\n <|im_start|>\n User: {user_message_1}<image>\n Assistant: {assistant_message_1}\n User: {user_message_2}<image>\n Assistant:\n ```\n\n- Context size: `2048`\n\n- Run as LlamaEdge service\n\n ```bash\n wasmedge --dir .:. 
--nn-preload default:GGML:AUTO:SmolVLM2-500M-Video-Instruct-Q5_K_M.gguf \\\n llama-api-server.wasm \\\n --prompt-template smol-vision \\\n --llava-mmproj SmolVLM2-500M-Video-Instruct-mmproj-f16.gguf \\\n --model-name SmolVLM2-500M-Video-Instruct \\\n --ctx-size 2048\n ```\n\n## Quantized GGUF Models\n\n| Name | Quant method | Bits | Size | Use case |\n| ---- | ---- | ---- | ---- | ----- |\n| [SmolVLM2-500M-Video-Instruct-Q2_K.gguf](https://huggingface.co/second-state/SmolVLM2-500M-Video-Instruct-GGUF/blob/main/SmolVLM2-500M-Video-Instruct-Q2_K.gguf) | Q2_K | 2 | 245 MB| smallest, significant quality loss - not recommended for most purposes |\n| [SmolVLM2-500M-Video-Instruct-Q3_K_L.gguf](https://huggingface.co/second-state/SmolVLM2-500M-Video-Instruct-GGUF/blob/main/SmolVLM2-500M-Video-Instruct-Q3_K_L.gguf) | Q3_K_L | 3 | 273 MB| small, substantial quality loss |\n| [SmolVLM2-500M-Video-Instruct-Q3_K_M.gguf](https://huggingface.co/second-state/SmolVLM2-500M-Video-Instruct-GGUF/blob/main/SmolVLM2-500M-Video-Instruct-Q3_K_M.gguf) | Q3_K_M | 3 | 261 MB| very small, high quality loss |\n| [SmolVLM2-500M-Video-Instruct-Q3_K_S.gguf](https://huggingface.co/second-state/SmolVLM2-500M-Video-Instruct-GGUF/blob/main/SmolVLM2-500M-Video-Instruct-Q3_K_S.gguf) | Q3_K_S | 3 | 245 MB| very small, high quality loss |\n| [SmolVLM2-500M-Video-Instruct-Q4_0.gguf](https://huggingface.co/second-state/SmolVLM2-500M-Video-Instruct-GGUF/blob/main/SmolVLM2-500M-Video-Instruct-Q4_0.gguf) | Q4_0 | 4 | 256 MB| legacy; small, very high quality loss - prefer using Q3_K_M |\n| [SmolVLM2-500M-Video-Instruct-Q4_K_M.gguf](https://huggingface.co/second-state/SmolVLM2-500M-Video-Instruct-GGUF/blob/main/SmolVLM2-500M-Video-Instruct-Q4_K_M.gguf) | Q4_K_M | 4 | 303 MB| medium, balanced quality - recommended |\n| [SmolVLM2-500M-Video-Instruct-Q4_K_S.gguf](https://huggingface.co/second-state/SmolVLM2-500M-Video-Instruct-GGUF/blob/main/SmolVLM2-500M-Video-Instruct-Q4_K_S.gguf) | Q4_K_S | 4 | 293 MB| small, 
greater quality loss |\n| [SmolVLM2-500M-Video-Instruct-Q5_0.gguf](https://huggingface.co/second-state/SmolVLM2-500M-Video-Instruct-GGUF/blob/main/SmolVLM2-500M-Video-Instruct-Q5_0.gguf) | Q5_0 | 5 | 301 MB| legacy; medium, balanced quality - prefer using Q4_K_M |\n| [SmolVLM2-500M-Video-Instruct-Q5_K_M.gguf](https://huggingface.co/second-state/SmolVLM2-500M-Video-Instruct-GGUF/blob/main/SmolVLM2-500M-Video-Instruct-Q5_K_M.gguf) | Q5_K_M | 5 | 326 MB| large, very low quality loss - recommended |\n| [SmolVLM2-500M-Video-Instruct-Q5_K_S.gguf](https://huggingface.co/second-state/SmolVLM2-500M-Video-Instruct-GGUF/blob/main/SmolVLM2-500M-Video-Instruct-Q5_K_S.gguf) | Q5_K_S | 5 | 319 MB| large, low quality loss - recommended |\n| [SmolVLM2-500M-Video-Instruct-Q6_K.gguf](https://huggingface.co/second-state/SmolVLM2-500M-Video-Instruct-GGUF/blob/main/SmolVLM2-500M-Video-Instruct-Q6_K.gguf) | Q6_K | 6 | 418 MB| very large, extremely low quality loss |\n| [SmolVLM2-500M-Video-Instruct-Q8_0.gguf](https://huggingface.co/second-state/SmolVLM2-500M-Video-Instruct-GGUF/blob/main/SmolVLM2-500M-Video-Instruct-Q8_0.gguf) | Q8_0 | 8 | 437 MB| very large, extremely low quality loss - not recommended |\n| [SmolVLM2-500M-Video-Instruct-f16.gguf](https://huggingface.co/second-state/SmolVLM2-500M-Video-Instruct-GGUF/blob/main/SmolVLM2-500M-Video-Instruct-f16.gguf) | f16 | 16 | 820 MB| |\n| [SmolVLM2-500M-Video-Instruct-mmproj-f16.gguf](https://huggingface.co/second-state/SmolVLM2-500M-Video-Instruct-GGUF/blob/main/SmolVLM2-500M-Video-Instruct-mmproj-f16.gguf) | f16 | 16 | 199 MB| |\n\n*Quantized with llama.cpp b5501*\n",
"metadata": "\"N/A\"",
"depth": 2,
"children": [],
"children_count": 0,
"adapters": [],
"adapters_count": 0,
"quantized": [],
"quantized_count": 0,
"merges": [],
"merges_count": 0,
"total_derivatives": 0,
"spaces": [],
"spaces_count": 0,
"parents": [
"HuggingFaceTB/SmolVLM2-500M-Video-Instruct"
],
"base_model": null,
"base_model_relation": null
},
{
"model_id": "gaianet/SmolVLM2-500M-Video-Instruct-GGUF",
"gated": "unknown",
"card": "---\nbase_model: HuggingFaceTB/SmolVLM2-500M-Video-Instruct\nlibrary_name: transformers\nlicense: apache-2.0\nmodel_creator: HuggingFaceTB\nmodel_name: SmolVLM2-500M-Video-Instruct\nquantized_by: Second State Inc.\npipeline_tag: image-text-to-text\nlanguage:\n- en\n---\n\n# SmolVLM2-500M-Video-Instruct-GGUF\n\n## Original Model\n\n[HuggingFaceTB/SmolVLM2-500M-Video-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM2-500M-Video-Instruct)\n\n## Run with Gaianet\n\n**Prompt template:**\n\nprompt template: `smol-vision`\n\n**Context size:**\n\nchat_ctx_size: `2048`\n\n**Run with GaiaNet:**\n\n- Quick start: https://docs.gaianet.ai/node-guide/quick-start\n\n- Customize your node: https://docs.gaianet.ai/node-guide/customize\n\n*Quantized with llama.cpp b5501*\n",
"metadata": "\"N/A\"",
"depth": 2,
"children": [],
"children_count": 0,
"adapters": [],
"adapters_count": 0,
"quantized": [],
"quantized_count": 0,
"merges": [],
"merges_count": 0,
"total_derivatives": 0,
"spaces": [],
"spaces_count": 0,
"parents": [
"HuggingFaceTB/SmolVLM2-500M-Video-Instruct"
],
"base_model": null,
"base_model_relation": null
},
{
"model_id": "DevQuasar/HuggingFaceTB.SmolVLM2-500M-Video-Instruct-GGUF",
"gated": "unknown",
"card": "---\nbase_model:\n- HuggingFaceTB/SmolVLM2-500M-Video-Instruct\npipeline_tag: image-text-to-text\n---\n\n[<img src=\"https://raw.githubusercontent.com/csabakecskemeti/devquasar/main/dq_logo_black-transparent.png\" width=\"200\"/>](https://devquasar.com)\n\n'Make knowledge free for everyone'\n\nQuantized version of: [HuggingFaceTB/SmolVLM2-500M-Video-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM2-500M-Video-Instruct)\n<a href='https://ko-fi.com/L4L416YX7C' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi6.png?v=6' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a>",
"metadata": "\"N/A\"",
"depth": 2,
"children": [],
"children_count": 0,
"adapters": [],
"adapters_count": 0,
"quantized": [],
"quantized_count": 0,
"merges": [],
"merges_count": 0,
"total_derivatives": 0,
"spaces": [],
"spaces_count": 0,
"parents": [
"HuggingFaceTB/SmolVLM2-500M-Video-Instruct"
],
"base_model": null,
"base_model_relation": null
},
{
"model_id": "AXERA-TECH/SmolVLM2-500M-Video-Instruct",
"gated": "unknown",
"card": "---\nlicense: bsd-3-clause\nlanguage:\n - en\n - zh\nbase_model:\n - HuggingFaceTB/SmolVLM2-500M-Video-Instruct\npipeline_tag: visual-question-answering\ntags:\n - HuggingFaceTB\n - SmolVLM2-500M-Video-Instruct\n---\n\n# SmolVLM2-500M-Video-Instruct-Int8\n\nThis version of SmolVLM2-500M-Video-Instruct has been converted to run on the Axera NPU using **w8a16** quantization.\n\nCompatible with Pulsar2 version: 4.0\n\n## Convert tools links:\n\nFor those who are interested in model conversion, you can try to export axmodel through the original repo:\n- https://huggingface.co/HuggingFaceTB/SmolVLM2-500M-Video-Instruct\n\n<!-- - [Github for SmolVLM2-500M-Video-Instruct.axera](https://github.com/AXERA-TECH/SmolVLM2-500M-Video-Instruct.axera) -->\n- [Pulsar2 Link, How to Convert LLM from Huggingface to axmodel](https://pulsar2-docs.readthedocs.io/en/latest/appendix/build_llm.html)\n\n## Support Platform\n- AX650\n - [M4N-Dock(\u7231\u82af\u6d3ePro)](https://wiki.sipeed.com/hardware/zh/maixIV/m4ndock/m4ndock.html)\n\n<!-- ## TODO Model infer time -->\n\n## How to use\n\nDownload all files from this repository to the device.\n\n**Using AX650 Board**\n\n```bash\nai@ai-bj ~/yongqiang/SmolVLM2-500M-Video-Instruct $ tree -L 1\n.\n\u251c\u2500\u2500 assets\n\u251c\u2500\u2500 embeds\n\u251c\u2500\u2500 infer_axmodel.py\n\u251c\u2500\u2500 README.md\n\u251c\u2500\u2500 smolvlm2_axmodel\n\u251c\u2500\u2500 smolvlm2_tokenizer\n\u2514\u2500\u2500 vit_mdoel\n\n5 directories, 2 files\n```\n\n#### Inference with AX650 Host, such as M4N-Dock(\u7231\u82af\u6d3ePro) or AX650N DEMO Board\n\n**Multimodal Understanding**\n\ninput image\n\n\n\ninput text:\n\n```\nCan you describe this image?\n```\n\nlog information:\n\n```bash\nai@ai-bj ~/yongqiang/SmolVLM2-500M-Video-Instruct $ python3 infer_axmodel.py\n\ninput prompt: Can you describe this image?\n\nanswer >> The image captures a close-up view of a pink flower, prominently featuring a bumblebee. 
The bumblebee, with its black and yellow stripes, is in the center of the frame, its body slightly tilted to the left. The flower, with its petals fully spread, is the main subject of the image. The background is blurred, drawing focus to the flower and the bumblebee. The blurred background suggests a garden or a field, providing a sense of depth to the image. The^@ colors in the image are vibrant, with the pink of the flower contrasting against the green of the leaves and the brown of the stems. The image does not provide enough detail to confidently identify the specific location or landmark referred to as \"sa_16743\".\n```",
"metadata": "\"N/A\"",
"depth": 2,
"children": [],
"children_count": 0,
"adapters": [],
"adapters_count": 0,
"quantized": [],
"quantized_count": 0,
"merges": [],
"merges_count": 0,
"total_derivatives": 0,
"spaces": [],
"spaces_count": 0,
"parents": [
"HuggingFaceTB/SmolVLM2-500M-Video-Instruct"
],
"base_model": null,
"base_model_relation": null
}
]
}