modelId: string
author: string
last_modified: timestamp[us, tz=UTC]
downloads: int64
likes: int64
library_name: string
tags: list
pipeline_tag: string
createdAt: timestamp[us, tz=UTC]
card: string
VIDEOS-two-wolf-one-girl-Viral-Video/CLIP.VIDEO.two.wolf.one.girl.Video.Tutorial.Official
VIDEOS-two-wolf-one-girl-Viral-Video
2025-06-16T04:24:16Z
0
0
null
[ "region:us" ]
null
2025-06-16T04:20:01Z
<animated-image data-catalyst=""><a href="https://sexleakedviral.com/new-leaked-video/?news-viral-video" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
Henil1/vit-axavision-2-ChestX-BioGPT
Henil1
2025-06-16T04:24:09Z
0
0
transformers
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-16T04:24:07Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Henil1/vit-axavision-2-ChestX
Henil1
2025-06-16T04:24:06Z
8
0
transformers
[ "transformers", "safetensors", "vision-encoder-decoder", "image-text-to-text", "image-captioning", "vision-language", "vit-gpt2", "chest-xray", "healthcare", "axamine", "finetuned", "nlpconnect/vit-gpt2-image-captioning", "endpoints_compatible", "region:us" ]
image-text-to-text
2025-06-14T18:50:44Z
--- library_name: transformers tags: - image-captioning - vision-language - vit-gpt2 - chest-xray - healthcare - axamine - finetuned - nlpconnect/vit-gpt2-image-captioning --- # Vit-Axavision-2-ChestX 🩺 This model is a fine-tuned version of [`nlpconnect/vit-gpt2-image-captioning`](https://huggingface.co/nlpconnect/vit-gpt2-image-captioning) on a chest X-ray dataset. It is developed as part of the Axamine AI research efforts to explore medical vision-language applications. The model takes chest X-ray images as input and generates descriptive captions that may help in automated reporting, healthcare research, or AI-assisted diagnostics. --- ## Model Details - **Base model:** nlpconnect/vit-gpt2-image-captioning - **Architecture:** VisionEncoderDecoderModel (ViT encoder + GPT2 decoder) - **Fine-tuned on dataset:** [Shrey-1329/cxiu_hf_dataset](https://huggingface.co/datasets/Shrey-1329/cxiu_hf_dataset) - **Model size:** ~250M parameters - **Developed by:** Henilsinh Raj (Axamine AI) --- ## Use Cases ### Intended Use - Chest X-ray image captioning - Healthcare research - Medical AI experiments - Educational purposes ### Limitations - This model does **not** provide medical diagnosis. - Captions are purely descriptive and may not fully reflect clinical accuracy. --- ## Usage Here's how you can use the model for inference: ```python from transformers import VisionEncoderDecoderModel, ViTImageProcessor, AutoTokenizer from PIL import Image import torch import requests # Load model model_id = "Henil1/vit-axavision-2-ChestX" model = VisionEncoderDecoderModel.from_pretrained(model_id) feature_extractor = ViTImageProcessor.from_pretrained(model_id) tokenizer = AutoTokenizer.from_pretrained(model_id) device = torch.device("cuda" if torch.cuda.is_available() else "cpu") model.to(device) # Preprocess image image = Image.open("your_image_path.jpg").convert("RGB") pixel_values = feature_extractor(images=image, return_tensors="pt").pixel_values.to(device) # Generate caption output_ids = model.generate(pixel_values, max_length=64, num_beams=4) caption = tokenizer.decode(output_ids[0], skip_special_tokens=True) print("Generated caption:", caption) ``` --- ## Citation If you use this model, please cite: ```bibtex @misc{henil2025axavision, author = {Henilsinh Raj}, title = {Vit-Axavision-2-ChestX: Vision-Language Model for Chest X-Ray Captioning}, year = {2025}, publisher = {Hugging Face}, url = {https://huggingface.co/Henil1/vit-axavision-2-ChestX} } ```
MinaMila/gemma_2b_unlearned_2nd_5e-7_1.0_0.05_0.05_0.05_epoch1
MinaMila
2025-06-16T04:20:35Z
0
0
transformers
[ "transformers", "safetensors", "gemma2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-16T04:18:50Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
diamandislabii/ViT-Glial-Tumor-Classification
diamandislabii
2025-06-16T04:12:45Z
0
0
null
[ "region:us" ]
null
2025-04-14T01:13:12Z
# ViT-Glial-Tumor-Classification This repository, developed at the **Diamandis Lab**, implements a Vision Transformer (ViT) model for classifying histopathological images of glial tumors into three major categories: Glioblastoma (GBM), Astrocytoma (Astros), and Oligodendroglioma (Oligos). The pipeline includes dataset preparation, balanced sampling, model fine-tuning, training/validation with Weights & Biases logging, and evaluation with confusion matrix visualization. ## Dataset - Histology images organized into folders by tumor type: `GBM`, `Astros`, and `Oligos`. - For efficient training, a **balanced subset of 18,000 images** (6,000 per class) was selected for fine-tuning. ## Model - Pretrained Vision Transformer from [Kaiko AI](https://huggingface.co/1aurent/vit_base_patch16_224.kaiko_ai_towards_large_pathology_fms). - Classification head fine-tuned on the glial tumor dataset. - The last 5 transformer blocks are also fine-tuned for improved learning. ## Training - Stratified train/val/test split (60/20/20). - Training includes W&B logging and model checkpointing. - Early stopping is used to avoid overfitting. --- ## Results The model achieved the following classification performance: - **Best Validation Accuracy**: **94.92%** (Epoch 2) - **Final Test Accuracy**: **94.64%**
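The card above describes the fine-tuning setup (Kaiko pathology ViT backbone, a new 3-class head, and the last 5 transformer blocks unfrozen) but does not include code. The snippet below is a minimal sketch of that setup using timm and PyTorch, not the lab's actual training script; the optimizer, learning rate, and loss are illustrative assumptions.

```python
# Minimal sketch of the fine-tuning setup described above (assumptions noted inline).
import timm
import torch
from torch import nn

# Load the pretrained Kaiko pathology ViT and attach a fresh 3-class head (GBM, Astros, Oligos).
model = timm.create_model(
    "hf-hub:1aurent/vit_base_patch16_224.kaiko_ai_towards_large_pathology_fms",
    pretrained=True,
    num_classes=3,
)

# Freeze everything, then unfreeze the classification head and the last 5 transformer blocks.
for p in model.parameters():
    p.requires_grad = False
for p in model.get_classifier().parameters():
    p.requires_grad = True
for block in model.blocks[-5:]:
    for p in block.parameters():
        p.requires_grad = True

# Optimizer and loss for the trainable parameters only; hyperparameters are illustrative.
optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)
criterion = nn.CrossEntropyLoss()
```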
original-shruthi-narayanan-viral-video/wATCH.shruthi.narayanan.viral.video.original
original-shruthi-narayanan-viral-video
2025-06-16T04:05:25Z
0
0
null
[ "region:us" ]
null
2025-06-16T04:05:09Z
[![image/gif](https://cdn-uploads.huggingface.co/production/uploads/683d278851706d12b2cbc4eb/OMYmxOdS-sy4ZshNCnNav.gif)](https://t.co/P8Ex9FtH0g)
sonnykoalu/xdf
sonnykoalu
2025-06-16T04:02:41Z
0
0
null
[ "license:other", "region:us" ]
null
2025-06-16T04:01:45Z
--- license: other license_name: none license_link: LICENSE ---
dgambettaphd/M_llm2_run2_gen8_WXS_doc1000_synt64_lr1e-04_acm_FRESH
dgambettaphd
2025-06-16T03:52:07Z
0
0
transformers
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-16T03:51:55Z
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Mezzo-Fun-Viral-Video/VIDEO.mezzo.fun.Viral.Video.Tutorial.Official
Mezzo-Fun-Viral-Video
2025-06-16T03:50:52Z
0
0
null
[ "region:us" ]
null
2025-06-16T03:48:42Z
<a href="https://t.co/98E3uGhPfJ" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="WATCH Videos" data-canonical-src="https://i.imgur.com/dJHk4Zq.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
sunblaze-ucb/OLMo-2-7B-SFT-GRPO-MATH-1EPOCH
sunblaze-ucb
2025-06-16T03:47:48Z
0
0
null
[ "safetensors", "olmo2", "text-generation", "conversational", "en", "dataset:math", "arxiv:2505.19590", "base_model:allenai/OLMo-2-1124-7B-SFT", "base_model:finetune:allenai/OLMo-2-1124-7B-SFT", "license:apache-2.0", "region:us" ]
text-generation
2025-06-16T03:38:17Z
--- base_model: - allenai/OLMo-2-1124-7B-SFT license: apache-2.0 datasets: - math metrics: - accuracy pipeline_tag: text-generation language: - en --- # OLMo-2-7B-SFT-GRPO-MATH-1EPOCH **Description:** A GRPO-fine-tuned version of allenai/OLMo-2-1124-7B-SFT trained on the MATH dataset. --- ## Citation ```bibtex @article{zhao2025learning, title={Learning to Reason without External Rewards}, author={Zhao, Xuandong and Kang, Zhewei and Feng, Aosong and Levine, Sergey and Song, Dawn}, journal={arXiv preprint arXiv:2505.19590}, year={2025} } ```
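The card above does not include a usage snippet. Below is a minimal inference sketch with Hugging Face Transformers; it assumes a recent transformers release with OLMo 2 support and that the repository ships a chat template (the tags include "conversational"), so treat it as illustrative rather than the authors' recommended setup.

```python
# Hedged inference sketch (not from the model card); prompt and decoding settings are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "sunblaze-ucb/OLMo-2-7B-SFT-GRPO-MATH-1EPOCH"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)
model.to("cuda" if torch.cuda.is_available() else "cpu")

messages = [{"role": "user", "content": "Compute the sum of the first 10 positive integers."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][inputs.shape[-1]:], skip_special_tokens=True))
```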
maily101102/translution
maily101102
2025-06-16T03:45:45Z
0
0
null
[ "safetensors", "gguf", "llama", "unsloth", "license:llama3.1", "endpoints_compatible", "region:us" ]
null
2025-06-16T03:07:51Z
--- license: llama3.1 tags: - unsloth ---
New-tutorial-kayla-viral-video/FULL.VIDEO.kayla.Viral.Video.Tutorial.Official
New-tutorial-kayla-viral-video
2025-06-16T03:44:39Z
0
0
null
[ "region:us" ]
null
2025-06-16T03:44:21Z
<animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
MinaMila/phi3_unlearned_2nd_5e-7_1.0_0.25_0.5_0.5_epoch1
MinaMila
2025-06-16T03:43:37Z
0
0
transformers
[ "transformers", "safetensors", "phi3", "text-generation", "conversational", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-16T03:41:40Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
MinaMila/gemma_2b_unlearned_2nd_5e-7_1.0_0.05_0.05_0.5_epoch1
MinaMila
2025-06-16T03:32:44Z
0
0
transformers
[ "transformers", "safetensors", "gemma2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-16T03:30:58Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
MinaMila/phi3_unlearned_2nd_5e-7_1.0_0.25_0.75_0.05_epoch2
MinaMila
2025-06-16T03:22:57Z
0
0
transformers
[ "transformers", "safetensors", "phi3", "text-generation", "conversational", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-16T03:21:04Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
MinaMila/gemma_2b_unlearned_2nd_5e-7_1.0_0.05_0.05_0.75_epoch1
MinaMila
2025-06-16T03:16:45Z
0
0
transformers
[ "transformers", "safetensors", "gemma2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-16T03:14:58Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
18-video-filtrado-anahi-antonella-video/Viral.Ver.video.filtrado.anahi.antonella.completo.anahi.antonella.filtrado.clip
18-video-filtrado-anahi-antonella-video
2025-06-16T03:07:25Z
0
0
null
[ "region:us" ]
null
2025-06-16T03:06:57Z
<animated-image data-catalyst=""><a href="https://sexleakedviral.com/new-leaked-video/?news-viral-video" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
18-video-filtrado-anahi-antonella-video/Ver.video.filtrado.anahi.antonella.video.completo.anahi.antonella.filtrado.clip
18-video-filtrado-anahi-antonella-video
2025-06-16T03:03:01Z
0
0
null
[ "region:us" ]
null
2025-06-16T03:02:39Z
<animated-image data-catalyst=""><a href="https://sexleakedviral.com/new-leaked-video/?news-viral-video" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
Daxiao123/test_model
Daxiao123
2025-06-16T02:58:43Z
0
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-06-16T02:58:13Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mlx-community/llm-jp-3.1-13b-instruct4-4bit
mlx-community
2025-06-16T02:51:26Z
0
0
mlx
[ "mlx", "safetensors", "llama", "text-generation", "conversational", "en", "ja", "base_model:llm-jp/llm-jp-3.1-13b-instruct4", "base_model:quantized:llm-jp/llm-jp-3.1-13b-instruct4", "license:apache-2.0", "4-bit", "region:us" ]
text-generation
2025-06-16T02:39:27Z
--- license: apache-2.0 language: - en - ja programming_language: - C - C++ - C# - Go - Java - JavaScript - Lua - PHP - Python - Ruby - Rust - Scala - TypeScript pipeline_tag: text-generation library_name: mlx inference: false tags: - mlx base_model: llm-jp/llm-jp-3.1-13b-instruct4 --- # mlx-community/llm-jp-3.1-13b-instruct4-4bit This model [mlx-community/llm-jp-3.1-13b-instruct4-4bit](https://huggingface.co/mlx-community/llm-jp-3.1-13b-instruct4-4bit) was converted to MLX format from [llm-jp/llm-jp-3.1-13b-instruct4](https://huggingface.co/llm-jp/llm-jp-3.1-13b-instruct4) using mlx-lm version **0.24.1**. ## Use with mlx ```bash pip install mlx-lm ``` ```python from mlx_lm import load, generate model, tokenizer = load("mlx-community/llm-jp-3.1-13b-instruct4-4bit") prompt = "hello" if tokenizer.chat_template is not None: messages = [{"role": "user", "content": prompt}] prompt = tokenizer.apply_chat_template( messages, add_generation_prompt=True ) response = generate(model, tokenizer, prompt=prompt, verbose=True) ```
glif-loradex-trainer/Angelo-ec24_d00d
glif-loradex-trainer
2025-06-16T02:47:57Z
0
0
diffusers
[ "diffusers", "text-to-image", "template:sd-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:finetune:black-forest-labs/FLUX.1-dev", "license:other", "region:us", "flux", "lora", "base_model:adapter:black-forest-labs/FLUX.1-dev" ]
text-to-image
2025-06-16T02:47:43Z
--- tags: - diffusers - text-to-image - template:sd-lora - base_model:black-forest-labs/FLUX.1-dev - base_model:finetune:black-forest-labs/FLUX.1-dev - license:other - region:us - flux - lora widget: - output: url: samples/1750041998772__000000500_0.jpg text: d00d columns - output: url: samples/1750042024106__000000500_1.jpg text: d00d top - output: url: samples/1750042049468__000000500_2.jpg text: d00d messy base_model: black-forest-labs/FLUX.1-dev trigger: "d00d" instance_prompt: "d00d" license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md --- # d00d Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit) under the [Glif Loradex program](https://huggingface.co/glif-loradex-trainer) by [Glif](https://glif.app) user `Angelo-ec24`. <Gallery /> ## Trigger words You should use `d00d` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/glif-loradex-trainer/Angelo-ec24_d00d/tree/main) them in the Files & versions tab. ## License This model is licensed under the [flux-1-dev-non-commercial-license](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).
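The card above gives the trigger word and points to the Safetensors weights but does not show loading code. A minimal sketch with diffusers follows; it assumes you have accepted the gated FLUX.1-dev license and have enough GPU memory, and the generation parameters are illustrative.

```python
# Hedged sketch (not part of the original card): load the LoRA on top of FLUX.1-dev with diffusers.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("glif-loradex-trainer/Angelo-ec24_d00d")

# Prompts should include the trigger word `d00d`, as noted above.
image = pipe("d00d columns", num_inference_steps=28, guidance_scale=3.5).images[0]
image.save("d00d_columns.png")
```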
gradientrouting-spar/mc14_badmed_kl_div_dsd-5_msd-5_beta_kl-3_seed_1
gradientrouting-spar
2025-06-16T02:35:58Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-16T02:35:46Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
MinaMila/gemma_2b_unlearned_2nd_5e-7_1.0_0.05_0.15_0.25_epoch1
MinaMila
2025-06-16T02:28:54Z
0
0
transformers
[ "transformers", "safetensors", "gemma2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-16T02:27:05Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
johngreendr1/510c9729-38c6-4879-b553-91ed355f29a4
johngreendr1
2025-06-16T02:27:28Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:defog/sqlcoder-7b-2", "base_model:adapter:defog/sqlcoder-7b-2", "region:us" ]
null
2025-06-15T20:53:07Z
--- base_model: defog/sqlcoder-7b-2 library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.15.1
semtwo/kobart-wikipedia-qa
semtwo
2025-06-16T02:25:46Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-16T02:25:40Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
GeorgeGali06/mi-super-modelo2
GeorgeGali06
2025-06-16T02:24:10Z
0
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-base-cased", "base_model:finetune:google-bert/bert-base-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-06-16T02:07:04Z
---
library_name: transformers
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: mi-super-modelo2
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# mi-super-modelo2

This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 1.5339
- Accuracy: 0.275

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 1

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.5229        | 0.5   | 5    | 1.5633          | 0.225    |
| 1.6001        | 1.0   | 10   | 1.5339          | 0.275    |

### Framework versions

- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
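As a quick sanity check, the checkpoint can be loaded with the standard `transformers` text-classification pipeline. This is only a minimal sketch: the repository id is taken from this entry, and the label set and intended input domain are not documented in the card, so the printed labels may be generic placeholders.

```python
from transformers import pipeline

# Minimal usage sketch; assumes the Hub repo contains the fine-tuned
# weights together with its tokenizer and config.
classifier = pipeline(
    "text-classification",
    model="GeorgeGali06/mi-super-modelo2",
)

# The card does not document the label names, so outputs may appear as
# LABEL_0 ... LABEL_N rather than human-readable classes.
print(classifier("Example sentence to classify."))
```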
fangcaotank/task-10-microsoft-Phi-3.5-mini-instruct
fangcaotank
2025-06-16T02:24:03Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:microsoft/Phi-3.5-mini-instruct", "base_model:adapter:microsoft/Phi-3.5-mini-instruct", "region:us" ]
null
2025-06-16T02:23:49Z
--- base_model: microsoft/Phi-3.5-mini-instruct library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.13.2
MinaMila/phi3_unlearned_2nd_5e-7_1.0_0.25_0.75_0.75_epoch1
MinaMila
2025-06-16T02:22:04Z
0
0
transformers
[ "transformers", "safetensors", "phi3", "text-generation", "conversational", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-16T02:20:12Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
namdp-ptit/ViRanker
namdp-ptit
2025-06-16T02:21:36Z
1324
14
transformers
[ "transformers", "safetensors", "xlm-roberta", "text-classification", "cross-encoder", "rerank", "vi", "base_model:BAAI/bge-m3", "base_model:finetune:BAAI/bge-m3", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-08-14T02:58:28Z
---
language:
- vi
license: apache-2.0
library_name: transformers
tags:
- transformers
- cross-encoder
- rerank
pipeline_tag: text-classification
widget:
- text: tỉnh nào có diện tích lớn nhất việt nam
  output:
  - label: nghệ an có diện tích lớn nhất việt nam
    score: 0.99999
  - label: bắc ninh có diện tích nhỏ nhất việt nam
    score: 0.0001
base_model:
- BAAI/bge-m3
---

# Reranker

* [Usage](#usage)
  * [Using FlagEmbedding](#using-flagembedding)
  * [Using Huggingface transformers](#using-huggingface-transformers)
* [Fine-tune](#fine-tune)
  * [Data format](#data-format)
* [Performance](#performance)
* [Contact](#contact)
* [Support The Project](#support-the-project)
* [Citation](#citation)

Unlike an embedding model, a reranker takes a question and a document as input and directly outputs a similarity score instead of an embedding. You can get a relevance score by feeding a query and a passage to the reranker, and the score can be mapped to a float value in [0, 1] with a sigmoid function.

## Usage

### Using FlagEmbedding

```
pip install -U FlagEmbedding
```

Get relevance scores (higher scores indicate more relevance):

```python
from FlagEmbedding import FlagReranker

reranker = FlagReranker('namdp-ptit/ViRanker', use_fp16=True)  # Setting use_fp16 to True speeds up computation with a slight performance degradation

score = reranker.compute_score(['ai là vị vua cuối cùng của việt nam',
                                'vua bảo đại là vị vua cuối cùng của nước ta'])
print(score)  # 13.71875

# You can map the scores into 0-1 by setting "normalize=True", which applies a sigmoid function to the score
score = reranker.compute_score(['ai là vị vua cuối cùng của việt nam',
                                'vua bảo đại là vị vua cuối cùng của nước ta'],
                               normalize=True)
print(score)  # 0.99999889840464

scores = reranker.compute_score(
    [
        ['ai là vị vua cuối cùng của việt nam', 'vua bảo đại là vị vua cuối cùng của nước ta'],
        ['ai là vị vua cuối cùng của việt nam', 'lý nam đế là vị vua đầu tiên của nước ta']
    ]
)
print(scores)  # [13.7265625, -8.53125]

# You can map the scores into 0-1 by setting "normalize=True", which applies a sigmoid function to the score
scores = reranker.compute_score(
    [
        ['ai là vị vua cuối cùng của việt nam', 'vua bảo đại là vị vua cuối của nước ta'],
        ['ai là vị vua cuối cùng của việt nam', 'lý nam đế là vị vua đầu tiên của nước ta']
    ],
    normalize=True
)
print(scores)  # [0.99999889840464, 0.00019716942196222918]
```

### Using Huggingface transformers

```
pip install -U transformers
```

Get relevance scores (higher scores indicate more relevance):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('namdp-ptit/ViRanker')
model = AutoModelForSequenceClassification.from_pretrained('namdp-ptit/ViRanker')
model.eval()

pairs = [
    ['ai là vị vua cuối cùng của việt nam', 'vua bảo đại là vị vua cuối cùng của nước ta'],
    ['ai là vị vua cuối cùng của việt nam', 'lý nam đế là vị vua đầu tiên của nước ta']
]
with torch.no_grad():
    inputs = tokenizer(pairs, padding=True, truncation=True, return_tensors='pt', max_length=512)
    scores = model(**inputs, return_dict=True).logits.view(-1).float()
    print(scores)
```

## Fine-tune

### Data Format

Train data should be a JSON file where each line is a dict like this:

```
{"query": str, "pos": List[str], "neg": List[str]}
```

`query` is the query, `pos` is a list of positive texts, and `neg` is a list of negative texts. If you have no negative texts for a query, you can randomly sample some from the entire corpus as negatives. In addition, for each query in the training data, we used LLMs to generate a hard negative by asking the LLM to create a document whose content is the opposite of one of the documents in `pos`.

## Performance

Below is a comparison of the results we achieved against some other pre-trained cross-encoders on the [mMARCO Passage Reranking - Vi - Dev](https://huggingface.co/datasets/unicamp-dl/mmarco) dataset.

| Model Name | NDCG@3 | MRR@3 | NDCG@5 | MRR@5 | NDCG@10 | MRR@10 |
|------------|:-------|:------|:-------|:------|:--------|:-------|
| [namdp-ptit/ViRanker](https://huggingface.co/namdp-ptit/ViRanker) | **0.6815** | **0.6641** | 0.6983 | **0.6894** | 0.7302 | **0.7107** |
| [itdainb/PhoRanker](https://huggingface.co/itdainb/PhoRanker) | 0.6625 | 0.6458 | **0.7147** | 0.6731 | **0.7422** | 0.6830 |
| [kien-vu-uet/finetuned-phobert-passage-rerank-best-eval](https://huggingface.co/kien-vu-uet/finetuned-phobert-passage-rerank-best-eval) | 0.0963 | 0.0883 | 0.1396 | 0.1131 | 0.1681 | 0.1246 |
| [BAAI/bge-reranker-v2-m3](https://huggingface.co/BAAI/bge-reranker-v2-m3) | 0.6087 | 0.5841 | 0.6513 | 0.6062 | 0.6872 | 0.6209 |
| [BAAI/bge-reranker-v2-gemma](https://huggingface.co/BAAI/bge-reranker-v2-gemma) | 0.6088 | 0.5908 | 0.6446 | 0.6108 | 0.6785 | 0.6249 |

## Contact

**Email**: phuongnamdpn2k2@gmail.com

**LinkedIn**: [Dang Phuong Nam](https://www.linkedin.com/in/dang-phuong-nam-157912288/)

**Facebook**: [Phương Nam](https://www.facebook.com/phuong.namdang.7146557)

## Support The Project

If you find this project helpful and wish to support its ongoing development, here are some ways you can contribute:

1. **Star the Repository**: Show your appreciation by starring the repository. Your support motivates further development and enhancements.
2. **Contribute**: We welcome your contributions! You can help by reporting bugs, submitting pull requests, or suggesting new features.
3. **Donate**: If you'd like to support financially, consider making a donation. You can donate through:
   - Vietcombank: 9912692172 - DANG PHUONG NAM

Thank you for your support!

## Citation

Please cite as

```Plaintext
@misc{ViRanker,
  title={ViRanker: A Cross-encoder Model for Vietnamese Text Ranking},
  author={Nam Dang Phuong},
  year={2024},
  publisher={Huggingface},
}
```
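To make the fine-tuning data format described above concrete, the sketch below writes a tiny JSONL training file in the `{"query", "pos", "neg"}` structure. It reuses the query and passages from the usage examples above as placeholder content; the file name is arbitrary and the record is not part of the released training data.

```python
import json

# One illustrative record in the {"query", "pos", "neg"} format described above.
train_examples = [
    {
        "query": "ai là vị vua cuối cùng của việt nam",
        "pos": ["vua bảo đại là vị vua cuối cùng của nước ta"],
        "neg": ["lý nam đế là vị vua đầu tiên của nước ta"],
    },
]

# One JSON object per line, matching "each line is a dict" in the Data Format section.
with open("train_data.jsonl", "w", encoding="utf-8") as f:
    for example in train_examples:
        f.write(json.dumps(example, ensure_ascii=False) + "\n")
```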
MinaMila/gemma_2b_unlearned_2nd_5e-7_1.0_0.05_0.15_0.5_epoch2
MinaMila
2025-06-16T02:20:49Z
0
0
transformers
[ "transformers", "safetensors", "gemma2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-16T02:19:01Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
semtwo/kobart-base-v2-with-wiki-dataset
semtwo
2025-06-16T02:15:50Z
0
0
null
[ "license:other", "region:us" ]
null
2025-06-16T02:15:50Z
---
license: other
license_name: kobart-base-v2
license_link: LICENSE
---
JunaidSadiq/deepseek_0528_reasoning
JunaidSadiq
2025-06-16T02:14:12Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen3", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-06-16T02:13:42Z
---
base_model: unsloth/deepseek-r1-0528-qwen3-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---

# Uploaded model

- **Developed by:** JunaidSadiq
- **License:** apache-2.0
- **Finetuned from model:** unsloth/deepseek-r1-0528-qwen3-8b-unsloth-bnb-4bit

This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
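A minimal inference sketch, assuming the uploaded repository contains merged weights that load directly with `transformers`. The card itself shows no usage code, so the generation settings, precision, and prompt below are illustrative assumptions rather than the author's documented workflow.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "JunaidSadiq/deepseek_0528_reasoning"  # repo id from this entry

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # assumption: half precision fits on the target GPU
    device_map="auto",
)

# Build a chat-style prompt using the model's own chat template.
messages = [{"role": "user", "content": "Explain the Pythagorean theorem step by step."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```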
New-tutorial-Pakistani-TikTok-Videos/FULL.VIDEO.Pakistani.TikTok.Viral.Video.Tutorial.Official
New-tutorial-Pakistani-TikTok-Videos
2025-06-16T02:13:07Z
0
0
null
[ "region:us" ]
null
2025-06-16T02:12:48Z
<animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
michelescotto/trainer_output
michelescotto
2025-06-16T02:02:03Z
0
0
transformers
[ "transformers", "safetensors", "deberta-v2", "text-classification", "generated_from_trainer", "base_model:microsoft/deberta-v3-large", "base_model:finetune:microsoft/deberta-v3-large", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-06-16T02:01:21Z
---
library_name: transformers
license: mit
base_model: microsoft/deberta-v3-large
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: trainer_output
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# trainer_output

This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.1910
- Accuracy: 0.948
- F1 Macro: 0.8547
- Kappa Score: 0.7094
- Accuracy Balanced: 0.8568
- Precision Macro: 0.8526
- Recall Macro: 0.8568

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1.8e-05
- train_batch_size: 16
- eval_batch_size: 40
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.06
- num_epochs: 5
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 2.14.4
- Tokenizers 0.21.1
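For readers who want to set up a comparable run, the hyperparameters listed above map roughly onto a `transformers.TrainingArguments` configuration like the sketch below. The output directory is an arbitrary placeholder, the model and dataset wiring are omitted, and `fp16=True` is assumed to correspond to the "Native AMP" mixed-precision entry.

```python
from transformers import TrainingArguments

# Sketch of the reported hyperparameters; not the author's original script.
training_args = TrainingArguments(
    output_dir="trainer_output",      # placeholder output directory
    learning_rate=1.8e-05,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=40,
    gradient_accumulation_steps=2,    # 16 * 2 = total train batch size of 32
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.06,
    num_train_epochs=5,
    fp16=True,                        # "Native AMP" mixed precision
)
print(training_args)
```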
Enzogbs/Reinforce-1
Enzogbs
2025-06-16T02:00:18Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2025-06-16T02:00:06Z
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-1
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: CartPole-v1
      type: CartPole-v1
    metrics:
    - type: mean_reward
      value: 500.00 +/- 0.00
      name: mean_reward
      verified: false
---

# **Reinforce** Agent playing **CartPole-v1**

This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
ielabgroup/bert-base-uncased-fineweb100bt-smae-width
ielabgroup
2025-06-16T01:57:41Z
0
0
transformers
[ "transformers", "safetensors", "bert", "fill-mask", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2025-06-16T01:55:14Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ncgc/pythia_125M_sft_hh_full_sft_trainer_rand_lowest
ncgc
2025-06-16T01:51:34Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "region:us" ]
null
2025-06-15T22:47:45Z
--- base_model: EleutherAI/pythia-125M library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.15.2
MinaMila/phi3_unlearned_2nd_5e-7_1.0_0.5_0.05_0.25_epoch2
MinaMila
2025-06-16T01:47:41Z
0
0
transformers
[ "transformers", "safetensors", "phi3", "text-generation", "conversational", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-16T01:45:47Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
BootesVoid/cmbs7qa4e0517h4x59dp33vpm_cmbyebxmd03gvrdqslli40ghm
BootesVoid
2025-06-16T01:42:57Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-06-16T01:42:56Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: VELLAGIRL --- # Cmbs7Qa4E0517H4X59Dp33Vpm_Cmbyebxmd03Gvrdqslli40Ghm <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `VELLAGIRL` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "VELLAGIRL", "lora_weights": "https://huggingface.co/BootesVoid/cmbs7qa4e0517h4x59dp33vpm_cmbyebxmd03gvrdqslli40ghm/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [๐Ÿงจ diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('BootesVoid/cmbs7qa4e0517h4x59dp33vpm_cmbyebxmd03gvrdqslli40ghm', weight_name='lora.safetensors') image = pipeline('VELLAGIRL').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 2000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/BootesVoid/cmbs7qa4e0517h4x59dp33vpm_cmbyebxmd03gvrdqslli40ghm/discussions) to add images that show off what youโ€™ve made with this LoRA.
thomas-sounack/BioClinical-ModernBERT-base
thomas-sounack
2025-06-16T01:33:11Z
115
9
transformers
[ "transformers", "pytorch", "safetensors", "modernbert", "fill-mask", "masked-lm", "long-context", "BioClinical-ModernBERT", "en", "arxiv:2506.10896", "base_model:answerdotai/ModernBERT-base", "base_model:finetune:answerdotai/ModernBERT-base", "license:mit", "autotrain_compatible", "endpo...
fill-mask
2025-05-07T15:54:29Z
--- license: mit language: - en base_model: - answerdotai/ModernBERT-base pipeline_tag: fill-mask tags: - fill-mask - masked-lm - long-context - modernbert - BioClinical-ModernBERT library_name: transformers --- # BioClinical ModernBERT *BioClinical ModernBERT is available in two sizes: [base](https://huggingface.co/thomas-sounack/BioClinical-ModernBERT-base) (150M parameters) and [large](https://huggingface.co/thomas-sounack/BioClinical-ModernBERT-large) (396M parameters). The model training checkpoints can be found [here](https://huggingface.co/thomas-sounack/BioClinical-ModernBERT-checkpoints), and our code is available in our [GitHub repository](https://github.com/lindvalllab/BioClinical-ModernBERT).* ## Table of Contents 1. [Model Summary](#model-summary) 2. [Usage](#usage) 3. [Training](#training) 4. [Evaluation](#evaluation) 5. [License](#license) 6. [Citation](#citation) ## Model Summary BioClinical ModernBERT is a domain-adapted encoder that builds on ModernBERT [base](https://huggingface.co/answerdotai/ModernBERT-base) and [large](https://huggingface.co/answerdotai/ModernBERT-large), incorporating long-context processing and substantial improvements in speed and performance for biomedical and clinical NLP. BioClinical ModernBERT is trained on the largest biomedical and clinical corpus to date, with over 53.5 billion tokens, and addresses a key limitation of prior clinical encoders by leveraging 20 datasets from diverse institutions, domains, and geographic regions, rather than relying on data from a single source. ## Usage You can use these models directly with the `transformers` library starting from v4.48.0: ```sh pip install -U "transformers>=4.48.0" ``` Since BioClinical ModernBERT is a Masked Language Model (MLM), you can use the `fill-mask` pipeline or load it via `AutoModelForMaskedLM`. To use BioClinical ModernBERT for downstream tasks like classification, retrieval, or QA, fine-tune it following standard BERT fine-tuning recipes. **⚠️ If your GPU supports it, we recommend using BioClinical ModernBERT with Flash Attention 2 to reach the highest efficiency. To do so, install Flash Attention as follows, then use the model as normal:** ```bash pip install flash-attn ``` Using `AutoModelForMaskedLM`: ```python from transformers import AutoTokenizer, AutoModelForMaskedLM model_id = "thomas-sounack/BioClinical-ModernBERT-base" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForMaskedLM.from_pretrained(model_id) text = "Mitochondria is the powerhouse of the [MASK]." inputs = tokenizer(text, return_tensors="pt") outputs = model(**inputs) # To get predictions for the mask: masked_index = inputs["input_ids"][0].tolist().index(tokenizer.mask_token_id) predicted_token_id = outputs.logits[0, masked_index].argmax(axis=-1) predicted_token = tokenizer.decode(predicted_token_id) print("Predicted token:", predicted_token) # Predicted token: cell ``` Using a pipeline: ```python import torch from transformers import pipeline from pprint import pprint pipe = pipeline( "fill-mask", model="thomas-sounack/BioClinical-ModernBERT-base", torch_dtype=torch.bfloat16, ) input_text = "[MASK] is a disease caused by an uncontrolled division of abnormal cells in a part of the body." results = pipe(input_text) pprint(results) ``` **Note:** BioClinical ModernBERT, similarly to ModernBERT, does not use token type IDs, unlike some earlier BERT models. Most downstream usage is identical to standard BERT models on the Hugging Face Hub, except you can omit the `token_type_ids` parameter. 
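As a concrete illustration of the "standard BERT fine-tuning recipe" mentioned above, here is a minimal sequence-classification sketch. It assumes the usual `transformers` `Trainer` workflow; the `num_labels` value, the toy texts and labels, the `ToyDataset` helper, and the `output_dir` are illustrative placeholders rather than part of this release.

```python
import torch
from torch.utils.data import Dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

model_id = "thomas-sounack/BioClinical-ModernBERT-base"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# Attaches a freshly initialized classification head; num_labels=2 is a placeholder.
model = AutoModelForSequenceClassification.from_pretrained(model_id, num_labels=2)

# Toy data -- replace with a real clinical classification dataset.
texts = ["No acute cardiopulmonary abnormality.", "Findings concerning for pneumonia."]
labels = [0, 1]
enc = tokenizer(texts, truncation=True, padding=True, return_tensors="pt")

class ToyDataset(Dataset):
    """Wraps tokenized inputs and labels in the format Trainer expects."""
    def __init__(self, encodings, labels):
        self.encodings, self.labels = encodings, labels
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, idx):
        item = {k: v[idx] for k, v in self.encodings.items()}
        item["labels"] = torch.tensor(self.labels[idx])
        return item

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="bioclinical-modernbert-cls", num_train_epochs=1),
    train_dataset=ToyDataset(enc, labels),
)
trainer.train()
```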
## Training ### Data BioClinical ModernBERT is trained on 50.7B tokens of biomedical text gathered from PubMed and PMC, and 2.8B tokens of clinical text from 20 datasets which are detailed in the table below. | Name | Country | Clinical Source | Clinical Context | Samples | Tokens (M) | |----------------------------|--------------|------------------------------------|-----------------------|-----------|------------| | ACI-BENCH | US | Clinical Notes | Not Reported | 207 | 0.1 | | ADE Corpus | Several | Clinical Notes | Not Reported | 20,896 | 0.5 | | Brain MRI Stroke | Korea | Radiology Reports | Neurology | 2,603 | 0.2 | | CheXpert Plus | US | Radiology Reports | Pulmonology | 223,460 | 60.6 | | CHIFIR | Australia | Pathology Reports | Hematology / Oncology | 283 | 0.1 | | CORAL | US | Progress Notes | Hematology / Oncology | 240 | 0.7 | | Eye Gaze CXR | US | Radiology Reports | Pulmonology | 892 | 0.03 | | Gout Chief Complaints | US | Chief Complaint | Internal Medicine | 8,429 | 0.2 | | ID-68 | UK | Clinical Notes | Psychology | 78 | 0.02 | | Inspect | US | Radiology Reports | Pulmonology | 22,259 | 2.8 | | MedNLI | US | Clinical Notes | Internal Medicine | 14,047 | 0.5 | | MedQA | US | National Medical Board Examination | Not Reported | 14,366 | 2.0 | | MIMIC-III | US | Clinical Notes | Internal Medicine | 2,021,411 | 1,047.7 | | MIMIC-IV Note | US | Clinical Notes | Internal Medicine | 2,631,243 | 1,765.7 | | MTSamples | Not Reported | Clinical Notes | Internal Medicine | 2,358 | 1.7 | | Negex | US | Discharge Summaries | Not Reported | 2,056 | 0.1 | | PriMock57 | UK | Simulated Patient Care | Internal Medicine | 57 | 0.01 | | Q-Pain | US | Clinical Vignettes | Palliative Care | 51 | 0.01 | | REFLACX | US | Radiology Reports | Pulmonology | 2,543 | 0.1 | | Simulated Resp. Interviews | Canada | Simulated Patient Care | Pulmonology | 272 | 0.6 | ### Methodology BioClinical ModernBERT base is trained in two phases. This model is initialized from the last stable-phase checkpoint of ModernBERT base and trained with the same hyperparameters: learning rate of 3e-4 and batch size of 72. - Phase 1: Training on 160.5B tokens from PubMed, PMC, and the 20 clinical datasets. Learning rate remains constant throughout this stage, and the masking probability is set at 30%. - Phase 2: Training on the 20 clinical datasets only. Masking probability is reduced to 15%. The model is trained for 3 epochs with a 1-sqrt learning rate decay. ## Evaluation | | Model | Context Length | ChemProt | Phenotype | COS | Social History | DEID | |-------|--------------------------------|----------------|----------|-----------|----------|----------------|----------| | Base | BioBERT | 512 | 89.5 | 26.6 | 94.9 | 55.8 | 74.3 | | | Clinical BERT | 512 | 88.3 | 25.8 | 95.0 | 55.2 | 74.2 | | | BioMed-RoBERTa | 512 | 89.0 | 36.8 | 94.9 | 55.2 | 81.1 | | | Clinical-BigBird | 4096 | 87.4 | 26.5 | 94.0 | 53.3 | 71.2 | | | Clinical-Longformer | 4096 | 74.2 | 46.4 | **95.2** | 56.8 | 82.3 | | | Clinical ModernBERT | 8192 | 86.9 | 54.9 | 93.7 | 53.8 | 44.4 | | | ModernBERT - base | 8192 | 89.5 | 48.4 | 94.0 | 53.1 | 78.3 | | | BioClinical ModernBERT - base | 8192 | 89.9 | 58.1 | 95.1 | **58.5** | 82.7 | | Large | ModernBERT - large | 8192 | 90.2 | 58.3 | 94.4 | 54.8 | 82.1 | | | BioClinical ModernBERT - large | 8192 | **90.8** | **60.8** | 95.1 | 57.1 | **83.8** | ## License We release the BioClinical ModernBERT base and large model weights and training checkpoints under the MIT license. 
## Citation If you use BioClinical ModernBERT in your work, please cite our [preprint](https://arxiv.org/abs/2506.10896): ``` @misc{sounack2025bioclinicalmodernbertstateoftheartlongcontext, title={BioClinical ModernBERT: A State-of-the-Art Long-Context Encoder for Biomedical and Clinical NLP}, author={Thomas Sounack and Joshua Davis and Brigitte Durieux and Antoine Chaffin and Tom J. Pollard and Eric Lehman and Alistair E. W. Johnson and Matthew McDermott and Tristan Naumann and Charlotta Lindvall}, year={2025}, eprint={2506.10896}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/2506.10896}, } ```
BootesVoid/cmbyblijq03bvrdqs0ce71ulk_cmbydjgza03gnrdqsp871qrw5
BootesVoid
2025-06-16T01:23:33Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-06-16T01:23:32Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: EMILYCADE --- # Cmbyblijq03Bvrdqs0Ce71Ulk_Cmbydjgza03Gnrdqsp871Qrw5 <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `EMILYCADE` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "EMILYCADE", "lora_weights": "https://huggingface.co/BootesVoid/cmbyblijq03bvrdqs0ce71ulk_cmbydjgza03gnrdqsp871qrw5/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [๐Ÿงจ diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('BootesVoid/cmbyblijq03bvrdqs0ce71ulk_cmbydjgza03gnrdqsp871qrw5', weight_name='lora.safetensors') image = pipeline('EMILYCADE').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 2000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/BootesVoid/cmbyblijq03bvrdqs0ce71ulk_cmbydjgza03gnrdqsp871qrw5/discussions) to add images that show off what youโ€™ve made with this LoRA.
erdem-erdem/Qwen2.5-3B-Instruct-countdown-new-grpo-r32
erdem-erdem
2025-06-16T01:21:14Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "text-generation-inference", "unsloth", "conversational", "en", "base_model:unsloth/Qwen2.5-3B-Instruct", "base_model:finetune:unsloth/Qwen2.5-3B-Instruct", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "reg...
text-generation
2025-06-16T01:19:41Z
--- base_model: unsloth/Qwen2.5-3B-Instruct tags: - text-generation-inference - transformers - unsloth - qwen2 license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** erdem-erdem - **License:** apache-2.0 - **Finetuned from model :** unsloth/Qwen2.5-3B-Instruct This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
MinaMila/phi3_unlearned_2nd_5e-7_1.0_0.5_0.05_0.75_epoch2
MinaMila
2025-06-16T01:20:00Z
0
0
transformers
[ "transformers", "safetensors", "phi3", "text-generation", "conversational", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-16T01:18:05Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Delta-Vector/Austral-SFT-KTO
Delta-Vector
2025-06-16T01:19:11Z
0
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "base_model:Delta-Vector/Austral-24B-Base", "base_model:finetune:Delta-Vector/Austral-24B-Base", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-16T01:11:08Z
--- base_model: Delta-Vector/Austral-24B-Base library_name: transformers --- A KTO finetune on top of Austral-24B-Base. Still not recommended for use; use -Winton instead! WandB: https://wandb.ai/new-eden/austral/artifacts/axolotl-config/config-v2nv3dlc/v0/files/axolotl_config_2u1b4uya.yml Datasets: ```yaml datasets: - path: Delta-Vector/Tauri-IFeval-Dans-Tulu-KTO split: train type: chatml.argilla - path: NewEden/Helpsteer-3-edit-kto-v7 split: train type: chatml.argilla - path: Delta-Vector/Tauri-Helpsteer-3-Preference-KTO split: train type: chatml.argilla - path: NewEden/Helpsteer-3-edit-kto-v7 split: train type: chatml.argilla - path: Delta-Vector/Tauri-Opus-Accepted-GPT-Rejected-Opus-Writing-Prompts split: train type: chatml.argilla - path: NewEden/Opus-accepted-hermes-rejected-shuffled split: train type: chatml.argilla - path: NewEden/Purpura-Arkhaios-CC-KTO split: train type: chatml.argilla - path: Delta-Vector/Tauri-KTO-Instruct-Mix split: train type: chatml.argilla ```
donvitomd/victor
donvitomd
2025-06-16T01:17:00Z
0
0
null
[ "license:other", "region:us" ]
null
2025-06-16T00:30:24Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md ---
girayzkrt/mistral-7b-finetuned-qa
girayzkrt
2025-06-16T01:14:51Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-16T01:14:33Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
MinaMila/phi3_unlearned_2nd_5e-7_1.0_0.5_0.15_0.05_epoch1
MinaMila
2025-06-16T00:59:40Z
0
0
transformers
[ "transformers", "safetensors", "phi3", "text-generation", "conversational", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-16T00:57:45Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
gradientrouting-spar/horizontal_1_proxy_ntrain_25_ntrig_9_animals_3x3_seed_1_seed_25_seed_2_seed_42_20250616_004723
gradientrouting-spar
2025-06-16T00:55:30Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-16T00:55:23Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
gradientrouting-spar/mc14_badmed_dpo_dsd-42_msd-42_atc-0.45_ldpo-6_seed_1
gradientrouting-spar
2025-06-16T00:53:04Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-16T00:52:51Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ainewtrend01/FinAG_Q4B
ainewtrend01
2025-06-16T00:43:42Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen3", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-06-13T08:41:55Z
--- base_model: unsloth/qwen3-4b-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - qwen3 - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** ainewtrend01 - **License:** apache-2.0 - **Finetuned from model :** unsloth/qwen3-4b-unsloth-bnb-4bit This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
appledora/gelurecast3.2-G4W16H4
appledora
2025-06-16T00:32:40Z
0
0
transformers
[ "transformers", "pytorch", "recast1b_llama", "text-generation", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "region:us" ]
text-generation
2025-06-15T22:53:29Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Yuichi1218/Llama-3.1-Lafeak-8B-chatvector-SFT-e3
Yuichi1218
2025-06-16T00:29:55Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "conversational", "en", "base_model:Yuichi1218/llama-3.1-Lafeak-8B-chatvector", "base_model:finetune:Yuichi1218/llama-3.1-Lafeak-8B-chatvector", "license:apache-2.0", "autotrain_compatib...
text-generation
2025-06-16T00:23:45Z
--- base_model: Yuichi1218/llama-3.1-Lafeak-8B-chatvector tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** Yuichi1218 - **License:** apache-2.0 - **Finetuned from model :** Yuichi1218/llama-3.1-Lafeak-8B-chatvector This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
MinaMila/phi3_unlearned_2nd_5e-7_1.0_0.5_0.25_0.15_epoch2
MinaMila
2025-06-15T23:44:11Z
0
0
transformers
[ "transformers", "safetensors", "phi3", "text-generation", "conversational", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-15T23:42:17Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Leonel-Maia/nllb_complete
Leonel-Maia
2025-06-15T23:04:04Z
11
0
transformers
[ "transformers", "safetensors", "m2m_100", "text2text-generation", "generated_from_trainer", "base_model:facebook/nllb-200-distilled-600M", "base_model:finetune:facebook/nllb-200-distilled-600M", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2025-06-10T10:51:16Z
--- library_name: transformers license: cc-by-nc-4.0 base_model: facebook/nllb-200-distilled-600M tags: - generated_from_trainer metrics: - bleu model-index: - name: nllb_complete results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # nllb_complete This model is a fine-tuned version of [facebook/nllb-200-distilled-600M](https://huggingface.co/facebook/nllb-200-distilled-600M) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.8285 - Bleu: 17.1412 - Gen Len: 17.896 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 64 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 5000 - num_epochs: 24.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-------:|:------:|:---------------:|:-------:|:-------:| | 2.1296 | 1.4834 | 10000 | 2.0709 | 9.9056 | 20.1323 | | 2.0253 | 2.9668 | 20000 | 1.9697 | 11.7423 | 19.27 | | 1.8771 | 4.4503 | 30000 | 1.9199 | 13.3983 | 18.9643 | | 1.7891 | 5.9338 | 40000 | 1.8851 | 14.1016 | 18.3833 | | 1.7159 | 7.4173 | 50000 | 1.8680 | 14.8584 | 18.2797 | | 1.6594 | 8.9007 | 60000 | 1.8473 | 15.8809 | 18.3863 | | 1.6609 | 10.3842 | 70000 | 1.8406 | 15.8588 | 18.159 | | 1.6358 | 11.8676 | 80000 | 1.8319 | 16.4395 | 18.4773 | | 1.5623 | 13.3511 | 90000 | 1.8298 | 16.8956 | 18.3217 | | 1.5534 | 14.8345 | 100000 | 1.8218 | 16.8725 | 18.5327 | | 1.498 | 16.3180 | 110000 | 1.8286 | 16.6418 | 17.9697 | | 1.4663 | 17.8014 | 120000 | 1.8252 | 17.2847 | 17.9357 | | 1.4309 | 19.2849 | 130000 | 1.8299 | 17.027 | 17.7263 | | 1.4398 | 20.7684 | 140000 | 1.8270 | 17.0189 | 18.1353 | | 1.4534 | 22.2519 | 150000 | 1.8292 | 17.04 | 17.9637 | | 1.4441 | 23.7353 | 160000 | 1.8285 | 17.1412 | 17.896 | ### Framework versions - Transformers 4.50.3 - Pytorch 2.7.0+cu126 - Datasets 3.5.0 - Tokenizers 0.21.1
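The card above lists metrics and hyperparameters but no inference snippet. Below is a minimal, hedged sketch of how a fine-tuned NLLB-200 checkpoint like this one is typically used for translation with `transformers`; the FLORES-200 language codes are placeholders, since the card does not state which language pair the model covers, so swap in the codes the model was actually trained on.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Hedged sketch: standard NLLB-200 inference. "eng_Latn" / "fra_Latn" are placeholder
# FLORES-200 codes; the card does not say which languages this checkpoint targets.
model_id = "Leonel-Maia/nllb_complete"
tokenizer = AutoTokenizer.from_pretrained(model_id, src_lang="eng_Latn")
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("How are you today?", return_tensors="pt")
generated = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.convert_tokens_to_ids("fra_Latn"),  # force the target language
    max_length=64,
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```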
phospho-app/elglombo-ACT_BBOX-jenga_pull-hgtih
phospho-app
2025-06-15T22:57:04Z
0
0
null
[ "phosphobot", "act", "region:us" ]
null
2025-06-15T22:55:52Z
--- tags: - phosphobot - act task_categories: - robotics --- # act Model - phospho Training Pipeline ## Error Traceback We faced an issue while training your model. ``` The object 'brown block' was detected in 0 episodes in main camera (should be: 10 episodes min). This is not enough to train a model. Check your dataset: https://lerobot-visualize-dataset.hf.space/Mahanthesh0r/jenga_pull/ and rephrase the instruction. ``` ## Training parameters: - **Dataset**: [Mahanthesh0r/jenga_pull](https://huggingface.co/datasets/Mahanthesh0r/jenga_pull) - **Wandb run URL**: None - **Epochs**: None - **Batch size**: 100 - **Training steps**: 10000 📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme) 🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
MinaMila/gemma_2b_unlearned_2nd_5e-7_1.0_0.05_0.75_0.15_epoch2
MinaMila
2025-06-15T22:51:56Z
0
0
transformers
[ "transformers", "safetensors", "gemma2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-15T22:50:11Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Enzogbs/q-FrozenLake-v1-4x4-noSlippery
Enzogbs
2025-06-15T22:33:02Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2025-06-15T22:33:00Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="Enzogbs/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # load_from_hub is the helper defined in the Hugging Face Deep RL course notebook (hf_hub_download + pickle) # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
huggingFaceOfNabil/SmolVLM2-256M-Video-Instruct-dense-caption_full
huggingFaceOfNabil
2025-06-15T22:21:57Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "smolvlm", "image-text-to-text", "generated_from_trainer", "conversational", "base_model:HuggingFaceTB/SmolVLM2-256M-Video-Instruct", "base_model:finetune:HuggingFaceTB/SmolVLM2-256M-Video-Instruct", "license:apache-2.0", "endpoints_compatible", "r...
image-text-to-text
2025-06-14T17:19:26Z
--- library_name: transformers license: apache-2.0 base_model: HuggingFaceTB/SmolVLM2-256M-Video-Instruct tags: - generated_from_trainer model-index: - name: SmolVLM2-256M-Video-Instruct-dense-caption_full results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # SmolVLM2-256M-Video-Instruct-dense-caption_full This model is a fine-tuned version of [HuggingFaceTB/SmolVLM2-256M-Video-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM2-256M-Video-Instruct) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 50 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.52.4 - Pytorch 2.7.1+cu126 - Datasets 3.6.0 - Tokenizers 0.21.1
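Since the card above gives only trainer hyperparameters, here is a hedged sketch of how the fine-tuned SmolVLM2 checkpoint could be loaded for captioning, following the base model's documented chat-template usage; the image URL, prompt, and dtype are placeholders/assumptions.

```python
import torch
from transformers import AutoProcessor, AutoModelForImageTextToText

# Hedged sketch: load the fine-tune and caption one image. The message format follows
# the base SmolVLM2 card; adjust dtype/device to your hardware.
model_id = "huggingFaceOfNabil/SmolVLM2-256M-Video-Instruct-dense-caption_full"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForImageTextToText.from_pretrained(model_id, torch_dtype=torch.bfloat16)

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://example.com/photo.jpg"},  # placeholder image
            {"type": "text", "text": "Describe this image in detail."},
        ],
    }
]
inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True, return_dict=True, return_tensors="pt"
)
output_ids = model.generate(**inputs, max_new_tokens=128)
print(processor.batch_decode(output_ids, skip_special_tokens=True)[0])
```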
BootesVoid/cmbgtk63y052tkfxsx1r4aht4_cmbxzksk302hhrdqsxwnuilu5
BootesVoid
2025-06-15T22:17:23Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-06-15T22:17:22Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: SOPHIE --- # Cmbgtk63Y052Tkfxsx1R4Aht4_Cmbxzksk302Hhrdqsxwnuilu5 <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `SOPHIE` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "SOPHIE", "lora_weights": "https://huggingface.co/BootesVoid/cmbgtk63y052tkfxsx1r4aht4_cmbxzksk302hhrdqsxwnuilu5/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [๐Ÿงจ diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('BootesVoid/cmbgtk63y052tkfxsx1r4aht4_cmbxzksk302hhrdqsxwnuilu5', weight_name='lora.safetensors') image = pipeline('SOPHIE').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 2000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/BootesVoid/cmbgtk63y052tkfxsx1r4aht4_cmbxzksk302hhrdqsxwnuilu5/discussions) to add images that show off what youโ€™ve made with this LoRA.
juexzz/INTACT-pi0-scratch-bridge
juexzz
2025-06-15T22:07:01Z
2
0
null
[ "safetensors", "robotics", "arxiv:2410.24164", "arxiv:2506.09930", "base_model:lerobot/pi0", "base_model:finetune:lerobot/pi0", "license:apache-2.0", "region:us" ]
robotics
2025-06-15T02:21:29Z
--- license: apache-2.0 base_model: - lerobot/pi0 pipeline_tag: robotics --- # INTACT Probing Suite: Pi0 from scratch on BridgeV2 > 📦 **This model is part of the [INTACT Probing Suite Collection](https://huggingface.co/collections/ai4ce/intact-probing-suite-684e5601e9ed640fdd9b994b)** > Explore other variants: > - [Pi0 finetuned on BridgeV2](https://huggingface.co/juexzz/INTACT-pi0-finetune-bridge) > - [Pi0 finetuned with paraphrase on BridgeV2](https://huggingface.co/juexzz/INTACT-pi0-finetune-rephrase-bridge) ## INTACT-pi0-scratch-bridge This repository contains a checkpoint of the Pi0 model ([HF implementation](https://huggingface.co/lerobot/pi0) | [Paper](https://arxiv.org/abs/2410.24164v1)) *initialized from PaliGemma and trained directly ("from scratch")* on the BridgeV2 dataset for robotic manipulation tasks. The model is later used for testing on the [Simpler Environment](https://github.com/simpler-env/SimplerEnv) and our [INTACT](https://github.com/ai4ce/INT-ACT) Probing Suite for the generalization boundaries of VLA models. **Paper**: [From Intention to Execution: Probing the Generalization Boundaries of Vision-Language-Action Models](https://arxiv.org/abs/2506.09930) ## Model Details - **Base Model**: [lerobot/pi0](https://huggingface.co/lerobot/pi0) - **Training Dataset**: [BridgeV2](https://rail-berkeley.github.io/bridgedata/) - **Model Type**: Vision-Language-Action (VLA) model for robotics - **Fine-tuning Method**: See our [paper](https://arxiv.org/abs/2506.09930) - **Training Framework**: See our [repository](https://github.com/ai4ce/INT-ACT) ## Quick Start ### Usage in INTACT ```shell git clone --recurse-submodules https://github.com/ai4ce/INT-ACT.git cd INT-ACT uv sync source .venv/bin/activate python ``` Or directly in Python with LeRobot, see below: ### Integration with LeRobot First, install lerobot ```bash pip install lerobot ``` Then ```python import torch from lerobot.common.policies.pi0.modeling_pi0 import Pi0Policy # Load model policy = Pi0Policy.from_pretrained("juexzz/INTACT-pi0-scratch-bridge") # Inference with torch.no_grad(): actions = policy.select_action(batch) ``` ### Training Configuration - **Training Steps**: 15 epochs (~22695 steps) - **Batch Size**: 1024 - **Learning Rate**: 1e-5 - **Hardware**: 4 H100/A100 - **Input Modalities**: single image (to work with SimplerEnv), 1 language instruction, 1 robot state. - **Output**: robot actions (delta EEF) with chunk size of 4. For more details, please refer to our [paper](https://arxiv.org/abs/2506.09930) and [code](https://github.com/ai4ce/INT-ACT). ## Evaluation **Checkpoint choice** After training for 15 epochs, we sweep the checkpoints at epochs 1, 2, 3, 4, 5, 10, and 15 for performance on the original 4 Bridge tasks in SimplerEnv, and choose the checkpoint with the *best average performance* for each of the three Pi0 variants. Therefore, you may still get a better success rate for a specific task at other checkpoints. As a result, the best checkpoint for this model is at step 22695 (epoch 15). The comparison of their performance on Simpler is shown below. ### Performance Comparison on SimplerEnv **Success rate** comparison on SimplerEnv with the other Pi0 variants and some other baselines evaluated in our INTACT suite. For a more detailed comparison, please refer to the [paper](https://arxiv.org/abs/2506.09930). 
| Model | carrot_on_plate | eggplant_in_basket | stack_cube | spoon_on_towel | |-------|-----------------|-------------------|------------|----------------| | [Pi0 finetune](https://huggingface.co/juexzz/INTACT-pi0-finetune-bridge) | 0.361 | 0.819 | 0.264 | 0.458 | | [Pi0 finetune rephrase](https://huggingface.co/juexzz/INTACT-pi0-finetune-rephrase-bridge) | 0.500 | 0.944 | 0.222 | 0.597 | | **Pi0 scratch(this model)** | 0.542 | 0.903 | 0.403 | 0.875 | | Spatial VLA | 0.125 | 0.958 | 0.292 | 0.208 | | Magma | 0.250 | 0.611 | 0.097 | 0.208 | | Octo Small | 0.014 | 0.097 | 0.000 | 0.097 | | Octo Base | 0.014 | 0.306 | 0.000 | 0.014 | ## Citation If you use this model in your research, please cite: ```bibtex @article{fang2025intention, title={From Intention to Execution: Probing the Generalization Boundaries of Vision-Language-Action Models}, author={Fang, Irving and Zhang, Juexiao and Tong, Shengbang and Feng, Chen}, journal={arXiv preprint arXiv:2506.09930}, year={2025} } ``` ## Related Work - **Pi0 (official)**: [pi0 (JAX)](https://github.com/Physical-Intelligence/openpi) - **Base Model (Pi0 HF)**: [lerobot/pi0](https://huggingface.co/lerobot/pi0) - **Dataset**: [BridgeV2](https://bridge-v2.github.io/) - **Framework**: [LeRobot](https://github.com/huggingface/lerobot) - **Simpler Environment**: [SimplerEnv](https://github.com/simpler-env/SimplerEnv) - **Open-source Pi0 Implementation by Allen Ren**: [open-pi-zero](https://github.com/allenzren/open-pi-zero) ## License This model is released under the Apache 2.0 license. Please see the base model's license for any additional restrictions. ## Support For questions about this model: - ๐Ÿ“ง Open an issue in this repository - ๐Ÿ’ฌ Discussion tab for community questions - ๐Ÿ“– Check our [paper](https://arxiv.org/abs/2506.09930) for technical details --- *Last updated: June 2025*
juexzz/INTACT-pi0-finetune-rephrase-bridge
juexzz
2025-06-15T22:06:35Z
2
0
null
[ "safetensors", "robotics", "arxiv:2410.24164", "arxiv:2506.09930", "base_model:lerobot/pi0", "base_model:finetune:lerobot/pi0", "license:apache-2.0", "region:us" ]
robotics
2025-06-15T02:22:14Z
--- license: apache-2.0 base_model: - lerobot/pi0 pipeline_tag: robotics --- # INTACT Probing Suite: Pi0 fine-tuned on BridgeV2 with task paraphrasing > 📦 **This model is part of the [INTACT Probing Suite Collection](https://huggingface.co/collections/ai4ce/intact-probing-suite-684e5601e9ed640fdd9b994b)** > Explore other variants: > - [Pi0 finetuned on BridgeV2](https://huggingface.co/juexzz/INTACT-pi0-finetune-bridge) > - [Pi0 scratch on BridgeV2](https://huggingface.co/juexzz/INTACT-pi0-scratch-bridge) ## INTACT-pi0-finetune-rephrase-bridge This repository contains a checkpoint of the Pi0 model ([HF implementation](https://huggingface.co/lerobot/pi0) | [Paper](https://arxiv.org/abs/2410.24164v1)) *finetuned* on the BridgeV2 dataset for robotic manipulation tasks. During finetuning, we follow the paraphrase dictionary provided [here](https://huggingface.co/datasets/rail-berkeley/OXE_paraphrases) to paraphrase the task instructions. The model is later used for testing on the [Simpler Environment](https://github.com/simpler-env/SimplerEnv) and our [INTACT](https://github.com/ai4ce/INT-ACT) Probing Suite for the generalization boundaries of VLA models. **Paper**: [From Intention to Execution: Probing the Generalization Boundaries of Vision-Language-Action Models](https://arxiv.org/abs/2506.09930) ## Model Details - **Base Model**: [lerobot/pi0](https://huggingface.co/lerobot/pi0) - **Training Dataset**: [BridgeV2](https://rail-berkeley.github.io/bridgedata/) - **Model Type**: Vision-Language-Action (VLA) model for robotics - **Fine-tuning Method**: See our [paper](https://arxiv.org/abs/2506.09930) - **Training Framework**: See our [repository](https://github.com/ai4ce/INT-ACT) ## Quick Start ### Usage in INTACT ```shell git clone --recurse-submodules https://github.com/ai4ce/INT-ACT.git cd INT-ACT uv sync source .venv/bin/activate python ``` Or directly in Python with LeRobot, see below: ### Integration with LeRobot First, install lerobot ```bash pip install lerobot ``` Then ```python import torch from lerobot.common.policies.pi0.modeling_pi0 import Pi0Policy # Load model policy = Pi0Policy.from_pretrained("juexzz/INTACT-pi0-finetune-rephrase-bridge") # Inference with torch.no_grad(): actions = policy.select_action(batch) ``` ### Training Configuration - **Training Steps**: 15 epochs (~22695 steps) - **Batch Size**: 1024 - **Learning Rate**: 1e-5 - **Hardware**: 4 H100/A100 - **Input Modalities**: single image (to work with SimplerEnv), 1 language instruction, 1 robot state. - **Output**: robot actions (delta EEF) with chunk size of 4. For more details, please refer to our [paper](https://arxiv.org/abs/2506.09930) and [code](https://github.com/ai4ce/INT-ACT). ## Evaluation **Checkpoint choice** After training for 15 epochs, we sweep the checkpoints at epochs 1, 2, 3, 4, 5, 10, and 15 for performance on the original 4 Bridge tasks in SimplerEnv, and choose the checkpoint with the *best average performance* for each of the three Pi0 variants. Therefore, you may still get a better success rate for a specific task at other checkpoints. As a result, the best checkpoint for this model is at step 7565 (epoch 5). The comparison of their performance on Simpler is shown below. ### Performance Comparison on SimplerEnv **Success rate** comparison on SimplerEnv with the other Pi0 variants and some other baselines evaluated in our INTACT suite. For a more detailed comparison, please refer to the [paper](https://arxiv.org/abs/2506.09930). 
| Model | carrot_on_plate | eggplant_in_basket | stack_cube | spoon_on_towel | |-------|-----------------|-------------------|------------|----------------| | [Pi0 finetune](https://huggingface.co/juexzz/INTACT-pi0-finetune-bridge) | 0.361 | 0.819 | 0.264 | 0.458 | | **Pi0 finetune rephrase (this model)** | 0.500 | 0.944 | 0.222 | 0.597 | | [Pi0 scratch](https://huggingface.co/juexzz/INTACT-pi0-scratch-bridge) | 0.542 | 0.903 | 0.403 | 0.875 | | Spatial VLA | 0.125 | 0.958 | 0.292 | 0.208 | | Magma | 0.250 | 0.611 | 0.097 | 0.208 | | Octo Small | 0.014 | 0.097 | 0.000 | 0.097 | | Octo Base | 0.014 | 0.306 | 0.000 | 0.014 | ## Citation If you use this model in your research, please cite: ```bibtex @article{fang2025intention, title={From Intention to Execution: Probing the Generalization Boundaries of Vision-Language-Action Models}, author={Fang, Irving and Zhang, Juexiao and Tong, Shengbang and Feng, Chen}, journal={arXiv preprint arXiv:2506.09930}, year={2025} } ``` ## Related Work - **Pi0 (official)**: [pi0 (JAX)](https://github.com/Physical-Intelligence/openpi) - **Base Model (Pi0 HF)**: [lerobot/pi0](https://huggingface.co/lerobot/pi0) - **Dataset**: [BridgeV2](https://bridge-v2.github.io/) - **Framework**: [LeRobot](https://github.com/huggingface/lerobot) - **Simpler Environment**: [SimplerEnv](https://github.com/simpler-env/SimplerEnv) - **Open-source Pi0 Implementation by Allen Ren**: [open-pi-zero](https://github.com/allenzren/open-pi-zero) ## License This model is released under the Apache 2.0 license. Please see the base model's license for any additional restrictions. ## Support For questions about this model: - ๐Ÿ“ง Open an issue in this repository - ๐Ÿ’ฌ Discussion tab for community questions - ๐Ÿ“– Check our [paper](https://arxiv.org/abs/2506.09930) for technical details --- *Last updated: June 2025*
Manal0809/MedQA_Mistral_Nemo_Instructive_KG2
Manal0809
2025-06-15T21:42:40Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:unsloth/Mistral-Nemo-Instruct-2407-bnb-4bit", "base_model:adapter:unsloth/Mistral-Nemo-Instruct-2407-bnb-4bit", "region:us" ]
null
2025-06-15T21:42:32Z
--- base_model: unsloth/Mistral-Nemo-Instruct-2407-bnb-4bit library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.15.2
Ahatsham/Llama-3-8B-Instruct_Monitoring_Feedback_v5_aug_old_updated
Ahatsham
2025-06-15T21:42:19Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-15T21:39:08Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
NastasiaM/mbErt_desc_LTfrozen_model_en_NEU_last2
NastasiaM
2025-06-15T21:16:14Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "generated_from_trainer", "endpoints_compatible", "region:us" ]
null
2025-06-15T19:46:14Z
--- library_name: transformers tags: - generated_from_trainer model-index: - name: mbErt_desc_LTfrozen_model_en_NEU_last2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mbErt_desc_LTfrozen_model_en_NEU_last2 This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.52.4 - Pytorch 2.6.0+cu124 - Datasets 3.6.0 - Tokenizers 0.21.1
optimum-internal-testing/tiny-random-snowflake
optimum-internal-testing
2025-06-15T21:15:27Z
484
0
null
[ "safetensors", "arctic", "custom_code", "license:apache-2.0", "region:us" ]
null
2025-06-12T14:42:08Z
--- license: apache-2.0 ---
sergioalves/aac8d8ab-0f4c-47b3-8d49-505cf0af9792
sergioalves
2025-06-15T20:56:01Z
0
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:scb10x/llama-3-typhoon-v1.5-8b-instruct", "base_model:adapter:scb10x/llama-3-typhoon-v1.5-8b-instruct", "license:llama3", "4-bit", "bitsandbytes", "region:us" ]
null
2025-06-15T20:08:16Z
--- library_name: peft license: llama3 base_model: scb10x/llama-3-typhoon-v1.5-8b-instruct tags: - axolotl - generated_from_trainer model-index: - name: aac8d8ab-0f4c-47b3-8d49-505cf0af9792 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml absolute_data_files: false adapter: lora base_model: scb10x/llama-3-typhoon-v1.5-8b-instruct bf16: true chat_template: llama3 dataset_prepared_path: /workspace/axolotl datasets: - data_files: - 487caa6475c36489_train_data.json ds_type: json format: custom path: /workspace/input_data/ type: field_instruction: instruct field_output: output format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null dpo: beta: 0.1 enabled: true group_by_length: false rank_loss: true reference_model: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true gradient_clipping: 0.8 group_by_length: false hub_model_id: sergioalves/aac8d8ab-0f4c-47b3-8d49-505cf0af9792 hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-07 load_in_4bit: true load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 32 lora_dropout: 0.3 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 16 lora_target_linear: true lr_scheduler: cosine max_steps: 300 micro_batch_size: 8 mixed_precision: bf16 mlflow_experiment_name: /tmp/487caa6475c36489_train_data.json model_type: AutoModelForCausalLM num_epochs: 2 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 37010ed1-aad5-45a1-8887-d6718b80b014 wandb_project: s56-7 wandb_run: your_name wandb_runid: 37010ed1-aad5-45a1-8887-d6718b80b014 warmup_steps: 30 weight_decay: 0.05 xformers_attention: true ``` </details><br> # aac8d8ab-0f4c-47b3-8d49-505cf0af9792 This model is a fine-tuned version of [scb10x/llama-3-typhoon-v1.5-8b-instruct](https://huggingface.co/scb10x/llama-3-typhoon-v1.5-8b-instruct) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.7727 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 30 - training_steps: 300 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.971 | 0.0002 | 1 | 1.9893 | | 1.7077 | 0.0338 | 150 | 1.8616 | | 1.507 | 0.0676 | 300 | 1.7727 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
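The config above trains a LoRA adapter with axolotl, so the published weights are a PEFT adapter rather than a full model. A minimal sketch of attaching it to the base model with PEFT follows; dtype and device settings are assumptions, adjust them to your hardware.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Hedged sketch: load the base model, then attach the trained LoRA adapter.
base_id = "scb10x/llama-3-typhoon-v1.5-8b-instruct"
adapter_id = "sergioalves/aac8d8ab-0f4c-47b3-8d49-505cf0af9792"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

prompt = "Summarize what a LoRA adapter does in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```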
bruhzair/prototype-0.4x146
bruhzair
2025-06-15T20:49:46Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "arxiv:2403.19522", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-15T20:33:15Z
--- base_model: [] library_name: transformers tags: - mergekit - merge --- # prototype-0.4x146 This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using /workspace/prototype-0.4x136 as a base. ### Models Merged The following models were included in the merge: * /workspace/prototype-0.4x140 * /workspace/prototype-0.4x145 * /workspace/prototype-0.4x143 * /workspace/prototype-0.4x144 ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: /workspace/prototype-0.4x140 - model: /workspace/prototype-0.4x145 - model: /workspace/prototype-0.4x143 - model: /workspace/prototype-0.4x144 base_model: /workspace/prototype-0.4x136 merge_method: model_stock tokenizer: source: base int8_mask: true dtype: bfloat16 pad_to_multiple_of: 8 ```
mchettih/financial_QA_unsloth_Llama-3.2-1B_student
mchettih
2025-06-15T20:42:06Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "en", "base_model:unsloth/Llama-3.2-1B", "base_model:finetune:unsloth/Llama-3.2-1B", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-06-15T16:47:07Z
--- base_model: unsloth/Llama-3.2-1B tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** mchettih - **License:** apache-2.0 - **Finetuned from model :** unsloth/Llama-3.2-1B This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
mic3456/sexxxx
mic3456
2025-06-15T20:19:09Z
0
0
diffusers
[ "diffusers", "text-to-image", "flux", "lora", "template:sd-lora", "fluxgym", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-06-15T20:18:11Z
--- tags: - text-to-image - flux - lora - diffusers - template:sd-lora - fluxgym base_model: black-forest-labs/FLUX.1-dev instance_prompt: seks license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md --- # sexxx A Flux LoRA trained on a local computer with [Fluxgym](https://github.com/cocktailpeanut/fluxgym) <Gallery /> ## Trigger words You should use `seks` to trigger the image generation. ## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc. Weights for this model are available in Safetensors format.
VIDEOS-two-wolf-one-girl-Viral-Video/FULL.VIDEO.two.wolf.one.girl.Viral.Video.Tutorial.Official
VIDEOS-two-wolf-one-girl-Viral-Video
2025-06-15T20:11:38Z
0
0
null
[ "region:us" ]
null
2025-06-15T20:11:14Z
<animated-image data-catalyst=""><a href="https://sexleakedviral.com/new-leaked-video/?news-viral-video" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
phospho-app/Mahanthesh0r-ACT-jenga_pull-ci9f6
phospho-app
2025-06-15T20:05:04Z
0
0
null
[ "safetensors", "phosphobot", "act", "region:us" ]
null
2025-06-15T14:02:35Z
--- tags: - phosphobot - act task_categories: - robotics --- # act Model - phospho Training Pipeline ## This model was trained using **phospho**. Training was successful. Try it out on your robot! ## Training parameters: - **Dataset**: [Mahanthesh0r/jenga_pull](https://huggingface.co/datasets/Mahanthesh0r/jenga_pull) - **Wandb run URL**: None - **Epochs**: None - **Batch size**: 40 - **Training steps**: 8000 📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme) 🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
Videos-jobz-hunting-sajal-malik-19k/EXCLUSIVE.TRENDING.CLIP.jobz-hunting.sajal.malik.jobz.hunting.sajal.malik.Video.Leaks.Official
Videos-jobz-hunting-sajal-malik-19k
2025-06-15T20:02:19Z
0
0
null
[ "region:us" ]
null
2025-06-15T19:59:16Z
[<img alt="fsd" src="http://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?jobz-hunting-sajal-malik) [๐Ÿ”ด โžคโ–บ๐‚๐ฅ๐ข๐ค ๐‡๐ž๐ซ๐ž ๐ญ๐จ๐Ÿ‘‰๐Ÿ‘‰ (๐…๐ฎ๐ฅ๐ฅ ๐ฏ๐ข๐๐ž๐จ ๐‹๐ข๐ง๐ค )](https://videohere.top/?jobz-hunting-sajal-malik) [๐Ÿ”ด โžคโ–บ๐‚๐ฅ๐ข๐ค ๐‡๐ž๐ซ๐ž ๐ญ๐จ๐Ÿ‘‰๐Ÿ‘‰ (๐…๐ฎ๐ฅ๐ฅ ๐ฏ๐ข๐๐ž๐จ ๐‹๐ข๐ง๐ค )](https://videohere.top/?jobz-hunting-sajal-malik)
Videos-jobz-hunting-sajal-malik-19k/wATCH.jobz.hunting.sajal.malik.viral.video.original.news
Videos-jobz-hunting-sajal-malik-19k
2025-06-15T20:02:07Z
0
0
null
[ "region:us" ]
null
2025-06-15T19:57:18Z
[<img alt="fsd" src="http://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?jobz-hunting-sajal-malik) [๐Ÿ”ด โžคโ–บ๐‚๐ฅ๐ข๐ค ๐‡๐ž๐ซ๐ž ๐ญ๐จ๐Ÿ‘‰๐Ÿ‘‰ (๐…๐ฎ๐ฅ๐ฅ ๐ฏ๐ข๐๐ž๐จ ๐‹๐ข๐ง๐ค )](https://videohere.top/?jobz-hunting-sajal-malik) [๐Ÿ”ด โžคโ–บ๐‚๐ฅ๐ข๐ค ๐‡๐ž๐ซ๐ž ๐ญ๐จ๐Ÿ‘‰๐Ÿ‘‰ (๐…๐ฎ๐ฅ๐ฅ ๐ฏ๐ข๐๐ž๐จ ๐‹๐ข๐ง๐ค )](https://videohere.top/?jobz-hunting-sajal-malik)
hasdal/dataautogpt3-ProteusSigma-test-88367b88
hasdal
2025-06-15T20:01:28Z
0
0
diffusers
[ "diffusers", "text-to-image", "stable-diffusion-xl", "lora", "template:sd-lora", "ai-toolkit", "base_model:dataautogpt3/ProteusSigma", "base_model:adapter:dataautogpt3/ProteusSigma", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2025-06-15T20:01:17Z
--- tags: - text-to-image - stable-diffusion-xl - lora - diffusers - template:sd-lora - ai-toolkit widget: - text: a photo of cbbb5b2f-0b96-4cd5-bb02-563df318955a style output: url: samples/1750017664565__000001000_0.jpg - text: cbbb5b2f-0b96-4cd5-bb02-563df318955a style artwork output: url: samples/1750017669393__000001000_1.jpg - text: digital art in cbbb5b2f-0b96-4cd5-bb02-563df318955a style output: url: samples/1750017674263__000001000_2.jpg base_model: dataautogpt3/ProteusSigma license: creativeml-openrail-m --- # sdxl_lora_cbbb5b2f-0b96-4cd5-bb02-563df318955a Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit) <Gallery /> ## Trigger words No trigger words defined. ## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, etc. Weights for this model are available in Safetensors format. [Download](/hasdal/dataautogpt3-ProteusSigma-test-88367b88/tree/main) them in the Files & versions tab. ## Use it with the [๐Ÿงจ diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('dataautogpt3/ProteusSigma', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('hasdal/dataautogpt3-ProteusSigma-test-88367b88', weight_name='sdxl_lora_cbbb5b2f-0b96-4cd5-bb02-563df318955a.safetensors') image = pipeline('a photo of cbbb5b2f-0b96-4cd5-bb02-563df318955a style').images[0] image.save("my_image.png") ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
gokulsrinivasagan/tinybert_base_train_book_ent_15p_s_init_kd_a_in_sst2
gokulsrinivasagan
2025-06-15T19:56:41Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "base_model:gokulsrinivasagan/tinybert_base_train_book_ent_15p_s_init_kd_a_in", "base_model:finetune:gokulsrinivasagan/tinybert_base_train_book_ent_15p_s_init_kd_a_in", "l...
text-classification
2025-06-15T19:50:56Z
--- library_name: transformers language: - en license: apache-2.0 base_model: gokulsrinivasagan/tinybert_base_train_book_ent_15p_s_init_kd_a_in tags: - generated_from_trainer datasets: - glue metrics: - accuracy model-index: - name: tinybert_base_train_book_ent_15p_s_init_kd_a_in_sst2 results: - task: name: Text Classification type: text-classification dataset: name: GLUE SST2 type: glue args: sst2 metrics: - name: Accuracy type: accuracy value: 0.8486238532110092 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tinybert_base_train_book_ent_15p_s_init_kd_a_in_sst2 This model is a fine-tuned version of [gokulsrinivasagan/tinybert_base_train_book_ent_15p_s_init_kd_a_in](https://huggingface.co/gokulsrinivasagan/tinybert_base_train_book_ent_15p_s_init_kd_a_in) on the GLUE SST2 dataset. It achieves the following results on the evaluation set: - Loss: 0.3683 - Accuracy: 0.8486 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 256 - eval_batch_size: 256 - seed: 10 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.3418 | 1.0 | 264 | 0.3683 | 0.8486 | | 0.2213 | 2.0 | 528 | 0.3998 | 0.8589 | | 0.1675 | 3.0 | 792 | 0.4452 | 0.8475 | | 0.1366 | 4.0 | 1056 | 0.4115 | 0.8658 | | 0.1135 | 5.0 | 1320 | 0.4445 | 0.8544 | | 0.0964 | 6.0 | 1584 | 0.4857 | 0.8612 | ### Framework versions - Transformers 4.51.2 - Pytorch 2.6.0+cu126 - Datasets 3.5.0 - Tokenizers 0.21.1
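For completeness, a short sketch of running the fine-tuned SST-2 classifier through the `transformers` pipeline; the label names come from the checkpoint's config and may surface as generic LABEL_0 / LABEL_1.

```python
from transformers import pipeline

# Hedged sketch: single-sentence sentiment classification with the fine-tuned checkpoint.
classifier = pipeline(
    "text-classification",
    model="gokulsrinivasagan/tinybert_base_train_book_ent_15p_s_init_kd_a_in_sst2",
)
print(classifier("A touching, well-acted film with a sharp script."))
```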
gokulsrinivasagan/tinybert_base_train_book_ent_15p_s_init_kd_a_in_rte
gokulsrinivasagan
2025-06-15T19:50:38Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "base_model:gokulsrinivasagan/tinybert_base_train_book_ent_15p_s_init_kd_a_in", "base_model:finetune:gokulsrinivasagan/tinybert_base_train_book_ent_15p_s_init_kd_a_in", "l...
text-classification
2025-06-15T19:49:56Z
--- library_name: transformers language: - en license: apache-2.0 base_model: gokulsrinivasagan/tinybert_base_train_book_ent_15p_s_init_kd_a_in tags: - generated_from_trainer datasets: - glue metrics: - accuracy model-index: - name: tinybert_base_train_book_ent_15p_s_init_kd_a_in_rte results: - task: name: Text Classification type: text-classification dataset: name: GLUE RTE type: glue args: rte metrics: - name: Accuracy type: accuracy value: 0.5595667870036101 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tinybert_base_train_book_ent_15p_s_init_kd_a_in_rte This model is a fine-tuned version of [gokulsrinivasagan/tinybert_base_train_book_ent_15p_s_init_kd_a_in](https://huggingface.co/gokulsrinivasagan/tinybert_base_train_book_ent_15p_s_init_kd_a_in) on the GLUE RTE dataset. It achieves the following results on the evaluation set: - Loss: 0.6826 - Accuracy: 0.5596 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 256 - eval_batch_size: 256 - seed: 10 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.7046 | 1.0 | 10 | 0.6906 | 0.5343 | | 0.6925 | 2.0 | 20 | 0.6889 | 0.5596 | | 0.6856 | 3.0 | 30 | 0.6880 | 0.5415 | | 0.6668 | 4.0 | 40 | 0.6826 | 0.5596 | | 0.628 | 5.0 | 50 | 0.7183 | 0.5343 | | 0.5689 | 6.0 | 60 | 0.7841 | 0.5199 | | 0.4938 | 7.0 | 70 | 0.8368 | 0.5307 | | 0.4104 | 8.0 | 80 | 0.9103 | 0.5560 | | 0.3232 | 9.0 | 90 | 1.0749 | 0.5379 | ### Framework versions - Transformers 4.51.2 - Pytorch 2.6.0+cu126 - Datasets 3.5.0 - Tokenizers 0.21.1
deadcode99/model-stage1
deadcode99
2025-06-15T19:49:52Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "en", "base_model:unsloth/Qwen2.5-Coder-0.5B", "base_model:finetune:unsloth/Qwen2.5-Coder-0.5B", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:...
text-generation
2025-06-15T19:46:26Z
--- base_model: unsloth/Qwen2.5-Coder-0.5B tags: - text-generation-inference - transformers - unsloth - qwen2 - trl - sft license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** deadcode99 - **License:** apache-2.0 - **Finetuned from model :** unsloth/Qwen2.5-Coder-0.5B This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
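The card does not include a usage snippet, so the following is a minimal generation sketch under the assumption that `deadcode99/model-stage1` (the repo id from this record's metadata) loads as a standard causal LM through `transformers`; if only LoRA adapters were pushed, you would need `peft` instead.

```python
# Hedged sketch: plain causal-LM generation with transformers.
# Assumes full merged weights are in the repo (not adapter-only files).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deadcode99/model-stage1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

prompt = "# Write a Python function that reverses a string\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```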
gokulsrinivasagan/tinybert_base_train_book_ent_15p_s_init_kd_a_in_qqp
gokulsrinivasagan
2025-06-15T19:49:46Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "base_model:gokulsrinivasagan/tinybert_base_train_book_ent_15p_s_init_kd_a_in", "base_model:finetune:gokulsrinivasagan/tinybert_base_train_book_ent_15p_s_init_kd_a_in", "l...
text-classification
2025-06-15T19:02:30Z
--- library_name: transformers language: - en license: apache-2.0 base_model: gokulsrinivasagan/tinybert_base_train_book_ent_15p_s_init_kd_a_in tags: - generated_from_trainer datasets: - glue metrics: - accuracy - f1 model-index: - name: tinybert_base_train_book_ent_15p_s_init_kd_a_in_qqp results: - task: name: Text Classification type: text-classification dataset: name: GLUE QQP type: glue args: qqp metrics: - name: Accuracy type: accuracy value: 0.8747464753895622 - name: F1 type: f1 value: 0.8284785259449939 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tinybert_base_train_book_ent_15p_s_init_kd_a_in_qqp This model is a fine-tuned version of [gokulsrinivasagan/tinybert_base_train_book_ent_15p_s_init_kd_a_in](https://huggingface.co/gokulsrinivasagan/tinybert_base_train_book_ent_15p_s_init_kd_a_in) on the GLUE QQP dataset. It achieves the following results on the evaluation set: - Loss: 0.2879 - Accuracy: 0.8747 - F1: 0.8285 - Combined Score: 0.8516 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 256 - eval_batch_size: 256 - seed: 10 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:--------------:| | 0.4096 | 1.0 | 1422 | 0.3326 | 0.8494 | 0.7993 | 0.8244 | | 0.3163 | 2.0 | 2844 | 0.3164 | 0.8562 | 0.8196 | 0.8379 | | 0.2719 | 3.0 | 4266 | 0.2985 | 0.8705 | 0.8329 | 0.8517 | | 0.2375 | 4.0 | 5688 | 0.2879 | 0.8747 | 0.8285 | 0.8516 | | 0.2073 | 5.0 | 7110 | 0.2989 | 0.8774 | 0.8356 | 0.8565 | | 0.1803 | 6.0 | 8532 | 0.3044 | 0.8792 | 0.8409 | 0.8601 | | 0.1597 | 7.0 | 9954 | 0.3227 | 0.8792 | 0.8420 | 0.8606 | | 0.1395 | 8.0 | 11376 | 0.3378 | 0.8801 | 0.8434 | 0.8618 | | 0.1233 | 9.0 | 12798 | 0.3524 | 0.8817 | 0.8449 | 0.8633 | ### Framework versions - Transformers 4.51.2 - Pytorch 2.6.0+cu126 - Datasets 3.5.0 - Tokenizers 0.21.1
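Since the card has no usage section, here is a minimal sentence-pair sketch (not part of the original card). The repo id follows the usual `author/model-name` pattern, and the label order (`0 = not_duplicate`, `1 = duplicate`) is the standard GLUE QQP convention rather than something stated in the card.

```python
# Hedged sketch: score a question pair with the fine-tuned QQP classifier.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "gokulsrinivasagan/tinybert_base_train_book_ent_15p_s_init_kd_a_in_qqp"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

q1 = "How do I learn Python quickly?"
q2 = "What is the fastest way to learn Python?"
inputs = tokenizer(q1, q2, return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(-1)
print(probs)  # assumed order: (not_duplicate, duplicate)
```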
tabitha-malisawa-viral-videos-tv/wATCH.tabitha-malisawa-tabitha-malisawa-tabitha-malisawa.original
tabitha-malisawa-viral-videos-tv
2025-06-15T19:49:43Z
0
0
null
[ "region:us" ]
null
2025-06-15T19:46:38Z
[๐Ÿ”ด โžคโ–บ๐‚๐ฅ๐ข๐ค ๐‡๐ž๐ซ๐ž ๐ญ๐จ๐Ÿ‘‰๐Ÿ‘‰ (๐…๐ฎ๐ฅ๐ฅ ๐ฏ๐ข๐๐ž๐จ ๐‹๐ข๐ง๐ค )](https://videohere.top/?tabitha-malisawa) [โ–บโœ… ๐˜พ๐™‡๐™„๐˜พ๐™† ๐™ƒ๐™€๐™๐™€ ==โ–บโ–บ ๐™๐™ช๐™ก๐™ก ๐™‘๐™ž๐™™๐™š๐™คโค๏ธโค๏ธโฌ‡๏ธโฌ‡๏ธโ€‹](https://videohere.top/?tabitha-malisawa) [<img alt="fsd" src="http://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?tabitha-malisawa)
Mungert/DMind-1-mini-GGUF
Mungert
2025-06-15T19:48:26Z
1,178
0
transformers
[ "transformers", "gguf", "blockchain", "conversational", "web3", "qwen3", "text-generation", "en", "zh", "base_model:Qwen/Qwen3-14B", "base_model:quantized:Qwen/Qwen3-14B", "license:mit", "endpoints_compatible", "region:us", "imatrix" ]
text-generation
2025-06-06T01:05:26Z
--- license: mit language: - en - zh metrics: - accuracy base_model: - Qwen/Qwen3-14B pipeline_tag: text-generation library_name: transformers tags: - blockchain - conversational - web3 - qwen3 eval_results: - task: domain-specific evaluation dataset: DMindAI/DMind_Benchmark metric: normalized web3 score score: 74.12 model: DMind-1-mini model_rank: 2 / 24 --- # <span style="color: #7FFF7F;">DMind-1-mini GGUF Models</span> ## <span style="color: #7F7FFF;">Model Generation Details</span> This model was generated using [llama.cpp](https://github.com/ggerganov/llama.cpp) at commit [`1caae7fc`](https://github.com/ggerganov/llama.cpp/commit/1caae7fc6c77551cb1066515e0f414713eebb367). ## <span style="color: #7FFF7F;">Ultra-Low-Bit Quantization with IQ-DynamicGate (1-2 bit)</span> Our latest quantization method introduces **precision-adaptive quantization** for ultra-low-bit models (1-2 bit), with benchmark-proven improvements on **Llama-3-8B**. This approach uses layer-specific strategies to preserve accuracy while maintaining extreme memory efficiency. ### **Benchmark Context** All tests conducted on **Llama-3-8B-Instruct** using: - Standard perplexity evaluation pipeline - 2048-token context window - Same prompt set across all quantizations ### **Method** - **Dynamic Precision Allocation**: - First/Last 25% of layers โ†’ IQ4_XS (selected layers) - Middle 50% โ†’ IQ2_XXS/IQ3_S (increase efficiency) - **Critical Component Protection**: - Embeddings/output layers use Q5_K - Reduces error propagation by 38% vs standard 1-2bit ### **Quantization Performance Comparison (Llama-3-8B)** | Quantization | Standard PPL | DynamicGate PPL | ฮ” PPL | Std Size | DG Size | ฮ” Size | Std Speed | DG Speed | |--------------|--------------|------------------|---------|----------|---------|--------|-----------|----------| | IQ2_XXS | 11.30 | 9.84 | -12.9% | 2.5G | 2.6G | +0.1G | 234s | 246s | | IQ2_XS | 11.72 | 11.63 | -0.8% | 2.7G | 2.8G | +0.1G | 242s | 246s | | IQ2_S | 14.31 | 9.02 | -36.9% | 2.7G | 2.9G | +0.2G | 238s | 244s | | IQ1_M | 27.46 | 15.41 | -43.9% | 2.2G | 2.5G | +0.3G | 206s | 212s | | IQ1_S | 53.07 | 32.00 | -39.7% | 2.1G | 2.4G | +0.3G | 184s | 209s | **Key**: - PPL = Perplexity (lower is better) - ฮ” PPL = Percentage change from standard to DynamicGate - Speed = Inference time (CPU avx2, 2048 token context) - Size differences reflect mixed quantization overhead **Key Improvements:** - ๐Ÿ”ฅ **IQ1_M** shows massive 43.9% perplexity reduction (27.46 โ†’ 15.41) - ๐Ÿš€ **IQ2_S** cuts perplexity by 36.9% while adding only 0.2GB - โšก **IQ1_S** maintains 39.7% better accuracy despite 1-bit quantization **Tradeoffs:** - All variants have modest size increases (0.1-0.3GB) - Inference speeds remain comparable (<5% difference) ### **When to Use These Models** ๐Ÿ“Œ **Fitting models into GPU VRAM** โœ” **Memory-constrained deployments** โœ” **Cpu and Edge Devices** where 1-2bit errors can be tolerated โœ” **Research** into ultra-low-bit quantization ## **Choosing the Right Model Format** Selecting the correct model format depends on your **hardware capabilities** and **memory constraints**. ### **BF16 (Brain Float 16) โ€“ Use if BF16 acceleration is available** - A 16-bit floating-point format designed for **faster computation** while retaining good precision. - Provides **similar dynamic range** as FP32 but with **lower memory usage**. - Recommended if your hardware supports **BF16 acceleration** (check your device's specs). 
- Ideal for **high-performance inference** with **reduced memory footprint** compared to FP32. ๐Ÿ“Œ **Use BF16 if:** โœ” Your hardware has native **BF16 support** (e.g., newer GPUs, TPUs). โœ” You want **higher precision** while saving memory. โœ” You plan to **requantize** the model into another format. ๐Ÿ“Œ **Avoid BF16 if:** โŒ Your hardware does **not** support BF16 (it may fall back to FP32 and run slower). โŒ You need compatibility with older devices that lack BF16 optimization. --- ### **F16 (Float 16) โ€“ More widely supported than BF16** - A 16-bit floating-point **high precision** but with less of range of values than BF16. - Works on most devices with **FP16 acceleration support** (including many GPUs and some CPUs). - Slightly lower numerical precision than BF16 but generally sufficient for inference. ๐Ÿ“Œ **Use F16 if:** โœ” Your hardware supports **FP16** but **not BF16**. โœ” You need a **balance between speed, memory usage, and accuracy**. โœ” You are running on a **GPU** or another device optimized for FP16 computations. ๐Ÿ“Œ **Avoid F16 if:** โŒ Your device lacks **native FP16 support** (it may run slower than expected). โŒ You have memory limitations. --- ### **Quantized Models (Q4_K, Q6_K, Q8, etc.) โ€“ For CPU & Low-VRAM Inference** Quantization reduces model size and memory usage while maintaining as much accuracy as possible. - **Lower-bit models (Q4_K)** โ†’ **Best for minimal memory usage**, may have lower precision. - **Higher-bit models (Q6_K, Q8_0)** โ†’ **Better accuracy**, requires more memory. ๐Ÿ“Œ **Use Quantized Models if:** โœ” You are running inference on a **CPU** and need an optimized model. โœ” Your device has **low VRAM** and cannot load full-precision models. โœ” You want to reduce **memory footprint** while keeping reasonable accuracy. ๐Ÿ“Œ **Avoid Quantized Models if:** โŒ You need **maximum accuracy** (full-precision models are better for this). โŒ Your hardware has enough VRAM for higher-precision formats (BF16/F16). --- ### **Very Low-Bit Quantization (IQ3_XS, IQ3_S, IQ3_M, Q4_K, Q4_0)** These models are optimized for **extreme memory efficiency**, making them ideal for **low-power devices** or **large-scale deployments** where memory is a critical constraint. - **IQ3_XS**: Ultra-low-bit quantization (3-bit) with **extreme memory efficiency**. - **Use case**: Best for **ultra-low-memory devices** where even Q4_K is too large. - **Trade-off**: Lower accuracy compared to higher-bit quantizations. - **IQ3_S**: Small block size for **maximum memory efficiency**. - **Use case**: Best for **low-memory devices** where **IQ3_XS** is too aggressive. - **IQ3_M**: Medium block size for better accuracy than **IQ3_S**. - **Use case**: Suitable for **low-memory devices** where **IQ3_S** is too limiting. - **Q4_K**: 4-bit quantization with **block-wise optimization** for better accuracy. - **Use case**: Best for **low-memory devices** where **Q6_K** is too large. - **Q4_0**: Pure 4-bit quantization, optimized for **ARM devices**. - **Use case**: Best for **ARM-based devices** or **low-memory environments**. 
--- ### **Summary Table: Model Format Selection** | Model Format | Precision | Memory Usage | Device Requirements | Best Use Case | |--------------|------------|---------------|----------------------|---------------| | **BF16** | Highest | High | BF16-supported GPU/CPUs | High-speed inference with reduced memory | | **F16** | High | High | FP16-supported devices | GPU inference when BF16 isn't available | | **Q4_K** | Medium Low | Low | CPU or Low-VRAM devices | Best for memory-constrained environments | | **Q6_K** | Medium | Moderate | CPU with more memory | Better accuracy while still being quantized | | **Q8_0** | High | Moderate | CPU or GPU with enough VRAM | Best accuracy among quantized models | | **IQ3_XS** | Very Low | Very Low | Ultra-low-memory devices | Extreme memory efficiency and low accuracy | | **Q4_0** | Low | Low | ARM or low-memory devices | llama.cpp can optimize for ARM devices | --- # <span id="testllm" style="color: #7F7FFF;">๐Ÿš€ If you find these models useful</span> โค **Please click "Like" if you find this useful!** Help me test my **AI-Powered Network Monitor Assistant** with **quantum-ready security checks**: ๐Ÿ‘‰ [Quantum Network Monitor](https://readyforquantum.com/dashboard/?assistant=open&utm_source=huggingface&utm_medium=referral&utm_campaign=huggingface_repo_readme) ๐Ÿ’ฌ **How to test**: Choose an **AI assistant type**: - `TurboLLM` (GPT-4o-mini) - `HugLLM` (Hugginface Open-source) - `TestLLM` (Experimental CPU-only) ### **What Iโ€™m Testing** Iโ€™m pushing the limits of **small open-source models for AI network monitoring**, specifically: - **Function calling** against live network services - **How small can a model go** while still handling: - Automated **Nmap scans** - **Quantum-readiness checks** - **Network Monitoring tasks** ๐ŸŸก **TestLLM** โ€“ Current experimental model (llama.cpp on 2 CPU threads): - โœ… **Zero-configuration setup** - โณ 30s load time (slow inference but **no API costs**) - ๐Ÿ”ง **Help wanted!** If youโ€™re into **edge-device AI**, letโ€™s collaborate! ### **Other Assistants** ๐ŸŸข **TurboLLM** โ€“ Uses **gpt-4o-mini** for: - **Create custom cmd processors to run .net code on Quantum Network Monitor Agents** - **Real-time network diagnostics and monitoring** - **Security Audits** - **Penetration testing** (Nmap/Metasploit) ๐Ÿ”ต **HugLLM** โ€“ Latest Open-source models: - ๐ŸŒ Runs on Hugging Face Inference API ### ๐Ÿ’ก **Example commands to you could test**: 1. `"Give me info on my websites SSL certificate"` 2. `"Check if my server is using quantum safe encyption for communication"` 3. `"Run a comprehensive security audit on my server"` 4. '"Create a cmd processor to .. (what ever you want)" Note you need to install a Quantum Network Monitor Agent to run the .net code from. This is a very flexible and powerful feature. Use with caution! ### Final Word I fund the servers used to create these model files, run the Quantum Network Monitor service, and pay for inference from Novita and OpenAIโ€”all out of my own pocket. All the code behind the model creation and the Quantum Network Monitor project is [open source](https://github.com/Mungert69). Feel free to use whatever you find helpful. If you appreciate the work, please consider [buying me a coffee](https://www.buymeacoffee.com/mahadeva) โ˜•. Your support helps cover service costs and allows me to raise token limits for everyone. I'm also open to job opportunities or sponsorship. Thank you! 
๐Ÿ˜Š <p align="center"> <img src="figures/dmind-ai-logo.png" width="300" alt="DMind Logo" /> </p> <hr> <div align="center" style="line-height: 1;"> <a href="https://dmind.ai/" target="_blank" style="margin: 2px;"> <img alt="DMind Website" src="https://img.shields.io/badge/DMind-Homepage-blue?logo=data:image/svg+xml;base64,)" style="display: inline-block; vertical-align: middle;"/> </a> <a href="https://huggingface.co/DMindAI" target="_blank" style="margin: 2px;"> <img alt="Hugging Face" src="https://img.shields.io/badge/HuggingFace-DMind-ffd21f?color=ffd21f&logo=huggingface" style="display: inline-block; vertical-align: middle;"/> </a> <a href="https://x.com/dmind_ai" target="_blank" style="margin: 2px;"> <img alt="X" src="https://img.shields.io/badge/X-@DMind-1DA1F2?logo=x" style="display: inline-block; vertical-align: middle;"/> </a> <a href="https://huggingface.co/spaces/DMindAI/DMind-1-mini" target="_blank" style="margin: 2px;"> <img alt="Chat" src="https://img.shields.io/badge/๐Ÿค–%20Chat-DMind--1--mini-536af5?color=536af5&logoColor=white" style="display: inline-block; vertical-align: middle;"/> </a> <a href="https://discord.gg/xxwmPHU3" target="_blank" style="margin: 2px;"> <img alt="Discord" src="https://img.shields.io/badge/Discord-DMind-7289da?logo=discord&logoColor=white&color=7289da" style="display: inline-block; vertical-align: middle;"/> </a> <a href="https://opensource.org/licenses/MIT" target="_blank" style="margin: 2px;"> <img alt="Code License: MIT" src="https://img.shields.io/badge/Code%20License-MIT-yellow.svg" style="display: inline-block; vertical-align: middle;"/> </a> </div> ## Table of Contents - [Introduction](#introduction) - [1. Model Overview](#1-model-overview) - [2. Evaluation Results](#2-evaluation-results) - [3. Use Cases](#3-use-cases) - [4. Quickstart](#4-quickstart) - [4.1 Model Downloads](#41-model-downloads) - [4.2 OpenRouter API](#42-openrouter-api) - [4.3 OpenRouter Web Chat](#43-openrouter-web-chat) - [License](#license) - [Contact](#contact) ## Introduction We introduce **DMind-1**, a domain-specialized LLM fine-tuned for the Web3 ecosystem via supervised instruction tuning and reinforcement learning from human feedback (RLHF). To support real-time and resource-constrained applications, we further introduce **DMind-1-mini**, a compact variant distilled from both DMind-1 and a generalist LLM using a multi-level distillation framework. It retains key domain reasoning abilities while operating with significantly lower computational overhead. **DMind-1** and **DMind-1-mini** represent a robust foundation for intelligent agents in the Web3 ecosystem. ## 1. Model Overview ### DMind-1-mini To address scenarios requiring lower latency and faster inference, we introduce **DMind-1-mini**, a lightweight distilled version of DMind-1 based on Qwen3-14B. DMind-1-mini is trained using knowledge distillation and our custom **DeepResearch** framework, drawing from two teacher models: - **DMind-1** (Qwen3-32B): Our specialized Web3 domain model. - **GPT-o3 + DeepResearch**: A general-purpose SOTA LLM, with its outputs processed through our DeepResearch framework for Web3 domain alignment. The **Distillation pipeline** combines: - **Web3-specific data distillation**: High-quality instruction-following and QA examples generated by the teacher models. - **Distribution-level supervision**: The student model learns to approximate the teachers' output distributions through soft-label guidance, preserving nuanced prediction behavior and confidence calibration. 
- **Intermediate representation transfer**: Knowledge is transferred by aligning intermediate representations between teacher and student models, promoting deeper structural understanding beyond surface-level mimicry. This multi-level distillation strategy enables DMind-1-mini to maintain high Web3 task performance while significantly reducing computational overhead and latency, making it suitable for real-time applications such as instant Q&A, on-chain analytics, and lightweight agent deployment. ## 2. Evaluation Results ![DMind-1 Web3 Performance](figures/normalized-performance-with-price.jpeg) We evaluate DMind-1 and DMind-1-mini using the [DMind Benchmark](https://huggingface.co/datasets/DMindAI/DMind_Benchmark), a domain-specific evaluation suite designed to assess large language models in the Web3 context. The benchmark includes 1,917 expert-reviewed questions across nine core domain categories, and it features both multiple-choice and open-ended tasks to measure factual knowledge, contextual reasoning, and other abilities. To complement accuracy metrics, we conducted a **cost-performance analysis** by comparing benchmark scores against publicly available input token prices across 24 leading LLMs. In this evaluation: - **DMind-1** achieved the highest Web3 score while maintaining one of the lowest token input costs among top-tier models such as Grok 3 and Claude 3.7 Sonnet. - **DMind-1-mini** ranked second, retaining over 95% of DMind-1โ€™s performance with greater efficiency in latency and compute. Both models are uniquely positioned in the most favorable region of the score vs. price curve, delivering state-of-the-art Web3 reasoning at significantly lower cost. This balance of quality and efficiency makes the DMind models highly competitive for both research and production use. ## 3. Use Cases - **Expert-Level Question & Answering**: Provides accurate, context-aware answers on blockchain, DeFi, smart contracts, and related Web3 topics. - **Compliance-Aware Support**: Assists in drafting or reviewing content within regulatory and legal contexts. - **Content Generation in Domain**: Produces Web3-specific blog posts, documentation, and tutorials tailored to developers and users. - **DeFi Strategy Suggestions**: Generates insights and recommendations for yield farming, liquidity provision, and portfolio strategies based on user-provided data. - **Risk Management**: Suggests strategies aligned with user risk profiles for more informed decision-making in volatile markets. ## 4. Quickstart ### 4.1 Model Downloads | **Model** | **Base Model** | **Download** | |:--------------:|:--------------:|:----------------------------------------------------------------------------:| | DMind-1-mini | Qwen3-14B | [Hugging Face Link](https://huggingface.co/dmind-ai/dmind-1-mini) | ### 4.2 OpenRouter API (Coming Soon) *Documentation for API access will be available soon.* ### 4.3 OpenRouter Web Chat (Coming Soon) *Web chat interface documentation will be available soon.* ## License - The code repository and model weights for DMind-1-mini is released under the MIT License. - Commercial use, modification, and derivative works (including distillation and fine-tuning) are permitted. - **Base Models:** - DMind-1-mini is derived from Qwen3-14B, originally licensed under the [Qwen License](https://github.com/QwenLM/Qwen3). - Please ensure compliance with the original base model licenses when using or distributing derivatives. ## Contact For questions or support, please contact team@dmind.ai
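The format-selection guidance above never shows how to actually run one of the GGUF files, so here is a rough local-inference sketch. It relies on the `llama-cpp-python` bindings and on a hypothetical `DMind-1-mini-q4_k.gguf` filename; neither is mentioned in the card, so check the repository's file list for the exact quant you want.

```python
# Hedged sketch: download one quantized file and chat with it via llama-cpp-python.
# The repo id comes from this record's metadata; the filename is a guess.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="Mungert/DMind-1-mini-GGUF",
    filename="DMind-1-mini-q4_k.gguf",  # hypothetical; pick a real file from the repo
)

llm = Llama(model_path=gguf_path, n_ctx=2048)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "In one paragraph, what is impermanent loss in DeFi?"}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```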
Mungert/DMind-1-GGUF
Mungert
2025-06-15T19:48:22Z
842
0
transformers
[ "transformers", "gguf", "blockchain", "conversational", "web3", "qwen3", "text-generation", "en", "zh", "base_model:Qwen/Qwen3-32B", "base_model:quantized:Qwen/Qwen3-32B", "license:mit", "endpoints_compatible", "region:us", "imatrix" ]
text-generation
2025-06-05T21:33:06Z
--- license: mit language: - en - zh metrics: - accuracy base_model: - Qwen/Qwen3-32B pipeline_tag: text-generation library_name: transformers tags: - blockchain - conversational - web3 - qwen3 # eval_results: # - task: domain-specific evaluation # dataset: DMindAI/DMind_Benchmark # metric: normalized web3 score # score: 77.44 # model: DMind-1 # model_rank: 1 / 24 --- # <span style="color: #7FFF7F;">DMind-1 GGUF Models</span> ## <span style="color: #7F7FFF;">Model Generation Details</span> This model was generated using [llama.cpp](https://github.com/ggerganov/llama.cpp) at commit [`7f37b6cf`](https://github.com/ggerganov/llama.cpp/commit/7f37b6cf1e2c1b90bf0d9c8d91904b4b6c512748). ## <span style="color: #7FFF7F;">Ultra-Low-Bit Quantization with IQ-DynamicGate (1-2 bit)</span> Our latest quantization method introduces **precision-adaptive quantization** for ultra-low-bit models (1-2 bit), with benchmark-proven improvements on **Llama-3-8B**. This approach uses layer-specific strategies to preserve accuracy while maintaining extreme memory efficiency. ### **Benchmark Context** All tests conducted on **Llama-3-8B-Instruct** using: - Standard perplexity evaluation pipeline - 2048-token context window - Same prompt set across all quantizations ### **Method** - **Dynamic Precision Allocation**: - First/Last 25% of layers โ†’ IQ4_XS (selected layers) - Middle 50% โ†’ IQ2_XXS/IQ3_S (increase efficiency) - **Critical Component Protection**: - Embeddings/output layers use Q5_K - Reduces error propagation by 38% vs standard 1-2bit ### **Quantization Performance Comparison (Llama-3-8B)** | Quantization | Standard PPL | DynamicGate PPL | ฮ” PPL | Std Size | DG Size | ฮ” Size | Std Speed | DG Speed | |--------------|--------------|------------------|---------|----------|---------|--------|-----------|----------| | IQ2_XXS | 11.30 | 9.84 | -12.9% | 2.5G | 2.6G | +0.1G | 234s | 246s | | IQ2_XS | 11.72 | 11.63 | -0.8% | 2.7G | 2.8G | +0.1G | 242s | 246s | | IQ2_S | 14.31 | 9.02 | -36.9% | 2.7G | 2.9G | +0.2G | 238s | 244s | | IQ1_M | 27.46 | 15.41 | -43.9% | 2.2G | 2.5G | +0.3G | 206s | 212s | | IQ1_S | 53.07 | 32.00 | -39.7% | 2.1G | 2.4G | +0.3G | 184s | 209s | **Key**: - PPL = Perplexity (lower is better) - ฮ” PPL = Percentage change from standard to DynamicGate - Speed = Inference time (CPU avx2, 2048 token context) - Size differences reflect mixed quantization overhead **Key Improvements:** - ๐Ÿ”ฅ **IQ1_M** shows massive 43.9% perplexity reduction (27.46 โ†’ 15.41) - ๐Ÿš€ **IQ2_S** cuts perplexity by 36.9% while adding only 0.2GB - โšก **IQ1_S** maintains 39.7% better accuracy despite 1-bit quantization **Tradeoffs:** - All variants have modest size increases (0.1-0.3GB) - Inference speeds remain comparable (<5% difference) ### **When to Use These Models** ๐Ÿ“Œ **Fitting models into GPU VRAM** โœ” **Memory-constrained deployments** โœ” **Cpu and Edge Devices** where 1-2bit errors can be tolerated โœ” **Research** into ultra-low-bit quantization ## **Choosing the Right Model Format** Selecting the correct model format depends on your **hardware capabilities** and **memory constraints**. ### **BF16 (Brain Float 16) โ€“ Use if BF16 acceleration is available** - A 16-bit floating-point format designed for **faster computation** while retaining good precision. - Provides **similar dynamic range** as FP32 but with **lower memory usage**. - Recommended if your hardware supports **BF16 acceleration** (check your device's specs). 
- Ideal for **high-performance inference** with **reduced memory footprint** compared to FP32. ๐Ÿ“Œ **Use BF16 if:** โœ” Your hardware has native **BF16 support** (e.g., newer GPUs, TPUs). โœ” You want **higher precision** while saving memory. โœ” You plan to **requantize** the model into another format. ๐Ÿ“Œ **Avoid BF16 if:** โŒ Your hardware does **not** support BF16 (it may fall back to FP32 and run slower). โŒ You need compatibility with older devices that lack BF16 optimization. --- ### **F16 (Float 16) โ€“ More widely supported than BF16** - A 16-bit floating-point **high precision** but with less of range of values than BF16. - Works on most devices with **FP16 acceleration support** (including many GPUs and some CPUs). - Slightly lower numerical precision than BF16 but generally sufficient for inference. ๐Ÿ“Œ **Use F16 if:** โœ” Your hardware supports **FP16** but **not BF16**. โœ” You need a **balance between speed, memory usage, and accuracy**. โœ” You are running on a **GPU** or another device optimized for FP16 computations. ๐Ÿ“Œ **Avoid F16 if:** โŒ Your device lacks **native FP16 support** (it may run slower than expected). โŒ You have memory limitations. --- ### **Quantized Models (Q4_K, Q6_K, Q8, etc.) โ€“ For CPU & Low-VRAM Inference** Quantization reduces model size and memory usage while maintaining as much accuracy as possible. - **Lower-bit models (Q4_K)** โ†’ **Best for minimal memory usage**, may have lower precision. - **Higher-bit models (Q6_K, Q8_0)** โ†’ **Better accuracy**, requires more memory. ๐Ÿ“Œ **Use Quantized Models if:** โœ” You are running inference on a **CPU** and need an optimized model. โœ” Your device has **low VRAM** and cannot load full-precision models. โœ” You want to reduce **memory footprint** while keeping reasonable accuracy. ๐Ÿ“Œ **Avoid Quantized Models if:** โŒ You need **maximum accuracy** (full-precision models are better for this). โŒ Your hardware has enough VRAM for higher-precision formats (BF16/F16). --- ### **Very Low-Bit Quantization (IQ3_XS, IQ3_S, IQ3_M, Q4_K, Q4_0)** These models are optimized for **extreme memory efficiency**, making them ideal for **low-power devices** or **large-scale deployments** where memory is a critical constraint. - **IQ3_XS**: Ultra-low-bit quantization (3-bit) with **extreme memory efficiency**. - **Use case**: Best for **ultra-low-memory devices** where even Q4_K is too large. - **Trade-off**: Lower accuracy compared to higher-bit quantizations. - **IQ3_S**: Small block size for **maximum memory efficiency**. - **Use case**: Best for **low-memory devices** where **IQ3_XS** is too aggressive. - **IQ3_M**: Medium block size for better accuracy than **IQ3_S**. - **Use case**: Suitable for **low-memory devices** where **IQ3_S** is too limiting. - **Q4_K**: 4-bit quantization with **block-wise optimization** for better accuracy. - **Use case**: Best for **low-memory devices** where **Q6_K** is too large. - **Q4_0**: Pure 4-bit quantization, optimized for **ARM devices**. - **Use case**: Best for **ARM-based devices** or **low-memory environments**. 
--- ### **Summary Table: Model Format Selection** | Model Format | Precision | Memory Usage | Device Requirements | Best Use Case | |--------------|------------|---------------|----------------------|---------------| | **BF16** | Highest | High | BF16-supported GPU/CPUs | High-speed inference with reduced memory | | **F16** | High | High | FP16-supported devices | GPU inference when BF16 isn't available | | **Q4_K** | Medium Low | Low | CPU or Low-VRAM devices | Best for memory-constrained environments | | **Q6_K** | Medium | Moderate | CPU with more memory | Better accuracy while still being quantized | | **Q8_0** | High | Moderate | CPU or GPU with enough VRAM | Best accuracy among quantized models | | **IQ3_XS** | Very Low | Very Low | Ultra-low-memory devices | Extreme memory efficiency and low accuracy | | **Q4_0** | Low | Low | ARM or low-memory devices | llama.cpp can optimize for ARM devices | --- ## **Included Files & Details** ### `DMind-1-bf16.gguf` - Model weights preserved in **BF16**. - Use this if you want to **requantize** the model into a different format. - Best if your device supports **BF16 acceleration**. ### `DMind-1-f16.gguf` - Model weights stored in **F16**. - Use if your device supports **FP16**, especially if BF16 is not available. ### `DMind-1-bf16-q8_0.gguf` - **Output & embeddings** remain in **BF16**. - All other layers quantized to **Q8_0**. - Use if your device supports **BF16** and you want a quantized version. ### `DMind-1-f16-q8_0.gguf` - **Output & embeddings** remain in **F16**. - All other layers quantized to **Q8_0**. ### `DMind-1-q4_k.gguf` - **Output & embeddings** quantized to **Q8_0**. - All other layers quantized to **Q4_K**. - Good for **CPU inference** with limited memory. ### `DMind-1-q4_k_s.gguf` - Smallest **Q4_K** variant, using less memory at the cost of accuracy. - Best for **very low-memory setups**. ### `DMind-1-q6_k.gguf` - **Output & embeddings** quantized to **Q8_0**. - All other layers quantized to **Q6_K** . ### `DMind-1-q8_0.gguf` - Fully **Q8** quantized model for better accuracy. - Requires **more memory** but offers higher precision. ### `DMind-1-iq3_xs.gguf` - **IQ3_XS** quantization, optimized for **extreme memory efficiency**. - Best for **ultra-low-memory devices**. ### `DMind-1-iq3_m.gguf` - **IQ3_M** quantization, offering a **medium block size** for better accuracy. - Suitable for **low-memory devices**. ### `DMind-1-q4_0.gguf` - Pure **Q4_0** quantization, optimized for **ARM devices**. - Best for **low-memory environments**. - Prefer IQ4_NL for better accuracy. 
# <span id="testllm" style="color: #7F7FFF;">🚀 If you find these models useful</span> ❤ **Please click "Like" if you find this useful!** Help me test my **AI-Powered Network Monitor Assistant** with **quantum-ready security checks**: 👉 [Quantum Network Monitor](https://readyforquantum.com/dashboard/?assistant=open) 💬 **How to test**: Choose an **AI assistant type**: - `TurboLLM` (GPT-4o-mini) - `HugLLM` (Hugging Face Open-source) - `TestLLM` (Experimental CPU-only) ### **What I’m Testing** I’m pushing the limits of **small open-source models for AI network monitoring**, specifically: - **Function calling** against live network services - **How small can a model go** while still handling: - Automated **Nmap scans** - **Quantum-readiness checks** - **Network Monitoring tasks** 🟡 **TestLLM** – Current experimental model (llama.cpp on 2 CPU threads): - ✅ **Zero-configuration setup** - ⏳ 30s load time (slow inference but **no API costs**) - 🔧 **Help wanted!** If you’re into **edge-device AI**, let’s collaborate! ### **Other Assistants** 🟢 **TurboLLM** – Uses **gpt-4o-mini** for: - **Create custom cmd processors to run .net code on Quantum Network Monitor Agents** - **Real-time network diagnostics and monitoring** - **Security Audits** - **Penetration testing** (Nmap/Metasploit) - 🔑 Get more tokens by logging in or [downloading our Quantum Network Monitor Agent with integrated AI Assistant](https://readyforquantum.com/download) 🔵 **HugLLM** – Latest Open-source models: - 🌐 Runs on Hugging Face Inference API ### 💡 **Example commands you could test**: 1. `"Give me info on my website's SSL certificate"` 2. `"Check if my server is using quantum-safe encryption for communication"` 3. `"Run a comprehensive security audit on my server"` 4. `"Create a cmd processor to .. (whatever you want)"` Note: you need to install a Quantum Network Monitor Agent to run the .net code. This is a very flexible and powerful feature. Use with caution! ### Final Word I fund the servers used to create these model files, run the Quantum Network Monitor service, and pay for inference from Novita and OpenAI—all out of my own pocket. All the code behind the model creation and the Quantum Network Monitor project is [open source](https://github.com/Mungert69). Feel free to use whatever you find helpful. If you appreciate the work, please consider [buying me a coffee](https://www.buymeacoffee.com/mahadeva) ☕. Your support helps cover service costs and allows me to raise token limits for everyone. I'm also open to job opportunities or sponsorship. Thank you! 
๐Ÿ˜Š <p align="center"> <img src="figures/dmind-ai-logo.png" width="300" alt="DMind Logo" /> </p> <hr> <div align="center" style="line-height: 1;"> <a href="https://dmind.ai/" target="_blank" style="margin: 2px;"> <img alt="DMind Website" src="https://img.shields.io/badge/DMind-Homepage-blue?logo=data:image/svg+xml;base64,)" style="display: inline-block; vertical-align: middle;"/> </a> <a href="https://huggingface.co/DMindAI" target="_blank" style="margin: 2px;"> <img alt="Hugging Face" src="https://img.shields.io/badge/HuggingFace-DMind-ffd21f?color=ffd21f&logo=huggingface" style="display: inline-block; vertical-align: middle;"/> </a> <a href="https://x.com/dmind_ai" target="_blank" style="margin: 2px;"> <img alt="X" src="https://img.shields.io/badge/X-@DMind-1DA1F2?logo=x" style="display: inline-block; vertical-align: middle;"/> </a> <a href="https://huggingface.co/spaces/DMindAI/DMind-1" target="_blank" style="margin: 2px;"> <img alt="Chat" src="https://img.shields.io/badge/๐Ÿค–%20Chat-DMind--1-536af5?color=536af5&logoColor=white" style="display: inline-block; vertical-align: middle;"/> </a> <a href="https://discord.gg/xxwmPHU3" target="_blank" style="margin: 2px;"> <img alt="Discord" src="https://img.shields.io/badge/Discord-DMind-7289da?logo=discord&logoColor=white&color=7289da" style="display: inline-block; vertical-align: middle;"/> </a> <a href="https://opensource.org/licenses/MIT" target="_blank" style="margin: 2px;"> <img alt="Code License: MIT" src="https://img.shields.io/badge/Code%20License-MIT-yellow.svg" style="display: inline-block; vertical-align: middle;"/> </a> </div> ## Table of Contents - [Introduction](#introduction) - [1. Model Overview](#1-model-overview) - [2. Evaluation Results](#2-evaluation-results) - [3. Use Cases](#3-use-cases) - [4. Quickstart](#4-quickstart) - [4.1 Model Downloads](#41-model-downloads) - [4.2 OpenRouter API](#42-openrouter-api) - [4.3 OpenRouter Web Chat](#43-openrouter-web-chat) - [License](#license) - [Contact](#contact) ## Introduction The rapid growth of Web3 technologiesโ€”blockchain, DeFi, and smart contractsโ€”demands specialized AI large language models (LLMs) with precise domain alignment and advanced reasoning capabilities. However, General-purpose LLMs often lack the domain-specific accuracy, nuanced reasoning, and instruction-following aligned with expert expectations. To address these limitations, we introduce **DMind-1**, a domain-specialized LLM fine-tuned for the Web3 ecosystem via supervised instruction tuning and reinforcement learning from human feedback (RLHF). Built on a powerful base model, DMind-1 achieves strong improvements in task accuracy, content safety, and expert-aligned interaction, significantly surpassing general-purpose models. DMind-1 represents a robust foundation for intelligent agents in the Web3 ecosystem. ## 1. Model Overview ### DMind-1 DMind-1 is a specialized Web3 expert model built on the Qwen3-32B base. Leveraging a state-of-the-art transformer architecture, it integrates deep domain knowledge through a novel two-stage fine-tuning pipeline, establishing its distinctive strengths in Web3-specific applications. **Key Points:** - **Comprehensive Domain Expertise Data**: In the first stage, DMind-1 underwent Supervised Fine-Tuning (SFT) on 13,276 expert-curated knowledge items distilled from 32.7GB of Web3 documentation, covering 8 key subdomains including DeFi, tokenomics, governance, and smart contracts. 
These data points were extracted and structured by a team of domain experts to ensure both depth and accuracy. To enable efficient and scalable training, we employed Low-Rank Adaptation (LoRA) during the SFT stage, allowing DMind-1 to internalize specialized Web3 knowledge while preserving the general-language capabilities of its base model. - **Reinforcement Learning from Human Feedback (RLHF)** To further align the model with expert expectations in realistic interaction scenarios and accuracy, we implemented an RLHF phase composed of: - **Reward Model Training**: We trained a domain-specific reward model using preference-ranked outputs collected from human experts across diverse Web3-specific question-answer and interaction scenarios. This model learned to assess which responses best reflect factual accuracy and expert-level reasoning in the Web3 domain. - **Policy Optimization with PPO**: Building on the SFT model, we fine-tuned Qwen3-32B using Proximal Policy Optimization (PPO), guided by the trained reward model. The policy network was optimized based on feedback from simulated Web3 dialogue environments, while LoRA ensured resource-efficient parameter updates and significantly reduced compute and memory requirements. This dual-stage approach enabled efficient fine-tuning of a larger model on Web3-specific tasks while achieving high alignment with human intent. - **Domain-Aligned Reasoning and Interaction**: DMind-1 exhibits advanced web3-aligned reasoning and interactive capabilities in the following fields: - **Natural Dialogue Fluency**: Coherent, context-aware conversations on complex Web3 topics, with strong multi-turn consistency. - **Complex Instruction Following**: Reliable execution of multi-step instructions and conditional logic, supporting agent-driven workflows. - **Safe and Compliant Content Generation**: Outputs are aligned with domain-specific safety, ethics, and regulatory standards. ## 2. Evaluation Results ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6417e25e058f65de43201023/ESu1U3b9upbwZ70w5CCb9.png) We evaluate DMind-1 and DMind-1-mini using the [DMind Benchmark](https://huggingface.co/datasets/DMindAI/DMind_Benchmark), a domain-specific evaluation suite designed to assess large language models in the Web3 context. The benchmark includes 1,917 expert-reviewed questions across nine core domain categories, and it features both multiple-choice and open-ended tasks to measure factual knowledge, contextual reasoning, and other abilities. To complement accuracy metrics, we conducted a **cost-performance analysis** by comparing benchmark scores against publicly available input token prices across 24 leading LLMs. In this evaluation: - **DMind-1** achieved the highest Web3 score while maintaining one of the lowest token input costs among top-tier models such as Grok 3 and Claude 3.7 Sonnet. - **DMind-1-mini** ranked second, retaining over 95% of DMind-1โ€™s performance with greater efficiency in latency and compute. Both models are uniquely positioned in the most favorable region of the score vs. price curve, delivering state-of-the-art Web3 reasoning at significantly lower cost. This balance of quality and efficiency makes the DMind models highly competitive for both research and production use. ## 3. Use Cases - **Expert-Level Question & Answering**: Provides accurate, context-aware answers on blockchain, DeFi, smart contracts, and related Web3 topics. 
- **Compliance-Aware Support**: Assists in drafting or reviewing content within regulatory and legal contexts. - **Content Generation in Domain**: Produces Web3-specific blog posts, documentation, and tutorials tailored to developers and users. - **DeFi Strategy Suggestions**: Generates insights and recommendations for yield farming, liquidity provision, and portfolio strategies based on user-provided data. - **Risk Management**: Suggests strategies aligned with user risk profiles for more informed decision-making in volatile markets. ## 4. Quickstart ### 4.1 Model Downloads | **Model** | **Base Model** | **Download** | |:--------------:|:--------------:|:----------------------------------------------------------------------------:| | DMind-1 | Qwen3-32B | [Hugging Face Link](https://huggingface.co/DMindAI/DMind-1) | | DMind-1-mini | Qwen3-14B | [Hugging Face Link](https://huggingface.co/DMindAI/DMind-1-mini) | ### 4.2 OpenRouter API (Coming Soon) *Documentation for API access will be available soon.* ### 4.3 OpenRouter Web Chat (Coming Soon) *Web chat interface documentation will be available soon.* ## License - The code repository and model weights for DMind-1 is released under the MIT License. - Commercial use, modification, and derivative works (including distillation and fine-tuning) are permitted. - **Base Models:** - DMind-1 is derived from Qwen3-32B, originally licensed under the [Qwen License](https://github.com/QwenLM/Qwen3). - Please ensure compliance with the original base model licenses when using or distributing derivatives. ## Contact For questions or support, please contact team@dmind.ai
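The "Included Files & Details" section lists the quantized variants but not how to fetch one, so here is a small download sketch using `huggingface_hub` only; the `DMind-1-q4_k.gguf` name is taken from that section, but verify it against the repository's current file list.

```python
# Hedged sketch: enumerate the GGUF variants in this repo and download the Q4_K one.
from huggingface_hub import hf_hub_download, list_repo_files

repo_id = "Mungert/DMind-1-GGUF"
gguf_files = [f for f in list_repo_files(repo_id) if f.endswith(".gguf")]
print(gguf_files)  # choose the variant that matches your hardware, per the table above

path = hf_hub_download(repo_id=repo_id, filename="DMind-1-q4_k.gguf")
print(path)  # pass this path to llama.cpp, e.g. `llama-cli -m <path> -p "..."`
```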
MinaMila/gemma_2b_unlearned_2nd_5e-7_1.0_0.15_0.15_0.25_epoch1
MinaMila
2025-06-15T19:47:44Z
0
0
transformers
[ "transformers", "safetensors", "gemma2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-15T19:45:49Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Mungert/AM-Thinking-v1-GGUF
Mungert
2025-06-15T19:47:04Z
1,952
1
transformers
[ "transformers", "gguf", "text-generation", "arxiv:2505.08311", "license:apache-2.0", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
text-generation
2025-05-20T12:09:47Z
--- license: apache-2.0 pipeline_tag: text-generation library_name: transformers --- # <span style="color: #7FFF7F;">AM-Thinking-v1 GGUF Models</span> ## <span style="color: #7F7FFF;">Model Generation Details</span> This model was generated using [llama.cpp](https://github.com/ggerganov/llama.cpp) at commit [`92ecdcc0`](https://github.com/ggerganov/llama.cpp/commit/92ecdcc06a4c405a415bcaa0cb772bc560aa23b1). ## <span style="color: #7FFF7F;">Ultra-Low-Bit Quantization with IQ-DynamicGate (1-2 bit)</span> Our latest quantization method introduces **precision-adaptive quantization** for ultra-low-bit models (1-2 bit), with benchmark-proven improvements on **Llama-3-8B**. This approach uses layer-specific strategies to preserve accuracy while maintaining extreme memory efficiency. ### **Benchmark Context** All tests conducted on **Llama-3-8B-Instruct** using: - Standard perplexity evaluation pipeline - 2048-token context window - Same prompt set across all quantizations ### **Method** - **Dynamic Precision Allocation**: - First/Last 25% of layers โ†’ IQ4_XS (selected layers) - Middle 50% โ†’ IQ2_XXS/IQ3_S (increase efficiency) - **Critical Component Protection**: - Embeddings/output layers use Q5_K - Reduces error propagation by 38% vs standard 1-2bit ### **Quantization Performance Comparison (Llama-3-8B)** | Quantization | Standard PPL | DynamicGate PPL | ฮ” PPL | Std Size | DG Size | ฮ” Size | Std Speed | DG Speed | |--------------|--------------|------------------|---------|----------|---------|--------|-----------|----------| | IQ2_XXS | 11.30 | 9.84 | -12.9% | 2.5G | 2.6G | +0.1G | 234s | 246s | | IQ2_XS | 11.72 | 11.63 | -0.8% | 2.7G | 2.8G | +0.1G | 242s | 246s | | IQ2_S | 14.31 | 9.02 | -36.9% | 2.7G | 2.9G | +0.2G | 238s | 244s | | IQ1_M | 27.46 | 15.41 | -43.9% | 2.2G | 2.5G | +0.3G | 206s | 212s | | IQ1_S | 53.07 | 32.00 | -39.7% | 2.1G | 2.4G | +0.3G | 184s | 209s | **Key**: - PPL = Perplexity (lower is better) - ฮ” PPL = Percentage change from standard to DynamicGate - Speed = Inference time (CPU avx2, 2048 token context) - Size differences reflect mixed quantization overhead **Key Improvements:** - ๐Ÿ”ฅ **IQ1_M** shows massive 43.9% perplexity reduction (27.46 โ†’ 15.41) - ๐Ÿš€ **IQ2_S** cuts perplexity by 36.9% while adding only 0.2GB - โšก **IQ1_S** maintains 39.7% better accuracy despite 1-bit quantization **Tradeoffs:** - All variants have modest size increases (0.1-0.3GB) - Inference speeds remain comparable (<5% difference) ### **When to Use These Models** ๐Ÿ“Œ **Fitting models into GPU VRAM** โœ” **Memory-constrained deployments** โœ” **Cpu and Edge Devices** where 1-2bit errors can be tolerated โœ” **Research** into ultra-low-bit quantization ## **Choosing the Right Model Format** Selecting the correct model format depends on your **hardware capabilities** and **memory constraints**. ### **BF16 (Brain Float 16) โ€“ Use if BF16 acceleration is available** - A 16-bit floating-point format designed for **faster computation** while retaining good precision. - Provides **similar dynamic range** as FP32 but with **lower memory usage**. - Recommended if your hardware supports **BF16 acceleration** (check your device's specs). - Ideal for **high-performance inference** with **reduced memory footprint** compared to FP32. ๐Ÿ“Œ **Use BF16 if:** โœ” Your hardware has native **BF16 support** (e.g., newer GPUs, TPUs). โœ” You want **higher precision** while saving memory. โœ” You plan to **requantize** the model into another format. 
๐Ÿ“Œ **Avoid BF16 if:** โŒ Your hardware does **not** support BF16 (it may fall back to FP32 and run slower). โŒ You need compatibility with older devices that lack BF16 optimization. --- ### **F16 (Float 16) โ€“ More widely supported than BF16** - A 16-bit floating-point **high precision** but with less of range of values than BF16. - Works on most devices with **FP16 acceleration support** (including many GPUs and some CPUs). - Slightly lower numerical precision than BF16 but generally sufficient for inference. ๐Ÿ“Œ **Use F16 if:** โœ” Your hardware supports **FP16** but **not BF16**. โœ” You need a **balance between speed, memory usage, and accuracy**. โœ” You are running on a **GPU** or another device optimized for FP16 computations. ๐Ÿ“Œ **Avoid F16 if:** โŒ Your device lacks **native FP16 support** (it may run slower than expected). โŒ You have memory limitations. --- ### **Quantized Models (Q4_K, Q6_K, Q8, etc.) โ€“ For CPU & Low-VRAM Inference** Quantization reduces model size and memory usage while maintaining as much accuracy as possible. - **Lower-bit models (Q4_K)** โ†’ **Best for minimal memory usage**, may have lower precision. - **Higher-bit models (Q6_K, Q8_0)** โ†’ **Better accuracy**, requires more memory. ๐Ÿ“Œ **Use Quantized Models if:** โœ” You are running inference on a **CPU** and need an optimized model. โœ” Your device has **low VRAM** and cannot load full-precision models. โœ” You want to reduce **memory footprint** while keeping reasonable accuracy. ๐Ÿ“Œ **Avoid Quantized Models if:** โŒ You need **maximum accuracy** (full-precision models are better for this). โŒ Your hardware has enough VRAM for higher-precision formats (BF16/F16). --- ### **Very Low-Bit Quantization (IQ3_XS, IQ3_S, IQ3_M, Q4_K, Q4_0)** These models are optimized for **extreme memory efficiency**, making them ideal for **low-power devices** or **large-scale deployments** where memory is a critical constraint. - **IQ3_XS**: Ultra-low-bit quantization (3-bit) with **extreme memory efficiency**. - **Use case**: Best for **ultra-low-memory devices** where even Q4_K is too large. - **Trade-off**: Lower accuracy compared to higher-bit quantizations. - **IQ3_S**: Small block size for **maximum memory efficiency**. - **Use case**: Best for **low-memory devices** where **IQ3_XS** is too aggressive. - **IQ3_M**: Medium block size for better accuracy than **IQ3_S**. - **Use case**: Suitable for **low-memory devices** where **IQ3_S** is too limiting. - **Q4_K**: 4-bit quantization with **block-wise optimization** for better accuracy. - **Use case**: Best for **low-memory devices** where **Q6_K** is too large. - **Q4_0**: Pure 4-bit quantization, optimized for **ARM devices**. - **Use case**: Best for **ARM-based devices** or **low-memory environments**. 
--- ### **Summary Table: Model Format Selection** | Model Format | Precision | Memory Usage | Device Requirements | Best Use Case | |--------------|------------|---------------|----------------------|---------------| | **BF16** | Highest | High | BF16-supported GPU/CPUs | High-speed inference with reduced memory | | **F16** | High | High | FP16-supported devices | GPU inference when BF16 isn't available | | **Q4_K** | Medium Low | Low | CPU or Low-VRAM devices | Best for memory-constrained environments | | **Q6_K** | Medium | Moderate | CPU with more memory | Better accuracy while still being quantized | | **Q8_0** | High | Moderate | CPU or GPU with enough VRAM | Best accuracy among quantized models | | **IQ3_XS** | Very Low | Very Low | Ultra-low-memory devices | Extreme memory efficiency and low accuracy | | **Q4_0** | Low | Low | ARM or low-memory devices | llama.cpp can optimize for ARM devices | --- ## **Included Files & Details** ### `AM-Thinking-v1-bf16.gguf` - Model weights preserved in **BF16**. - Use this if you want to **requantize** the model into a different format. - Best if your device supports **BF16 acceleration**. ### `AM-Thinking-v1-f16.gguf` - Model weights stored in **F16**. - Use if your device supports **FP16**, especially if BF16 is not available. ### `AM-Thinking-v1-bf16-q8_0.gguf` - **Output & embeddings** remain in **BF16**. - All other layers quantized to **Q8_0**. - Use if your device supports **BF16** and you want a quantized version. ### `AM-Thinking-v1-f16-q8_0.gguf` - **Output & embeddings** remain in **F16**. - All other layers quantized to **Q8_0**. ### `AM-Thinking-v1-q4_k.gguf` - **Output & embeddings** quantized to **Q8_0**. - All other layers quantized to **Q4_K**. - Good for **CPU inference** with limited memory. ### `AM-Thinking-v1-q4_k_s.gguf` - Smallest **Q4_K** variant, using less memory at the cost of accuracy. - Best for **very low-memory setups**. ### `AM-Thinking-v1-q6_k.gguf` - **Output & embeddings** quantized to **Q8_0**. - All other layers quantized to **Q6_K** . ### `AM-Thinking-v1-q8_0.gguf` - Fully **Q8** quantized model for better accuracy. - Requires **more memory** but offers higher precision. ### `AM-Thinking-v1-iq3_xs.gguf` - **IQ3_XS** quantization, optimized for **extreme memory efficiency**. - Best for **ultra-low-memory devices**. ### `AM-Thinking-v1-iq3_m.gguf` - **IQ3_M** quantization, offering a **medium block size** for better accuracy. - Suitable for **low-memory devices**. ### `AM-Thinking-v1-q4_0.gguf` - Pure **Q4_0** quantization, optimized for **ARM devices**. - Best for **low-memory environments**. - Prefer IQ4_NL for better accuracy. 
# <span id="testllm" style="color: #7F7FFF;">๐Ÿš€ If you find these models useful</span> โค **Please click "Like" if you find this useful!** Help me test my **AI-Powered Network Monitor Assistant** with **quantum-ready security checks**: ๐Ÿ‘‰ [Quantum Network Monitor](https://readyforquantum.com/dashboard/?assistant=open&utm_source=huggingface&utm_medium=referral&utm_campaign=huggingface_repo_readme) ๐Ÿ’ฌ **How to test**: Choose an **AI assistant type**: - `TurboLLM` (GPT-4o-mini) - `HugLLM` (Hugginface Open-source) - `TestLLM` (Experimental CPU-only) ### **What Iโ€™m Testing** Iโ€™m pushing the limits of **small open-source models for AI network monitoring**, specifically: - **Function calling** against live network services - **How small can a model go** while still handling: - Automated **Nmap scans** - **Quantum-readiness checks** - **Network Monitoring tasks** ๐ŸŸก **TestLLM** โ€“ Current experimental model (llama.cpp on 2 CPU threads): - โœ… **Zero-configuration setup** - โณ 30s load time (slow inference but **no API costs**) - ๐Ÿ”ง **Help wanted!** If youโ€™re into **edge-device AI**, letโ€™s collaborate! ### **Other Assistants** ๐ŸŸข **TurboLLM** โ€“ Uses **gpt-4o-mini** for: - **Create custom cmd processors to run .net code on Quantum Network Monitor Agents** - **Real-time network diagnostics and monitoring** - **Security Audits** - **Penetration testing** (Nmap/Metasploit) ๐Ÿ”ต **HugLLM** โ€“ Latest Open-source models: - ๐ŸŒ Runs on Hugging Face Inference API ### ๐Ÿ’ก **Example commands to you could test**: 1. `"Give me info on my websites SSL certificate"` 2. `"Check if my server is using quantum safe encyption for communication"` 3. `"Run a comprehensive security audit on my server"` 4. '"Create a cmd processor to .. (what ever you want)" Note you need to install a Quantum Network Monitor Agent to run the .net code from. This is a very flexible and powerful feature. Use with caution! ### Final Word I fund the servers used to create these model files, run the Quantum Network Monitor service, and pay for inference from Novita and OpenAIโ€”all out of my own pocket. All the code behind the model creation and the Quantum Network Monitor project is [open source](https://github.com/Mungert69). Feel free to use whatever you find helpful. If you appreciate the work, please consider [buying me a coffee](https://www.buymeacoffee.com/mahadeva) โ˜•. Your support helps cover service costs and allows me to raise token limits for everyone. I'm also open to job opportunities or sponsorship. Thank you! ๐Ÿ˜Š # AMโ€‘Thinkingโ€‘v1: Advancing the Frontier of Reasoning at 32B Scale * 2025-05-10ย ยทย a-mโ€‘team <p align="center"> ๐Ÿค— <a href="https://huggingface.co/a-m-team">Hugging Face</a>&nbsp&nbsp | &nbsp&nbsp ๐Ÿ“‘ <a href="https://arxiv.org/abs/2505.08311"> Paper</a> &nbsp&nbsp | &nbsp&nbsp ๐Ÿ“‘ <a href="https://a-m-team.github.io/am-thinking-v1/">Blog</a> &nbsp&nbsp </p> ## ๐Ÿš€ Introduction We release **AM-Thinkingโ€‘v1**, a 32B dense language model focused on enhancing reasoning capabilities. Built on Qwenโ€ฏ2.5โ€‘32Bโ€‘Base, AM-Thinkingโ€‘v1 shows strong performance on reasoning benchmarks, comparable to much larger MoE models like **DeepSeekโ€‘R1**, **Qwen3โ€‘235Bโ€‘A22B**, **Seed1.5-Thinking**, and larger dense model like **Nemotron-Ultra-253B-v1**. <div style="text-align: center;"> <img src="assets/benchmark.png" alt="benchmark" style="width: 90%;"> </div> ## ๐Ÿงฉ Why Another 32B Reasoning Model Matters? 
Large Mixtureโ€‘ofโ€‘Experts (MoE) models such as **DeepSeekโ€‘R1** or **Qwen3โ€‘235Bโ€‘A22B** dominate leaderboardsโ€”but they also demand clusters of highโ€‘end GPUs. Many teams just need *the best dense model that fits on a single card*. **AMโ€‘Thinkingโ€‘v1** fills that gap **while remaining fully based on open-source components**: * **Outperforms DeepSeekโ€‘R1** on AIMEโ€™24/โ€™25 & LiveCodeBench and **approaches Qwen3โ€‘235Bโ€‘A22B** despite being 1/7โ€‘th the parameter count. * **Built on the publicly availableโ€ฏQwenโ€ฏ2.5โ€‘32Bโ€‘Base**, as well as the RL training queries. * Shows that with a **wellโ€‘designed postโ€‘training pipeline** ( SFT + dualโ€‘stage RL ) you can squeeze flagshipโ€‘level reasoning out of a 32โ€ฏB dense model. * **Deploys on one A100โ€‘80โ€ฏGB** with deterministic latencyโ€”no MoE routing overhead. <div style="text-align: center;"> <img src="assets/param-aime2024.jpeg" alt="AIME 2024" style="width: 90%; margin-bottom: 20px;"> <img src="assets/param-lcb.jpeg" alt="LiveCodeBench" style="width: 90%;"> <div style="margin-top: 10px;"> <em>AM-Thinking-v1 achieves strong reasoning performance with significantly fewer parameters.</em> </div> </div> ## ๐Ÿ› ๏ธ Use Cases ### 1) Code Generation <pre style="font-family: 'Times New Roman', serif; font-size: 12px; border: 1px solid black; padding: 10px; font-style: italic;"> PROMPT : write a python script for a bouncing red ball within a triangle, make sure to handle collision detection properly. make the triangle slowly rotate. implement it in python. make sure ball stays within the triangle </pre> <div style="text-align: center;"> <img src="assets/ball.gif" alt="Bouncing Red Ball" width="50%"> </div> ### 2) Logic <div style="text-align: center;"> <img src="assets/diamond.png" alt="diamond" width="90%"> </div> ### 3) Writing <div style="text-align: center;"> <img src="assets/writing.png" alt="sushi" width="90%"> </div> ## โšกย Quick start ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "a-m-team/AM-Thinking-v1" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype="auto", device_map="auto" ) prompt = "How can I find inner peace?" messages = [ {"role": "user", "content": prompt} ] text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) model_inputs = tokenizer([text], return_tensors="pt").to(model.device) generated_ids = model.generate( **model_inputs, max_new_tokens=49152 ) output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist() response = tokenizer.decode(output_ids, skip_special_tokens=True) think_content = response.split("<think>")[1].split("</think>")[0] answer_content = response.split("<answer>")[1].split("</answer>")[0] print (f"user prompt: {prompt}") print (f"model thinking: {think_content}") print (f"model answer: {answer_content}") ``` > Note: We have included the system prompt in the tokenizer configuration, as it was used during both the SFT and RL stages. To ensure consistent output quality, we recommend including the same system prompt during actual usage; otherwise, the model's responses may be significantly affected. ### Quantized versions for compact devices A series of quantized versions for [AM-Thinking-v1](https://huggingface.co/a-m-team/AM-Thinking-v1-gguf) model. 
They are intended for use with [llama.cpp](https://github.com/ggml-org/llama.cpp) and [Ollama](https://github.com/ollama/ollama) and are available at [AM-Thinking-v1-gguf](https://huggingface.co/a-m-team/AM-Thinking-v1-gguf). ## 🔧 Post-training pipeline To achieve its strong reasoning ability, AM‑Thinking‑v1 goes through a carefully designed post-training pipeline. Below we describe the key stages involved in turning a base model into a high-performing reasoner: **Step 1 – Cold‑start SFT.** We begin with the open-sourced **Qwen 2.5‑32B‑Base** and run a broad supervised fine‑tune on a blended training dataset of math, code and open‑domain chat. This endows the model with a "think‑then‑answer" behavioural pattern and equips it with an initial capacity for reasoning. **Step 2 – Pass‑rate‑aware data curation.** Before any RL, the SFT model is evaluated on every math‑ and code‑oriented training query. For each item we log a pass rate; only those with **0 < pass‑rate < 1** are kept. In effect we discard problems the model already masters and those it utterly fails, concentrating learning on genuinely informative cases. **Step 3 – Reinforcement learning.** We adopt a two‑stage GRPO scheme: Stage 1 trains only on math and code queries. Once it converges, Stage 2 starts by removing every query the model answered 100% correctly in Stage 1 and adjusting key hyper‑parameters such as maximum generation length and learning rate. ## ⚠️ Limitations While AM‑Thinking‑v1 excels at pure language reasoning and open‑domain chat, it has not yet been trained for structured function‑calling or tool‑use workflows, which restricts its usefulness in agent‑style applications that must act on external systems. Improving the model's ability to follow complex instructions is also an important direction for our future work. In addition, our safety alignment is still at an early stage, so more rigorous red‑teaming is required to reduce potential harms. ## 📚 Citation The a-m-team is an internal team at Beike (Ke.com), dedicated to exploring AGI technology. If you find our work helpful, feel free to cite us. ``` @misc{ji2025amthinkingv1advancingfrontierreasoning, title={AM-Thinking-v1: Advancing the Frontier of Reasoning at 32B Scale}, author={Yunjie Ji and Xiaoyu Tian and Sitong Zhao and Haotian Wang and Shuaiting Chen and Yiping Peng and Han Zhao and Xiangang Li}, year={2025}, eprint={2505.08311}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/2505.08311}, } ```
Mungert/SmolVLM-500M-Instruct-GGUF
Mungert
2025-06-15T19:47:00Z
555
0
transformers
[ "transformers", "gguf", "image-text-to-text", "en", "dataset:HuggingFaceM4/the_cauldron", "dataset:HuggingFaceM4/Docmatix", "arxiv:2504.05299", "base_model:HuggingFaceTB/SmolLM2-360M-Instruct", "base_model:quantized:HuggingFaceTB/SmolLM2-360M-Instruct", "license:apache-2.0", "endpoints_compatibl...
image-text-to-text
2025-05-18T18:36:25Z
--- library_name: transformers license: apache-2.0 datasets: - HuggingFaceM4/the_cauldron - HuggingFaceM4/Docmatix pipeline_tag: image-text-to-text language: - en base_model: - HuggingFaceTB/SmolLM2-360M-Instruct - google/siglip-base-patch16-512 --- # <span style="color: #7FFF7F;">SmolVLM-500M-Instruct GGUF Models</span> ## <span style="color: #7F7FFF;">Model Generation Details</span> This model was generated using [llama.cpp](https://github.com/ggerganov/llama.cpp) at commit [`7f4fbe51`](https://github.com/ggerganov/llama.cpp/commit/7f4fbe5183b23b6b2e25fd1ccc5d1fa8bb010cb7). ## **Choosing the Right Model Format** Selecting the correct model format depends on your **hardware capabilities** and **memory constraints**. ### **BF16 (Brain Float 16) โ€“ Use if BF16 acceleration is available** - A 16-bit floating-point format designed for **faster computation** while retaining good precision. - Provides **similar dynamic range** as FP32 but with **lower memory usage**. - Recommended if your hardware supports **BF16 acceleration** (check your device's specs). - Ideal for **high-performance inference** with **reduced memory footprint** compared to FP32. ๐Ÿ“Œ **Use BF16 if:** โœ” Your hardware has native **BF16 support** (e.g., newer GPUs, TPUs). โœ” You want **higher precision** while saving memory. โœ” You plan to **requantize** the model into another format. ๐Ÿ“Œ **Avoid BF16 if:** โŒ Your hardware does **not** support BF16 (it may fall back to FP32 and run slower). โŒ You need compatibility with older devices that lack BF16 optimization. --- ### **F16 (Float 16) โ€“ More widely supported than BF16** - A 16-bit floating-point **high precision** but with less of range of values than BF16. - Works on most devices with **FP16 acceleration support** (including many GPUs and some CPUs). - Slightly lower numerical precision than BF16 but generally sufficient for inference. ๐Ÿ“Œ **Use F16 if:** โœ” Your hardware supports **FP16** but **not BF16**. โœ” You need a **balance between speed, memory usage, and accuracy**. โœ” You are running on a **GPU** or another device optimized for FP16 computations. ๐Ÿ“Œ **Avoid F16 if:** โŒ Your device lacks **native FP16 support** (it may run slower than expected). โŒ You have memory limitations. --- ### **Hybrid Precision Models (e.g., `bf16_q8_0`, `f16_q4_K`) โ€“ Best of Both Worlds** These formats selectively **quantize non-essential layers** while keeping **key layers in full precision** (e.g., attention and output layers). - Named like `bf16_q8_0` (meaning **full-precision BF16 core layers + quantized Q8_0 other layers**). - Strike a **balance between memory efficiency and accuracy**, improving over fully quantized models without requiring the full memory of BF16/F16. ๐Ÿ“Œ **Use Hybrid Models if:** โœ” You need **better accuracy than quant-only models** but canโ€™t afford full BF16/F16 everywhere. โœ” Your device supports **mixed-precision inference**. โœ” You want to **optimize trade-offs** for production-grade models on constrained hardware. ๐Ÿ“Œ **Avoid Hybrid Models if:** โŒ Your target device doesnโ€™t support **mixed or full-precision acceleration**. โŒ You are operating under **ultra-strict memory limits** (in which case use fully quantized formats). --- ### **Quantized Models (Q4_K, Q6_K, Q8, etc.) โ€“ For CPU & Low-VRAM Inference** Quantization reduces model size and memory usage while maintaining as much accuracy as possible. - **Lower-bit models (Q4_K)** โ†’ **Best for minimal memory usage**, may have lower precision. 
- **Higher-bit models (Q6_K, Q8_0)** โ†’ **Better accuracy**, requires more memory. ๐Ÿ“Œ **Use Quantized Models if:** โœ” You are running inference on a **CPU** and need an optimized model. โœ” Your device has **low VRAM** and cannot load full-precision models. โœ” You want to reduce **memory footprint** while keeping reasonable accuracy. ๐Ÿ“Œ **Avoid Quantized Models if:** โŒ You need **maximum accuracy** (full-precision models are better for this). โŒ Your hardware has enough VRAM for higher-precision formats (BF16/F16). --- ### **Very Low-Bit Quantization (IQ3_XS, IQ3_S, IQ3_M, Q4_K, Q4_0)** These models are optimized for **very high memory efficiency**, making them ideal for **low-power devices** or **large-scale deployments** where memory is a critical constraint. - **IQ3_XS**: Ultra-low-bit quantization (3-bit) with **very high memory efficiency**. - **Use case**: Best for **ultra-low-memory devices** where even Q4_K is too large. - **Trade-off**: Lower accuracy compared to higher-bit quantizations. - **IQ3_S**: Small block size for **maximum memory efficiency**. - **Use case**: Best for **low-memory devices** where **IQ3_XS** is too aggressive. - **IQ3_M**: Medium block size for better accuracy than **IQ3_S**. - **Use case**: Suitable for **low-memory devices** where **IQ3_S** is too limiting. - **Q4_K**: 4-bit quantization with **block-wise optimization** for better accuracy. - **Use case**: Best for **low-memory devices** where **Q6_K** is too large. - **Q4_0**: Pure 4-bit quantization, optimized for **ARM devices**. - **Use case**: Best for **ARM-based devices** or **low-memory environments**. ### **Ultra Low-Bit Quantization (IQ1_S IQ1_M IQ2_S IQ2_M IQ2_XS IQ2_XSS)** - *Ultra-low-bit quantization (1 2-bit) with **extreme memory efficiency**. - **Use case**: Best for cases were you have to fit the model into very constrained memory - **Trade-off**: Very Low Accuracy. May not function as expected. Please test fully before using. 
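Because the choice between these low- and ultra-low-bit files is driven almost entirely by how much memory you have free, a small, hypothetical helper such as the one below (not part of the original card) can be useful before downloading; it only assumes `psutil` and, optionally, PyTorch are installed:

```python
import psutil

# Report free system RAM (and free GPU VRAM if PyTorch with CUDA is present), in GiB,
# to help decide between BF16/F16, hybrid, Q4/Q6/Q8, and the IQ1-IQ3 files.
ram_free = psutil.virtual_memory().available / 2**30
print(f"Free system RAM: {ram_free:.1f} GiB")

try:
    import torch
    if torch.cuda.is_available():
        free_vram, total_vram = torch.cuda.mem_get_info()
        print(f"Free GPU VRAM: {free_vram / 2**30:.1f} / {total_vram / 2**30:.1f} GiB")
except ImportError:
    pass  # no PyTorch installed; a CPU-oriented quantized format is the usual fallback
```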
--- ### **Summary Table: Model Format Selection** | Model Format | Precision | Memory Usage | Device Requirements | Best Use Case | |--------------------------|------------------|------------------|----------------------------------|--------------------------------------------------------------| | **BF16** | Very High | High | BF16-supported GPU/CPU | High-speed inference with reduced memory | | **F16** | High | High | FP16-supported GPU/CPU | Inference when BF16 isnโ€™t available | | **Q4_K** | Medium-Low | Low | CPU or Low-VRAM devices | Memory-constrained inference | | **Q6_K** | Medium | Moderate | CPU with more memory | Better accuracy with quantization | | **Q8_0** | High | Moderate | GPU/CPU with moderate VRAM | Highest accuracy among quantized models | | **IQ3_XS** | Low | Very Low | Ultra-low-memory devices | Max memory efficiency, low accuracy | | **IQ3_S** | Low | Very Low | Low-memory devices | Slightly more usable than IQ3_XS | | **IQ3_M** | Low-Medium | Low | Low-memory devices | Better accuracy than IQ3_S | | **Q4_0** | Low | Low | ARM-based/embedded devices | Llama.cpp automatically optimizes for ARM inference | | **Ultra Low-Bit (IQ1/2_*)** | Very Low | Extremely Low | Tiny edge/embedded devices | Fit models in extremely tight memory; low accuracy | | **Hybrid (e.g., `bf16_q8_0`)** | Mediumโ€“High | Medium | Mixed-precision capable hardware | Balanced performance and memory, near-FP accuracy in critical layers | --- <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/SmolVLM_256_banner.png" width="800" height="auto" alt="Image description"> # SmolVLM-500M SmolVLM-500M is a tiny multimodal model, member of the SmolVLM family. It accepts arbitrary sequences of image and text inputs to produce text outputs. It's designed for efficiency. SmolVLM can answer questions about images, describe visual content, or transcribe text. Its lightweight architecture makes it suitable for on-device applications while maintaining strong performance on multimodal tasks. It can run inference on one image with 1.23GB of GPU RAM. ## Model Summary - **Developed by:** Hugging Face ๐Ÿค— - **Model type:** Multi-modal model (image+text) - **Language(s) (NLP):** English - **License:** Apache 2.0 - **Architecture:** Based on [Idefics3](https://huggingface.co/HuggingFaceM4/Idefics3-8B-Llama3) (see technical summary) ## Resources - **Demo:** [SmolVLM-256 Demo](https://huggingface.co/spaces/HuggingFaceTB/SmolVLM-256M-Demo) - **Blog:** [Blog post](https://huggingface.co/blog/smolvlm) ## Uses SmolVLM can be used for inference on multimodal (image + text) tasks where the input comprises text queries along with one or more images. Text and images can be interleaved arbitrarily, enabling tasks like image captioning, visual question answering, and storytelling based on visual content. The model does not support image generation. To fine-tune SmolVLM on a specific task, you can follow [the fine-tuning tutorial](https://github.com/huggingface/smollm/blob/main/vision/finetuning/Smol_VLM_FT.ipynb). ## Evaluation <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/smoller_vlm_benchmarks.png" alt="Benchmarks" style="width:90%;" /> ### Technical Summary SmolVLM leverages the lightweight SmolLM2 language model to provide a compact yet powerful multimodal experience. 
It introduces several changes compared to the larger SmolVLM 2.2B model: - **Image compression:** We introduce a more radical image compression compared to Idefics3 and SmolVLM-2.2B to enable the model to infer faster and use less RAM. - **Visual Token Encoding:** SmolVLM-256 uses 64 visual tokens to encode image patches of size 512ร—512. Larger images are divided into patches, each encoded separately, enhancing efficiency without compromising performance. - **New special tokens:** We added new special tokens to divide the subimages. This allows for more efficient tokenization of the images. - **Smoller vision encoder:** We went from a 400M parameter siglip vision encoder to a much smaller 93M encoder. - **Larger image patches:** We are now passing patches of 512x512 to the vision encoder, instead of 384x384 like the larger SmolVLM. This allows the information to be encoded more efficiently. More details about the training and architecture are available in our technical report. ### How to get started You can use transformers to load, infer and fine-tune SmolVLM. ```python import torch from PIL import Image from transformers import AutoProcessor, AutoModelForVision2Seq from transformers.image_utils import load_image DEVICE = "cuda" if torch.cuda.is_available() else "cpu" # Load images image = load_image("https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg") # Initialize processor and model processor = AutoProcessor.from_pretrained("HuggingFaceTB/SmolVLM-500M-Instruct") model = AutoModelForVision2Seq.from_pretrained( "HuggingFaceTB/SmolVLM-500M-Instruct", torch_dtype=torch.bfloat16, _attn_implementation="flash_attention_2" if DEVICE == "cuda" else "eager", ).to(DEVICE) # Create input messages messages = [ { "role": "user", "content": [ {"type": "image"}, {"type": "text", "text": "Can you describe this image?"} ] }, ] # Prepare inputs prompt = processor.apply_chat_template(messages, add_generation_prompt=True) inputs = processor(text=prompt, images=[image], return_tensors="pt") inputs = inputs.to(DEVICE) # Generate outputs generated_ids = model.generate(**inputs, max_new_tokens=500) generated_texts = processor.batch_decode( generated_ids, skip_special_tokens=True, ) print(generated_texts[0]) """ Assistant: The image depicts a cityscape featuring a prominent landmark, the Statue of Liberty, prominently positioned on Liberty Island. The statue is a green, humanoid figure with a crown atop its head and is situated on a small island surrounded by water. The statue is characterized by its large, detailed structure, with a statue of a woman holding a torch above her head and a tablet in her left hand. The statue is surrounded by a small, rocky island, which is partially visible in the foreground. In the background, the cityscape is dominated by numerous high-rise buildings, which are densely packed and vary in height. The buildings are primarily made of glass and steel, reflecting the sunlight and creating a bright, urban skyline. The skyline is filled with various architectural styles, including modern skyscrapers and older, more traditional buildings. The water surrounding the island is calm, with a few small boats visible, indicating that the area is likely a popular tourist destination. The water is a deep blue, suggesting that it is a large body of water, possibly a river or a large lake. In the foreground, there is a small strip of land with trees and grass, which adds a touch of natural beauty to the urban landscape. 
The trees are green, indicating that it is likely spring or summer. The image captures a moment of tranquility and reflection, as the statue and the cityscape come together to create a harmonious and picturesque scene. The statue's presence in the foreground draws attention to the city's grandeur, while the calm water and natural elements in the background provide a sense of peace and serenity. In summary, the image showcases the Statue of Liberty, a symbol of freedom and democracy, set against a backdrop of a bustling cityscape. The statue is a prominent and iconic representation of human achievement, while the cityscape is a testament to human ingenuity and progress. The image captures the beauty and complexity of urban life, with the statue serving as a symbol of hope and freedom, while the cityscape provides a glimpse into the modern world. """ ``` ### Model optimizations **Precision**: For better performance, load and run the model in half-precision (`torch.bfloat16`) if your hardware supports it. ```python from transformers import AutoModelForVision2Seq import torch model = AutoModelForVision2Seq.from_pretrained( "HuggingFaceTB/SmolVLM-Instruct", torch_dtype=torch.bfloat16 ).to("cuda") ``` You can also load SmolVLM with 4/8-bit quantization using bitsandbytes, torchao or Quanto. Refer to [this page](https://huggingface.co/docs/transformers/en/main_classes/quantization) for other options. ```python from transformers import AutoModelForVision2Seq, BitsAndBytesConfig import torch quantization_config = BitsAndBytesConfig(load_in_8bit=True) model = AutoModelForVision2Seq.from_pretrained( "HuggingFaceTB/SmolVLM-Instruct", quantization_config=quantization_config, ) ``` **Vision Encoder Efficiency**: Adjust the image resolution by setting `size={"longest_edge": N*512}` when initializing the processor, where N is your desired value. The default `N=4` works well, which results in input images of size 2048ร—2048. Decreasing N can save GPU memory and is appropriate for lower-resolution images. This is also useful if you want to fine-tune on videos. ## Misuse and Out-of-scope Use SmolVLM is not intended for high-stakes scenarios or critical decision-making processes that affect an individual's well-being or livelihood. The model may produce content that appears factual but may not be accurate. Misuse includes, but is not limited to: - Prohibited Uses: - Evaluating or scoring individuals (e.g., in employment, education, credit) - Critical automated decision-making - Generating unreliable factual content - Malicious Activities: - Spam generation - Disinformation campaigns - Harassment or abuse - Unauthorized surveillance ### License SmolVLM is built upon [SigLIP](https://huggingface.co/google/siglip-base-patch16-512) as image encoder and [SmolLM2](https://huggingface.co/HuggingFaceTB/SmolLM2-360M-Instruct) for text decoder part. We release the SmolVLM checkpoints under the Apache 2.0 license. ## Training Details ### Training Data The training data comes from [The Cauldron](https://huggingface.co/datasets/HuggingFaceM4/the_cauldron) and [Docmatix](https://huggingface.co/datasets/HuggingFaceM4/Docmatix) datasets, with emphasis on document understanding (25%) and image captioning (18%), while maintaining balanced coverage across other crucial capabilities like visual reasoning, chart comprehension, and general instruction following. 
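To make the Vision Encoder Efficiency note above concrete, here is a short, hedged sketch of lowering the input image resolution through the processor. The `N=2` value is only an illustrative choice; everything else mirrors the usage example earlier in this card:

```python
from transformers import AutoProcessor

# Lower the longest image edge from the default 4*512 (2048 px) to 2*512 (1024 px)
# to save GPU memory on lower-resolution images or video frames.
processor = AutoProcessor.from_pretrained(
    "HuggingFaceTB/SmolVLM-500M-Instruct",
    size={"longest_edge": 2 * 512},
)
```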
<img src="https://huggingface.co/HuggingFaceTB/SmolVLM-Instruct/resolve/main/mixture_the_cauldron.png" alt="Example Image" style="width:90%;" /> # Citation information You can cite us in the following way: ```bibtex @article{marafioti2025smolvlm, title={SmolVLM: Redefining small and efficient multimodal models}, author={Andrรฉs Marafioti and Orr Zohar and Miquel Farrรฉ and Merve Noyan and Elie Bakouch and Pedro Cuenca and Cyril Zakka and Loubna Ben Allal and Anton Lozhkov and Nouamane Tazi and Vaibhav Srivastav and Joshua Lochner and Hugo Larcher and Mathieu Morlon and Lewis Tunstall and Leandro von Werra and Thomas Wolf}, journal={arXiv preprint arXiv:2504.05299}, year={2025} } ``` # <span id="testllm" style="color: #7F7FFF;">๐Ÿš€ If you find these models useful</span> Help me test my **AI-Powered Quantum Network Monitor Assistant** with **quantum-ready security checks**: ๐Ÿ‘‰ [Quantum Network Monitor](https://readyforquantum.com/?assistant=open&utm_source=huggingface&utm_medium=referral&utm_campaign=huggingface_repo_readme) The full Open Source Code for the Quantum Network Monitor Service available at my github repos ( repos with NetworkMonitor in the name) : [Source Code Quantum Network Monitor](https://github.com/Mungert69). You will also find the code I use to quantize the models if you want to do it yourself [GGUFModelBuilder](https://github.com/Mungert69/GGUFModelBuilder) ๐Ÿ’ฌ **How to test**: Choose an **AI assistant type**: - `TurboLLM` (GPT-4.1-mini) - `HugLLM` (Hugginface Open-source models) - `TestLLM` (Experimental CPU-only) ### **What Iโ€™m Testing** Iโ€™m pushing the limits of **small open-source models for AI network monitoring**, specifically: - **Function calling** against live network services - **How small can a model go** while still handling: - Automated **Nmap security scans** - **Quantum-readiness checks** - **Network Monitoring tasks** ๐ŸŸก **TestLLM** โ€“ Current experimental model (llama.cpp on 2 CPU threads on huggingface docker space): - โœ… **Zero-configuration setup** - โณ 30s load time (slow inference but **no API costs**) . No token limited as the cost is low. - ๐Ÿ”ง **Help wanted!** If youโ€™re into **edge-device AI**, letโ€™s collaborate! ### **Other Assistants** ๐ŸŸข **TurboLLM** โ€“ Uses **gpt-4.1-mini** : - **It performs very well but unfortunatly OpenAI charges per token. For this reason tokens usage is limited. - **Create custom cmd processors to run .net code on Quantum Network Monitor Agents** - **Real-time network diagnostics and monitoring** - **Security Audits** - **Penetration testing** (Nmap/Metasploit) ๐Ÿ”ต **HugLLM** โ€“ Latest Open-source models: - ๐ŸŒ Runs on Hugging Face Inference API. Performs pretty well using the lastest models hosted on Novita. ### ๐Ÿ’ก **Example commands you could test**: 1. `"Give me info on my websites SSL certificate"` 2. `"Check if my server is using quantum safe encyption for communication"` 3. `"Run a comprehensive security audit on my server"` 4. '"Create a cmd processor to .. (what ever you want)" Note you need to install a Quantum Network Monitor Agent to run the .net code from. This is a very flexible and powerful feature. Use with caution! ### Final Word I fund the servers used to create these model files, run the Quantum Network Monitor service, and pay for inference from Novita and OpenAIโ€”all out of my own pocket. All the code behind the model creation and the Quantum Network Monitor project is [open source](https://github.com/Mungert69). Feel free to use whatever you find helpful. 
If you appreciate the work, please consider [buying me a coffee](https://www.buymeacoffee.com/mahadeva) โ˜•. Your support helps cover service costs and allows me to raise token limits for everyone. I'm also open to job opportunities or sponsorship. Thank you! ๐Ÿ˜Š
Mungert/UIGEN-T2-7B-GGUF
Mungert
2025-06-15T19:46:12Z
326
0
transformers
[ "transformers", "gguf", "text-generation-inference", "qwen2", "ui-generation", "peft", "lora", "tailwind-css", "html", "en", "base_model:Qwen/Qwen2.5-Coder-7B-Instruct", "base_model:adapter:Qwen/Qwen2.5-Coder-7B-Instruct", "license:apache-2.0", "endpoints_compatible", "region:us", "ima...
null
2025-05-10T02:07:27Z
--- base_model: Qwen/Qwen2.5-Coder-7B-Instruct tags: - text-generation-inference - transformers - qwen2 - ui-generation - peft - lora - tailwind-css - html license: apache-2.0 language: - en --- # <span style="color: #7FFF7F;">UIGEN-T2-7B GGUF Models</span> ## <span style="color: #7F7FFF;">Model Generation Details</span> This model was generated using [llama.cpp](https://github.com/ggerganov/llama.cpp) at commit [`8c83449`](https://github.com/ggerganov/llama.cpp/commit/8c83449cb780c201839653812681c3a4cf17feed). ## <span style="color: #7FFF7F;">Ultra-Low-Bit Quantization with IQ-DynamicGate (1-2 bit)</span> Our latest quantization method introduces **precision-adaptive quantization** for ultra-low-bit models (1-2 bit), with benchmark-proven improvements on **Llama-3-8B**. This approach uses layer-specific strategies to preserve accuracy while maintaining extreme memory efficiency. ### **Benchmark Context** All tests conducted on **Llama-3-8B-Instruct** using: - Standard perplexity evaluation pipeline - 2048-token context window - Same prompt set across all quantizations ### **Method** - **Dynamic Precision Allocation**: - First/Last 25% of layers โ†’ IQ4_XS (selected layers) - Middle 50% โ†’ IQ2_XXS/IQ3_S (increase efficiency) - **Critical Component Protection**: - Embeddings/output layers use Q5_K - Reduces error propagation by 38% vs standard 1-2bit ### **Quantization Performance Comparison (Llama-3-8B)** | Quantization | Standard PPL | DynamicGate PPL | ฮ” PPL | Std Size | DG Size | ฮ” Size | Std Speed | DG Speed | |--------------|--------------|------------------|---------|----------|---------|--------|-----------|----------| | IQ2_XXS | 11.30 | 9.84 | -12.9% | 2.5G | 2.6G | +0.1G | 234s | 246s | | IQ2_XS | 11.72 | 11.63 | -0.8% | 2.7G | 2.8G | +0.1G | 242s | 246s | | IQ2_S | 14.31 | 9.02 | -36.9% | 2.7G | 2.9G | +0.2G | 238s | 244s | | IQ1_M | 27.46 | 15.41 | -43.9% | 2.2G | 2.5G | +0.3G | 206s | 212s | | IQ1_S | 53.07 | 32.00 | -39.7% | 2.1G | 2.4G | +0.3G | 184s | 209s | **Key**: - PPL = Perplexity (lower is better) - ฮ” PPL = Percentage change from standard to DynamicGate - Speed = Inference time (CPU avx2, 2048 token context) - Size differences reflect mixed quantization overhead **Key Improvements:** - ๐Ÿ”ฅ **IQ1_M** shows massive 43.9% perplexity reduction (27.46 โ†’ 15.41) - ๐Ÿš€ **IQ2_S** cuts perplexity by 36.9% while adding only 0.2GB - โšก **IQ1_S** maintains 39.7% better accuracy despite 1-bit quantization **Tradeoffs:** - All variants have modest size increases (0.1-0.3GB) - Inference speeds remain comparable (<5% difference) ### **When to Use These Models** ๐Ÿ“Œ **Fitting models into GPU VRAM** โœ” **Memory-constrained deployments** โœ” **Cpu and Edge Devices** where 1-2bit errors can be tolerated โœ” **Research** into ultra-low-bit quantization ## **Choosing the Right Model Format** Selecting the correct model format depends on your **hardware capabilities** and **memory constraints**. ### **BF16 (Brain Float 16) โ€“ Use if BF16 acceleration is available** - A 16-bit floating-point format designed for **faster computation** while retaining good precision. - Provides **similar dynamic range** as FP32 but with **lower memory usage**. - Recommended if your hardware supports **BF16 acceleration** (check your device's specs). - Ideal for **high-performance inference** with **reduced memory footprint** compared to FP32. ๐Ÿ“Œ **Use BF16 if:** โœ” Your hardware has native **BF16 support** (e.g., newer GPUs, TPUs). โœ” You want **higher precision** while saving memory. 
โœ” You plan to **requantize** the model into another format. ๐Ÿ“Œ **Avoid BF16 if:** โŒ Your hardware does **not** support BF16 (it may fall back to FP32 and run slower). โŒ You need compatibility with older devices that lack BF16 optimization. --- ### **F16 (Float 16) โ€“ More widely supported than BF16** - A 16-bit floating-point **high precision** but with less of range of values than BF16. - Works on most devices with **FP16 acceleration support** (including many GPUs and some CPUs). - Slightly lower numerical precision than BF16 but generally sufficient for inference. ๐Ÿ“Œ **Use F16 if:** โœ” Your hardware supports **FP16** but **not BF16**. โœ” You need a **balance between speed, memory usage, and accuracy**. โœ” You are running on a **GPU** or another device optimized for FP16 computations. ๐Ÿ“Œ **Avoid F16 if:** โŒ Your device lacks **native FP16 support** (it may run slower than expected). โŒ You have memory limitations. --- ### **Quantized Models (Q4_K, Q6_K, Q8, etc.) โ€“ For CPU & Low-VRAM Inference** Quantization reduces model size and memory usage while maintaining as much accuracy as possible. - **Lower-bit models (Q4_K)** โ†’ **Best for minimal memory usage**, may have lower precision. - **Higher-bit models (Q6_K, Q8_0)** โ†’ **Better accuracy**, requires more memory. ๐Ÿ“Œ **Use Quantized Models if:** โœ” You are running inference on a **CPU** and need an optimized model. โœ” Your device has **low VRAM** and cannot load full-precision models. โœ” You want to reduce **memory footprint** while keeping reasonable accuracy. ๐Ÿ“Œ **Avoid Quantized Models if:** โŒ You need **maximum accuracy** (full-precision models are better for this). โŒ Your hardware has enough VRAM for higher-precision formats (BF16/F16). --- ### **Very Low-Bit Quantization (IQ3_XS, IQ3_S, IQ3_M, Q4_K, Q4_0)** These models are optimized for **extreme memory efficiency**, making them ideal for **low-power devices** or **large-scale deployments** where memory is a critical constraint. - **IQ3_XS**: Ultra-low-bit quantization (3-bit) with **extreme memory efficiency**. - **Use case**: Best for **ultra-low-memory devices** where even Q4_K is too large. - **Trade-off**: Lower accuracy compared to higher-bit quantizations. - **IQ3_S**: Small block size for **maximum memory efficiency**. - **Use case**: Best for **low-memory devices** where **IQ3_XS** is too aggressive. - **IQ3_M**: Medium block size for better accuracy than **IQ3_S**. - **Use case**: Suitable for **low-memory devices** where **IQ3_S** is too limiting. - **Q4_K**: 4-bit quantization with **block-wise optimization** for better accuracy. - **Use case**: Best for **low-memory devices** where **Q6_K** is too large. - **Q4_0**: Pure 4-bit quantization, optimized for **ARM devices**. - **Use case**: Best for **ARM-based devices** or **low-memory environments**. 
--- ### **Summary Table: Model Format Selection** | Model Format | Precision | Memory Usage | Device Requirements | Best Use Case | |--------------|------------|---------------|----------------------|---------------| | **BF16** | Highest | High | BF16-supported GPU/CPUs | High-speed inference with reduced memory | | **F16** | High | High | FP16-supported devices | GPU inference when BF16 isn't available | | **Q4_K** | Medium Low | Low | CPU or Low-VRAM devices | Best for memory-constrained environments | | **Q6_K** | Medium | Moderate | CPU with more memory | Better accuracy while still being quantized | | **Q8_0** | High | Moderate | CPU or GPU with enough VRAM | Best accuracy among quantized models | | **IQ3_XS** | Very Low | Very Low | Ultra-low-memory devices | Extreme memory efficiency and low accuracy | | **Q4_0** | Low | Low | ARM or low-memory devices | llama.cpp can optimize for ARM devices | --- ## **Included Files & Details** ### `UIGEN-T2-7B-bf16.gguf` - Model weights preserved in **BF16**. - Use this if you want to **requantize** the model into a different format. - Best if your device supports **BF16 acceleration**. ### `UIGEN-T2-7B-f16.gguf` - Model weights stored in **F16**. - Use if your device supports **FP16**, especially if BF16 is not available. ### `UIGEN-T2-7B-bf16-q8_0.gguf` - **Output & embeddings** remain in **BF16**. - All other layers quantized to **Q8_0**. - Use if your device supports **BF16** and you want a quantized version. ### `UIGEN-T2-7B-f16-q8_0.gguf` - **Output & embeddings** remain in **F16**. - All other layers quantized to **Q8_0**. ### `UIGEN-T2-7B-q4_k.gguf` - **Output & embeddings** quantized to **Q8_0**. - All other layers quantized to **Q4_K**. - Good for **CPU inference** with limited memory. ### `UIGEN-T2-7B-q4_k_s.gguf` - Smallest **Q4_K** variant, using less memory at the cost of accuracy. - Best for **very low-memory setups**. ### `UIGEN-T2-7B-q6_k.gguf` - **Output & embeddings** quantized to **Q8_0**. - All other layers quantized to **Q6_K** . ### `UIGEN-T2-7B-q8_0.gguf` - Fully **Q8** quantized model for better accuracy. - Requires **more memory** but offers higher precision. ### `UIGEN-T2-7B-iq3_xs.gguf` - **IQ3_XS** quantization, optimized for **extreme memory efficiency**. - Best for **ultra-low-memory devices**. ### `UIGEN-T2-7B-iq3_m.gguf` - **IQ3_M** quantization, offering a **medium block size** for better accuracy. - Suitable for **low-memory devices**. ### `UIGEN-T2-7B-q4_0.gguf` - Pure **Q4_0** quantization, optimized for **ARM devices**. - Best for **low-memory environments**. - Prefer IQ4_NL for better accuracy. 
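For the GGUF files above, a minimal, hypothetical `llama-cpp-python` invocation might look like the following. The system prompt and sampling values (temperature 0.7, top-p 0.9) are the ones recommended in the "How to Use" section later in this card; the file path, context size, and token budget are placeholders:

```python
from llama_cpp import Llama

llm = Llama(model_path="UIGEN-T2-7B-q4_k.gguf", n_ctx=8192)  # assumed local path

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are Tesslate, a helpful assistant specialized in UI generation."},
        {"role": "user", "content": "Create a simple card component using Tailwind CSS with an image, title, and description."},
    ],
    temperature=0.7,
    top_p=0.9,
    max_tokens=1024,
)
print(out["choices"][0]["message"]["content"])
```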
# <span id="testllm" style="color: #7F7FFF;">๐Ÿš€ If you find these models useful</span> โค **Please click "Like" if you find this useful!** Help me test my **AI-Powered Network Monitor Assistant** with **quantum-ready security checks**: ๐Ÿ‘‰ [Quantum Network Monitor](https://readyforquantum.com/dashboard/?assistant=open&utm_source=huggingface&utm_medium=referral&utm_campaign=huggingface_repo_readme) ๐Ÿ’ฌ **How to test**: Choose an **AI assistant type**: - `TurboLLM` (GPT-4o-mini) - `HugLLM` (Hugginface Open-source) - `TestLLM` (Experimental CPU-only) ### **What Iโ€™m Testing** Iโ€™m pushing the limits of **small open-source models for AI network monitoring**, specifically: - **Function calling** against live network services - **How small can a model go** while still handling: - Automated **Nmap scans** - **Quantum-readiness checks** - **Network Monitoring tasks** ๐ŸŸก **TestLLM** โ€“ Current experimental model (llama.cpp on 2 CPU threads): - โœ… **Zero-configuration setup** - โณ 30s load time (slow inference but **no API costs**) - ๐Ÿ”ง **Help wanted!** If youโ€™re into **edge-device AI**, letโ€™s collaborate! ### **Other Assistants** ๐ŸŸข **TurboLLM** โ€“ Uses **gpt-4o-mini** for: - **Create custom cmd processors to run .net code on Quantum Network Monitor Agents** - **Real-time network diagnostics and monitoring** - **Security Audits** - **Penetration testing** (Nmap/Metasploit) ๐Ÿ”ต **HugLLM** โ€“ Latest Open-source models: - ๐ŸŒ Runs on Hugging Face Inference API ### ๐Ÿ’ก **Example commands to you could test**: 1. `"Give me info on my websites SSL certificate"` 2. `"Check if my server is using quantum safe encyption for communication"` 3. `"Run a comprehensive security audit on my server"` 4. '"Create a cmd processor to .. (what ever you want)" Note you need to install a Quantum Network Monitor Agent to run the .net code from. This is a very flexible and powerful feature. Use with caution! ### Final Word I fund the servers used to create these model files, run the Quantum Network Monitor service, and pay for inference from Novita and OpenAIโ€”all out of my own pocket. All the code behind the model creation and the Quantum Network Monitor project is [open source](https://github.com/Mungert69). Feel free to use whatever you find helpful. If you appreciate the work, please consider [buying me a coffee](https://www.buymeacoffee.com/mahadeva) โ˜•. Your support helps cover service costs and allows me to raise token limits for everyone. I'm also open to job opportunities or sponsorship. Thank you! ๐Ÿ˜Š # Model Card for UIGEN-T2-7B <!-- Provide a quick summary of what the model is/does. --> ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64d1129297ca59bcf7458d07/3zP7VsfnqhPS7HgJjDvjl.png) [OUR Training Article](https://cypress-dichondra-4b5.notion.site/UIGEN-T2-Training-1e393ce17c258024abfcff24dae7bedd) [Testing Github for Artifacts](https://github.com/TesslateAI/UIGEN-T2-Artifacts) ## **Model Overview** We're excited to introduce **UIGEN-T2**, the next evolution in our UI generation model series. Fine-tuned from the highly capable **Qwen2.5-Coder-7B-Instruct** base model using PEFT/LoRA, UIGEN-T2 is specifically designed to generate **HTML and Tailwind CSS** code for web interfaces. What sets UIGEN-T2 apart is its training on a massive **50,000 sample dataset** (up from 400) and its unique **UI-based reasoning capability**, allowing it to generate not just code, but code informed by thoughtful design principles. 
--- ## **Model Highlights** - **High-Quality UI Code Generation**: Produces functional and semantic HTML combined with utility-first Tailwind CSS. - **Massive Training Dataset**: Trained on 50,000 diverse UI examples, enabling broader component understanding and stylistic range. - **Innovative UI-Based Reasoning**: Incorporates detailed reasoning traces generated by a specialized "teacher" model, ensuring outputs consider usability, layout, and aesthetics. (*See example reasoning in description below*) - **PEFT/LoRA Trained (Rank 128)**: Efficiently fine-tuned for UI generation. We've published LoRA checkpoints at each training step for transparency and community use! - **Improved Chat Interaction**: Streamlined prompt flow โ€“ no more need for the awkward double `think` prompt! Interaction feels more natural. --- ## **Example Reasoning (Internal Guide for Generation)** Here's a glimpse into the kind of reasoning that guides UIGEN-T2 internally, generated by our specialized teacher model: ```plaintext <|begin_of_thought|> When approaching the challenge of crafting an elegant stopwatch UI, my first instinct is to dissect what truly makes such an interface delightful yet functionalโ€”hence, I consider both aesthetic appeal and usability grounded in established heuristics like Nielsenโ€™s โ€œaesthetic and minimalist designโ€ alongside Gestalt principles... placing the large digital clock prominently aligns with Fittsโ€™ Law... The glassmorphism effect here enhances visual separation... typography choicesโ€”the use of a monospace font family ("Fira Code" via Google Fonts) supports readability... iconography paired with labels inside buttons provides dual coding... Tailwind CSS v4 enables utility-driven consistency... critical reflection concerns responsiveness: flexbox layouts combined with relative sizing guarantee graceful adaptation... <|end_of_thought|> ``` --- ## **Example Outputs** ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64d1129297ca59bcf7458d07/ALTiUnT5-uUuDEtf4FfbQ.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64d1129297ca59bcf7458d07/veGwINF56SYIO_rVNSGuM.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64d1129297ca59bcf7458d07/j8QiAlHnLL2rRFQUwSlDe.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64d1129297ca59bcf7458d07/oK1y4ZyMh2OKXOmy1pCzc.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64d1129297ca59bcf7458d07/ycRiJgS-c5bIrgT0EZkGw.png) --- ## **Use Cases** ### **Recommended Uses** - **Rapid UI Prototyping**: Quickly generate HTML/Tailwind code snippets from descriptions or wireframes. - **Component Generation**: Create standard and custom UI components (buttons, cards, forms, layouts). - **Frontend Development Assistance**: Accelerate development by generating baseline component structures. - **Design-to-Code Exploration**: Bridge the gap between design concepts and initial code implementation. ### **Limitations** - **Current Framework Focus**: Primarily generates HTML and Tailwind CSS. (Bootstrap support is planned!). - **Complex JavaScript Logic**: Focuses on structure and styling; dynamic behavior and complex state management typically require manual implementation. - **Highly Specific Design Systems**: May need further fine-tuning for strict adherence to unique, complex corporate design systems. --- ## **How to Use** You have to use this system prompt: ``` You are Tesslate, a helpful assistant specialized in UI generation. 
``` These are the reccomended parameters: 0.7 Temp, Top P 0.9. ### **Inference Example** ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch # Make sure you have PEFT installed: pip install peft from peft import PeftModel # Use your specific model name/path once uploaded model_name_or_path = "tesslate/UIGEN-T2" # Placeholder - replace with actual HF repo name base_model_name = "Qwen/Qwen2.5-Coder-7B-Instruct" # Load the base model base_model = AutoModelForCausalLM.from_pretrained( base_model_name, torch_dtype=torch.bfloat16, # or float16 if bf16 not supported device_map="auto" ) # Load the PEFT model (LoRA weights) model = PeftModel.from_pretrained(base_model, model_name_or_path) tokenizer = AutoTokenizer.from_pretrained(base_model_name) # Use base tokenizer # Note the simplified prompt structure (no double 'think') prompt = """<|im_start|>user Create a simple card component using Tailwind CSS with an image, title, and description.<|im_end|> <|im_start|>assistant """ # Model will generate reasoning and code following this inputs = tokenizer(prompt, return_tensors="pt").to(model.device) # Adjust generation parameters as needed outputs = model.generate(**inputs, max_new_tokens=1024, do_sample=True, temperature=0.6, top_p=0.9) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` --- ## **Performance and Evaluation** - **Strengths**: - Generates semantically correct and well-structured HTML/Tailwind CSS. - Leverages a large dataset (50k samples) for improved robustness and diversity. - Incorporates design reasoning for more thoughtful UI outputs. - Improved usability via streamlined chat template. - Openly published LoRA checkpoints for community use. - **Weaknesses**: - Currently limited to HTML/Tailwind CSS (Bootstrap planned). - Complex JavaScript interactivity requires manual implementation. - Reinforcement Learning refinement (for stricter adherence to principles/rewards) is a future step. --- ## **Technical Specifications** - **Architecture**: Transformer-based LLM adapted with PEFT/LoRA - **Base Model**: Qwen/Qwen2.5-Coder-7B-Instruct - **Adapter Rank (LoRA)**: 128 - **Training Data Size**: 50,000 samples - **Precision**: Trained using bf16/fp16. Base model requires appropriate precision handling. - **Hardware Requirements**: Recommend GPU with >= 16GB VRAM for efficient inference (depends on quantization/precision). - **Software Dependencies**: - Hugging Face Transformers (`transformers`) - PyTorch (`torch`) - Parameter-Efficient Fine-Tuning (`peft`) --- ## **Citation** If you use UIGEN-T2 or the LoRA checkpoints in your work, please cite us: ```bibtex @misc{tesslate_UIGEN-T2, title={UIGEN-T2: Scaling UI Generation with Reasoning on Qwen2.5-Coder-7B}, author={tesslate}, year={2024}, # Adjust year if needed publisher={Hugging Face}, url={https://huggingface.co/tesslate/UIGEN-T2} # Placeholder URL } ``` --- ## **Contact & Community** - **Creator:** [tesslate](https://huggingface.co/tesslate) - **LoRA Checkpoints**: [tesslate](https://huggingface.co/tesslate) - **Repository & Demo**: [smirki](https://huggingface.co/smirki) ```
Mungert/shuttle-3.5-GGUF
Mungert
2025-06-15T19:46:04Z
108
1
transformers
[ "transformers", "gguf", "chat", "text-generation", "en", "license:apache-2.0", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
text-generation
2025-05-08T18:19:48Z
--- library_name: transformers license: apache-2.0 license_link: https://huggingface.co/shuttleai/shuttle-3.5/blob/main/LICENSE pipeline_tag: text-generation language: - en tags: - chat --- # <span style="color: #7FFF7F;">shuttle-3.5 GGUF Models</span> ## <span style="color: #7F7FFF;">Model Generation Details</span> This model was generated using [llama.cpp](https://github.com/ggerganov/llama.cpp) at commit [`8c83449`](https://github.com/ggerganov/llama.cpp/commit/8c83449cb780c201839653812681c3a4cf17feed). ## <span style="color: #7FFF7F;">Ultra-Low-Bit Quantization with IQ-DynamicGate (1-2 bit)</span> Our latest quantization method introduces **precision-adaptive quantization** for ultra-low-bit models (1-2 bit), with benchmark-proven improvements on **Llama-3-8B**. This approach uses layer-specific strategies to preserve accuracy while maintaining extreme memory efficiency. ### **Benchmark Context** All tests conducted on **Llama-3-8B-Instruct** using: - Standard perplexity evaluation pipeline - 2048-token context window - Same prompt set across all quantizations ### **Method** - **Dynamic Precision Allocation**: - First/Last 25% of layers โ†’ IQ4_XS (selected layers) - Middle 50% โ†’ IQ2_XXS/IQ3_S (increase efficiency) - **Critical Component Protection**: - Embeddings/output layers use Q5_K - Reduces error propagation by 38% vs standard 1-2bit ### **Quantization Performance Comparison (Llama-3-8B)** | Quantization | Standard PPL | DynamicGate PPL | ฮ” PPL | Std Size | DG Size | ฮ” Size | Std Speed | DG Speed | |--------------|--------------|------------------|---------|----------|---------|--------|-----------|----------| | IQ2_XXS | 11.30 | 9.84 | -12.9% | 2.5G | 2.6G | +0.1G | 234s | 246s | | IQ2_XS | 11.72 | 11.63 | -0.8% | 2.7G | 2.8G | +0.1G | 242s | 246s | | IQ2_S | 14.31 | 9.02 | -36.9% | 2.7G | 2.9G | +0.2G | 238s | 244s | | IQ1_M | 27.46 | 15.41 | -43.9% | 2.2G | 2.5G | +0.3G | 206s | 212s | | IQ1_S | 53.07 | 32.00 | -39.7% | 2.1G | 2.4G | +0.3G | 184s | 209s | **Key**: - PPL = Perplexity (lower is better) - ฮ” PPL = Percentage change from standard to DynamicGate - Speed = Inference time (CPU avx2, 2048 token context) - Size differences reflect mixed quantization overhead **Key Improvements:** - ๐Ÿ”ฅ **IQ1_M** shows massive 43.9% perplexity reduction (27.46 โ†’ 15.41) - ๐Ÿš€ **IQ2_S** cuts perplexity by 36.9% while adding only 0.2GB - โšก **IQ1_S** maintains 39.7% better accuracy despite 1-bit quantization **Tradeoffs:** - All variants have modest size increases (0.1-0.3GB) - Inference speeds remain comparable (<5% difference) ### **When to Use These Models** ๐Ÿ“Œ **Fitting models into GPU VRAM** โœ” **Memory-constrained deployments** โœ” **Cpu and Edge Devices** where 1-2bit errors can be tolerated โœ” **Research** into ultra-low-bit quantization ## **Choosing the Right Model Format** Selecting the correct model format depends on your **hardware capabilities** and **memory constraints**. ### **BF16 (Brain Float 16) โ€“ Use if BF16 acceleration is available** - A 16-bit floating-point format designed for **faster computation** while retaining good precision. - Provides **similar dynamic range** as FP32 but with **lower memory usage**. - Recommended if your hardware supports **BF16 acceleration** (check your device's specs). - Ideal for **high-performance inference** with **reduced memory footprint** compared to FP32. ๐Ÿ“Œ **Use BF16 if:** โœ” Your hardware has native **BF16 support** (e.g., newer GPUs, TPUs). โœ” You want **higher precision** while saving memory. 
โœ” You plan to **requantize** the model into another format. ๐Ÿ“Œ **Avoid BF16 if:** โŒ Your hardware does **not** support BF16 (it may fall back to FP32 and run slower). โŒ You need compatibility with older devices that lack BF16 optimization. --- ### **F16 (Float 16) โ€“ More widely supported than BF16** - A 16-bit floating-point **high precision** but with less of range of values than BF16. - Works on most devices with **FP16 acceleration support** (including many GPUs and some CPUs). - Slightly lower numerical precision than BF16 but generally sufficient for inference. ๐Ÿ“Œ **Use F16 if:** โœ” Your hardware supports **FP16** but **not BF16**. โœ” You need a **balance between speed, memory usage, and accuracy**. โœ” You are running on a **GPU** or another device optimized for FP16 computations. ๐Ÿ“Œ **Avoid F16 if:** โŒ Your device lacks **native FP16 support** (it may run slower than expected). โŒ You have memory limitations. --- ### **Quantized Models (Q4_K, Q6_K, Q8, etc.) โ€“ For CPU & Low-VRAM Inference** Quantization reduces model size and memory usage while maintaining as much accuracy as possible. - **Lower-bit models (Q4_K)** โ†’ **Best for minimal memory usage**, may have lower precision. - **Higher-bit models (Q6_K, Q8_0)** โ†’ **Better accuracy**, requires more memory. ๐Ÿ“Œ **Use Quantized Models if:** โœ” You are running inference on a **CPU** and need an optimized model. โœ” Your device has **low VRAM** and cannot load full-precision models. โœ” You want to reduce **memory footprint** while keeping reasonable accuracy. ๐Ÿ“Œ **Avoid Quantized Models if:** โŒ You need **maximum accuracy** (full-precision models are better for this). โŒ Your hardware has enough VRAM for higher-precision formats (BF16/F16). --- ### **Very Low-Bit Quantization (IQ3_XS, IQ3_S, IQ3_M, Q4_K, Q4_0)** These models are optimized for **extreme memory efficiency**, making them ideal for **low-power devices** or **large-scale deployments** where memory is a critical constraint. - **IQ3_XS**: Ultra-low-bit quantization (3-bit) with **extreme memory efficiency**. - **Use case**: Best for **ultra-low-memory devices** where even Q4_K is too large. - **Trade-off**: Lower accuracy compared to higher-bit quantizations. - **IQ3_S**: Small block size for **maximum memory efficiency**. - **Use case**: Best for **low-memory devices** where **IQ3_XS** is too aggressive. - **IQ3_M**: Medium block size for better accuracy than **IQ3_S**. - **Use case**: Suitable for **low-memory devices** where **IQ3_S** is too limiting. - **Q4_K**: 4-bit quantization with **block-wise optimization** for better accuracy. - **Use case**: Best for **low-memory devices** where **Q6_K** is too large. - **Q4_0**: Pure 4-bit quantization, optimized for **ARM devices**. - **Use case**: Best for **ARM-based devices** or **low-memory environments**. 
--- ### **Summary Table: Model Format Selection** | Model Format | Precision | Memory Usage | Device Requirements | Best Use Case | |--------------|------------|---------------|----------------------|---------------| | **BF16** | Highest | High | BF16-supported GPU/CPUs | High-speed inference with reduced memory | | **F16** | High | High | FP16-supported devices | GPU inference when BF16 isn't available | | **Q4_K** | Medium Low | Low | CPU or Low-VRAM devices | Best for memory-constrained environments | | **Q6_K** | Medium | Moderate | CPU with more memory | Better accuracy while still being quantized | | **Q8_0** | High | Moderate | CPU or GPU with enough VRAM | Best accuracy among quantized models | | **IQ3_XS** | Very Low | Very Low | Ultra-low-memory devices | Extreme memory efficiency and low accuracy | | **Q4_0** | Low | Low | ARM or low-memory devices | llama.cpp can optimize for ARM devices | --- ## **Included Files & Details** ### `shuttle-3.5-bf16.gguf` - Model weights preserved in **BF16**. - Use this if you want to **requantize** the model into a different format. - Best if your device supports **BF16 acceleration**. ### `shuttle-3.5-f16.gguf` - Model weights stored in **F16**. - Use if your device supports **FP16**, especially if BF16 is not available. ### `shuttle-3.5-bf16-q8_0.gguf` - **Output & embeddings** remain in **BF16**. - All other layers quantized to **Q8_0**. - Use if your device supports **BF16** and you want a quantized version. ### `shuttle-3.5-f16-q8_0.gguf` - **Output & embeddings** remain in **F16**. - All other layers quantized to **Q8_0**. ### `shuttle-3.5-q4_k.gguf` - **Output & embeddings** quantized to **Q8_0**. - All other layers quantized to **Q4_K**. - Good for **CPU inference** with limited memory. ### `shuttle-3.5-q4_k_s.gguf` - Smallest **Q4_K** variant, using less memory at the cost of accuracy. - Best for **very low-memory setups**. ### `shuttle-3.5-q6_k.gguf` - **Output & embeddings** quantized to **Q8_0**. - All other layers quantized to **Q6_K** . ### `shuttle-3.5-q8_0.gguf` - Fully **Q8** quantized model for better accuracy. - Requires **more memory** but offers higher precision. ### `shuttle-3.5-iq3_xs.gguf` - **IQ3_XS** quantization, optimized for **extreme memory efficiency**. - Best for **ultra-low-memory devices**. ### `shuttle-3.5-iq3_m.gguf` - **IQ3_M** quantization, offering a **medium block size** for better accuracy. - Suitable for **low-memory devices**. ### `shuttle-3.5-q4_0.gguf` - Pure **Q4_0** quantization, optimized for **ARM devices**. - Best for **low-memory environments**. - Prefer IQ4_NL for better accuracy. 
# <span id="testllm" style="color: #7F7FFF;">🚀 If you find these models useful</span> ❤ **Please click "Like" if you find this useful!** Help me test my **AI-Powered Network Monitor Assistant** with **quantum-ready security checks**: 👉 [Quantum Network Monitor](https://readyforquantum.com/dashboard/?assistant=open&utm_source=huggingface&utm_medium=referral&utm_campaign=huggingface_repo_readme) 💬 **How to test**: Choose an **AI assistant type**: - `TurboLLM` (GPT-4o-mini) - `HugLLM` (Hugging Face open-source) - `TestLLM` (Experimental CPU-only) ### **What I’m Testing** I’m pushing the limits of **small open-source models for AI network monitoring**, specifically: - **Function calling** against live network services - **How small can a model go** while still handling: - Automated **Nmap scans** - **Quantum-readiness checks** - **Network Monitoring tasks** 🟡 **TestLLM** – Current experimental model (llama.cpp on 2 CPU threads): - ✅ **Zero-configuration setup** - ⏳ 30s load time (slow inference but **no API costs**) - 🔧 **Help wanted!** If you’re into **edge-device AI**, let’s collaborate! ### **Other Assistants** 🟢 **TurboLLM** – Uses **gpt-4o-mini** for: - **Creating custom cmd processors to run .net code on Quantum Network Monitor Agents** - **Real-time network diagnostics and monitoring** - **Security Audits** - **Penetration testing** (Nmap/Metasploit) 🔵 **HugLLM** – Latest open-source models: - 🌐 Runs on Hugging Face Inference API ### 💡 **Example commands you could test**: 1. `"Give me info on my website's SSL certificate"` 2. `"Check if my server is using quantum-safe encryption for communication"` 3. `"Run a comprehensive security audit on my server"` 4. `"Create a cmd processor to .. (whatever you want)"` Note: you need to install a Quantum Network Monitor Agent to run the .net code. This is a very flexible and powerful feature. Use with caution! ### Final Word I fund the servers used to create these model files, run the Quantum Network Monitor service, and pay for inference from Novita and OpenAI, all out of my own pocket. All the code behind the model creation and the Quantum Network Monitor project is [open source](https://github.com/Mungert69). Feel free to use whatever you find helpful. If you appreciate the work, please consider [buying me a coffee](https://www.buymeacoffee.com/mahadeva) ☕. Your support helps cover service costs and allows me to raise token limits for everyone. I'm also open to job opportunities or sponsorship. Thank you! 😊 <p style="font-size:20px;" align="left"> <div style="border-radius: 15px;"> <img src="https://storage.shuttleai.com/shuttle-3.5.png" alt="ShuttleAI Thumbnail" style="width: auto; height: auto; margin-left: 0; object-fit: cover; border-radius: 15px;"> </div> ## Shuttle-3.5 ### ☁️ <a href="https://shuttleai.com/" target="_blank">Use via API</a> • 💬 <a href="https://shuttlechat.com/" target="_blank">ShuttleChat</a> We are excited to introduce Shuttle-3.5, a fine-tuned version of [Qwen3 32B](https://huggingface.co/Qwen/Qwen3-32B), emulating the writing style of Claude 3 models and thoroughly trained on role-playing data. - **Unique support for seamless switching between thinking mode** (for complex logical reasoning, math, and coding) and **non-thinking mode** (for efficient, general-purpose dialogue) **within a single model**, ensuring optimal performance across various scenarios (see the usage sketch below).
- **Significant enhancement of its reasoning capabilities**, surpassing previous QwQ (in thinking mode) and Qwen2.5 instruct models (in non-thinking mode) on mathematics, code generation, and commonsense logical reasoning. - **Superior human preference alignment**, excelling in creative writing, role-playing, multi-turn dialogues, and instruction following, to deliver a more natural, engaging, and immersive conversational experience. - **Expertise in agent capabilities**, enabling precise integration with external tools in both thinking and non-thinking modes and achieving leading performance among open-source models in complex agent-based tasks. - **Support of 100+ languages and dialects** with strong capabilities for **multilingual instruction following** and **translation**. ## Model Overview **Shuttle 3.5** has the following features: - Type: Causal Language Models - Training Stage: Pretraining & Post-training - Number of Parameters: 32.8B - Number of Parameters (Non-Embedding): 31.2B - Number of Layers: 64 - Number of Attention Heads (GQA): 64 for Q and 8 for KV - Context Length: 32,768 natively and [131,072 tokens with YaRN](#processing-long-texts). ## Fine-Tuning Details - **Training Setup**: The model was trained on 130 million tokens for 40 hours on an H100 GPU.
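As a quick usage sketch for the thinking/non-thinking switch mentioned above: since Shuttle-3.5 is fine-tuned from Qwen3-32B, it presumably inherits the Qwen3 chat template and its `enable_thinking` flag. The repo id below is a placeholder, and the flag is assumed rather than confirmed for this fine-tune.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "shuttleai/shuttle-3.5"  # placeholder id -- use the actual repository name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Write a short scene between two rival detectives."}]
# enable_thinking=False requests the fast, non-thinking mode (assumed to follow Qwen3's template).
text = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True, enable_thinking=False
)
inputs = tokenizer([text], return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(output_ids[0][inputs.input_ids.shape[-1]:], skip_special_tokens=True))
```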
Mungert/DistilQwen2.5-DS3-0324-7B-GGUF
Mungert
2025-06-15T19:45:41Z
233
1
null
[ "gguf", "license:apache-2.0", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2025-05-04T01:28:02Z
--- license: apache-2.0 --- # <span style="color: #7FFF7F;">DistilQwen2.5-DS3-0324-7B GGUF Models</span> ## <span style="color: #7F7FFF;">Model Generation Details</span> This model was generated using [llama.cpp](https://github.com/ggerganov/llama.cpp) at commit [`19e899c`](https://github.com/ggerganov/llama.cpp/commit/19e899ce21a7c9ffcf8bb2b22269a75f6e078f8f). ## <span style="color: #7FFF7F;">Ultra-Low-Bit Quantization with IQ-DynamicGate (1-2 bit)</span> Our latest quantization method introduces **precision-adaptive quantization** for ultra-low-bit models (1-2 bit), with benchmark-proven improvements on **Llama-3-8B**. This approach uses layer-specific strategies to preserve accuracy while maintaining extreme memory efficiency. ### **Benchmark Context** All tests conducted on **Llama-3-8B-Instruct** using: - Standard perplexity evaluation pipeline - 2048-token context window - Same prompt set across all quantizations ### **Method** - **Dynamic Precision Allocation**: - First/Last 25% of layers โ†’ IQ4_XS (selected layers) - Middle 50% โ†’ IQ2_XXS/IQ3_S (increase efficiency) - **Critical Component Protection**: - Embeddings/output layers use Q5_K - Reduces error propagation by 38% vs standard 1-2bit ### **Quantization Performance Comparison (Llama-3-8B)** | Quantization | Standard PPL | DynamicGate PPL | ฮ” PPL | Std Size | DG Size | ฮ” Size | Std Speed | DG Speed | |--------------|--------------|------------------|---------|----------|---------|--------|-----------|----------| | IQ2_XXS | 11.30 | 9.84 | -12.9% | 2.5G | 2.6G | +0.1G | 234s | 246s | | IQ2_XS | 11.72 | 11.63 | -0.8% | 2.7G | 2.8G | +0.1G | 242s | 246s | | IQ2_S | 14.31 | 9.02 | -36.9% | 2.7G | 2.9G | +0.2G | 238s | 244s | | IQ1_M | 27.46 | 15.41 | -43.9% | 2.2G | 2.5G | +0.3G | 206s | 212s | | IQ1_S | 53.07 | 32.00 | -39.7% | 2.1G | 2.4G | +0.3G | 184s | 209s | **Key**: - PPL = Perplexity (lower is better) - ฮ” PPL = Percentage change from standard to DynamicGate - Speed = Inference time (CPU avx2, 2048 token context) - Size differences reflect mixed quantization overhead **Key Improvements:** - ๐Ÿ”ฅ **IQ1_M** shows massive 43.9% perplexity reduction (27.46 โ†’ 15.41) - ๐Ÿš€ **IQ2_S** cuts perplexity by 36.9% while adding only 0.2GB - โšก **IQ1_S** maintains 39.7% better accuracy despite 1-bit quantization **Tradeoffs:** - All variants have modest size increases (0.1-0.3GB) - Inference speeds remain comparable (<5% difference) ### **When to Use These Models** ๐Ÿ“Œ **Fitting models into GPU VRAM** โœ” **Memory-constrained deployments** โœ” **Cpu and Edge Devices** where 1-2bit errors can be tolerated โœ” **Research** into ultra-low-bit quantization ## **Choosing the Right Model Format** Selecting the correct model format depends on your **hardware capabilities** and **memory constraints**. ### **BF16 (Brain Float 16) โ€“ Use if BF16 acceleration is available** - A 16-bit floating-point format designed for **faster computation** while retaining good precision. - Provides **similar dynamic range** as FP32 but with **lower memory usage**. - Recommended if your hardware supports **BF16 acceleration** (check your device's specs). - Ideal for **high-performance inference** with **reduced memory footprint** compared to FP32. ๐Ÿ“Œ **Use BF16 if:** โœ” Your hardware has native **BF16 support** (e.g., newer GPUs, TPUs). โœ” You want **higher precision** while saving memory. โœ” You plan to **requantize** the model into another format. 
๐Ÿ“Œ **Avoid BF16 if:** โŒ Your hardware does **not** support BF16 (it may fall back to FP32 and run slower). โŒ You need compatibility with older devices that lack BF16 optimization. --- ### **F16 (Float 16) โ€“ More widely supported than BF16** - A 16-bit floating-point **high precision** but with less of range of values than BF16. - Works on most devices with **FP16 acceleration support** (including many GPUs and some CPUs). - Slightly lower numerical precision than BF16 but generally sufficient for inference. ๐Ÿ“Œ **Use F16 if:** โœ” Your hardware supports **FP16** but **not BF16**. โœ” You need a **balance between speed, memory usage, and accuracy**. โœ” You are running on a **GPU** or another device optimized for FP16 computations. ๐Ÿ“Œ **Avoid F16 if:** โŒ Your device lacks **native FP16 support** (it may run slower than expected). โŒ You have memory limitations. --- ### **Quantized Models (Q4_K, Q6_K, Q8, etc.) โ€“ For CPU & Low-VRAM Inference** Quantization reduces model size and memory usage while maintaining as much accuracy as possible. - **Lower-bit models (Q4_K)** โ†’ **Best for minimal memory usage**, may have lower precision. - **Higher-bit models (Q6_K, Q8_0)** โ†’ **Better accuracy**, requires more memory. ๐Ÿ“Œ **Use Quantized Models if:** โœ” You are running inference on a **CPU** and need an optimized model. โœ” Your device has **low VRAM** and cannot load full-precision models. โœ” You want to reduce **memory footprint** while keeping reasonable accuracy. ๐Ÿ“Œ **Avoid Quantized Models if:** โŒ You need **maximum accuracy** (full-precision models are better for this). โŒ Your hardware has enough VRAM for higher-precision formats (BF16/F16). --- ### **Very Low-Bit Quantization (IQ3_XS, IQ3_S, IQ3_M, Q4_K, Q4_0)** These models are optimized for **extreme memory efficiency**, making them ideal for **low-power devices** or **large-scale deployments** where memory is a critical constraint. - **IQ3_XS**: Ultra-low-bit quantization (3-bit) with **extreme memory efficiency**. - **Use case**: Best for **ultra-low-memory devices** where even Q4_K is too large. - **Trade-off**: Lower accuracy compared to higher-bit quantizations. - **IQ3_S**: Small block size for **maximum memory efficiency**. - **Use case**: Best for **low-memory devices** where **IQ3_XS** is too aggressive. - **IQ3_M**: Medium block size for better accuracy than **IQ3_S**. - **Use case**: Suitable for **low-memory devices** where **IQ3_S** is too limiting. - **Q4_K**: 4-bit quantization with **block-wise optimization** for better accuracy. - **Use case**: Best for **low-memory devices** where **Q6_K** is too large. - **Q4_0**: Pure 4-bit quantization, optimized for **ARM devices**. - **Use case**: Best for **ARM-based devices** or **low-memory environments**. 
--- ### **Summary Table: Model Format Selection** | Model Format | Precision | Memory Usage | Device Requirements | Best Use Case | |--------------|------------|---------------|----------------------|---------------| | **BF16** | Highest | High | BF16-supported GPU/CPUs | High-speed inference with reduced memory | | **F16** | High | High | FP16-supported devices | GPU inference when BF16 isn't available | | **Q4_K** | Medium Low | Low | CPU or Low-VRAM devices | Best for memory-constrained environments | | **Q6_K** | Medium | Moderate | CPU with more memory | Better accuracy while still being quantized | | **Q8_0** | High | Moderate | CPU or GPU with enough VRAM | Best accuracy among quantized models | | **IQ3_XS** | Very Low | Very Low | Ultra-low-memory devices | Extreme memory efficiency and low accuracy | | **Q4_0** | Low | Low | ARM or low-memory devices | llama.cpp can optimize for ARM devices | --- ## **Included Files & Details** ### `DistilQwen2.5-DS3-0324-7B-bf16.gguf` - Model weights preserved in **BF16**. - Use this if you want to **requantize** the model into a different format. - Best if your device supports **BF16 acceleration**. ### `DistilQwen2.5-DS3-0324-7B-f16.gguf` - Model weights stored in **F16**. - Use if your device supports **FP16**, especially if BF16 is not available. ### `DistilQwen2.5-DS3-0324-7B-bf16-q8_0.gguf` - **Output & embeddings** remain in **BF16**. - All other layers quantized to **Q8_0**. - Use if your device supports **BF16** and you want a quantized version. ### `DistilQwen2.5-DS3-0324-7B-f16-q8_0.gguf` - **Output & embeddings** remain in **F16**. - All other layers quantized to **Q8_0**. ### `DistilQwen2.5-DS3-0324-7B-q4_k.gguf` - **Output & embeddings** quantized to **Q8_0**. - All other layers quantized to **Q4_K**. - Good for **CPU inference** with limited memory. ### `DistilQwen2.5-DS3-0324-7B-q4_k_s.gguf` - Smallest **Q4_K** variant, using less memory at the cost of accuracy. - Best for **very low-memory setups**. ### `DistilQwen2.5-DS3-0324-7B-q6_k.gguf` - **Output & embeddings** quantized to **Q8_0**. - All other layers quantized to **Q6_K** . ### `DistilQwen2.5-DS3-0324-7B-q8_0.gguf` - Fully **Q8** quantized model for better accuracy. - Requires **more memory** but offers higher precision. ### `DistilQwen2.5-DS3-0324-7B-iq3_xs.gguf` - **IQ3_XS** quantization, optimized for **extreme memory efficiency**. - Best for **ultra-low-memory devices**. ### `DistilQwen2.5-DS3-0324-7B-iq3_m.gguf` - **IQ3_M** quantization, offering a **medium block size** for better accuracy. - Suitable for **low-memory devices**. ### `DistilQwen2.5-DS3-0324-7B-q4_0.gguf` - Pure **Q4_0** quantization, optimized for **ARM devices**. - Best for **low-memory environments**. - Prefer IQ4_NL for better accuracy. 
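For quick local CPU inference with one of the quantized files above, a llama-cpp-python sketch along these lines should work; it assumes the package is installed, the GGUF file has already been downloaded, and the thread count and context size shown are only illustrative.

```python
from llama_cpp import Llama

# Load a locally downloaded quantized file from the list above.
llm = Llama(model_path="DistilQwen2.5-DS3-0324-7B-q4_k.gguf", n_ctx=4096, n_threads=8)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Give me a short introduction to large language models."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```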
# <span id="testllm" style="color: #7F7FFF;">🚀 If you find these models useful</span> ❤ **Please click "Like" if you find this useful!** Help me test my **AI-Powered Network Monitor Assistant** with **quantum-ready security checks**: 👉 [Quantum Network Monitor](https://readyforquantum.com/dashboard/?assistant=open&utm_source=huggingface&utm_medium=referral&utm_campaign=huggingface_repo_readme) 💬 **How to test**: Choose an **AI assistant type**: - `TurboLLM` (GPT-4o-mini) - `HugLLM` (Hugging Face open-source) - `TestLLM` (Experimental CPU-only) ### **What I’m Testing** I’m pushing the limits of **small open-source models for AI network monitoring**, specifically: - **Function calling** against live network services - **How small can a model go** while still handling: - Automated **Nmap scans** - **Quantum-readiness checks** - **Network Monitoring tasks** 🟡 **TestLLM** – Current experimental model (llama.cpp on 2 CPU threads): - ✅ **Zero-configuration setup** - ⏳ 30s load time (slow inference but **no API costs**) - 🔧 **Help wanted!** If you’re into **edge-device AI**, let’s collaborate! ### **Other Assistants** 🟢 **TurboLLM** – Uses **gpt-4o-mini** for: - **Creating custom cmd processors to run .net code on Quantum Network Monitor Agents** - **Real-time network diagnostics and monitoring** - **Security Audits** - **Penetration testing** (Nmap/Metasploit) 🔵 **HugLLM** – Latest open-source models: - 🌐 Runs on Hugging Face Inference API ### 💡 **Example commands you could test**: 1. `"Give me info on my website's SSL certificate"` 2. `"Check if my server is using quantum-safe encryption for communication"` 3. `"Run a comprehensive security audit on my server"` 4. `"Create a cmd processor to .. (whatever you want)"` Note: you need to install a Quantum Network Monitor Agent to run the .net code. This is a very flexible and powerful feature. Use with caution! ### Final Word I fund the servers used to create these model files, run the Quantum Network Monitor service, and pay for inference from Novita and OpenAI, all out of my own pocket. All the code behind the model creation and the Quantum Network Monitor project is [open source](https://github.com/Mungert69). Feel free to use whatever you find helpful. If you appreciate the work, please consider [buying me a coffee](https://www.buymeacoffee.com/mahadeva) ☕. Your support helps cover service costs and allows me to raise token limits for everyone. I'm also open to job opportunities or sponsorship. Thank you! 😊 ## 📖 Introduction # DistilQwen2.5-DS3-0324 Series: Fast-Thinking Reasoning Models ## Overview In response to the industry challenge of balancing efficient reasoning with cognitive capabilities, the DistilQwen2.5-DS3-0324 series innovatively transfers the fast-thinking capabilities of DeepSeekV3-0324 to lightweight models. Through a two-stage distillation framework, this series achieves high performance while delivering: - **Enhanced Reasoning Speed**: Reduces output tokens by 60-80% (compared to slow-thinking models) - **Reduced Resource Consumption**: Suitable for edge computing deployment - **Elimination of Cognitive Bias**: Proprietary trajectory alignment technology ## Core Innovations ### 1.
Fast-Thinking Distillation Framework - **Stage 1: Fast-Thinking CoT Data Collection** - **Long-to-Short Rewriting**: Extracts key reasoning steps from DeepSeek-R1 - **Teacher Model Distillation**: Captures the rapid reasoning trajectories of DeepSeekV3-0324 - **Stage 2: CoT Trajectory Cognitive Alignment** - **Dynamic Difficulty Grading** (Easy/Medium/Hard) - LLM-as-a-Judge evaluates small model comprehensibility - Simple chain expansion → Adds necessary steps - Hard chain simplification → Removes high-level logical leaps - **Validation Mechanism**: Iterative optimization until all data reaches "Medium" rating ### 2. Performance Breakthroughs - **32B Model** approaches the performance of closed-source models with 10x the parameters on the GPQA Diamond benchmark - **Significant Improvement in Reasoning Efficiency** (see comparison table below) | Model | MMLU_PRO Tokens | AIME2024 Tokens | Speed Gain | |--------------------------------|-----------------|-----------------|------------| | DistilQwen2.5-R1-32B (Slow-Thinking) | 4198 | 12178 | 1x | | DistilQwen2.5-DS3-0324-32B | 690 | 4177 | 5-8x | ## Technical Advantages - **Two-Stage Distillation**: First compresses reasoning length, then aligns cognitive trajectories - **Dynamic Data Optimization**: Adaptive difficulty adjustment ensures knowledge transferability - **Open-Source Compatibility**: Fine-tuned based on the Qwen2.5 base model ## 🚀 Quick Start ```python from transformers import AutoModelForCausalLM, AutoTokenizer device = "cuda" # the device to load the model onto model = AutoModelForCausalLM.from_pretrained( "alibaba-pai/DistilQwen2.5-DS3-0324-7B", torch_dtype="auto", device_map="auto" ) tokenizer = AutoTokenizer.from_pretrained("alibaba-pai/DistilQwen2.5-DS3-0324-7B") prompt = "Give me a short introduction to large language model." messages = [ {"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant. You should think step-by-step."}, {"role": "user", "content": prompt}, ] text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) model_inputs = tokenizer([text], return_tensors="pt").to(device) generated_ids = model.generate( model_inputs.input_ids, max_new_tokens=2048, ) generated_ids = [ output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids) ] response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] ```
Mungert/Phi-4-mini-reasoning-GGUF
Mungert
2025-06-15T19:45:31Z
2,826
3
transformers
[ "transformers", "gguf", "nlp", "math", "code", "text-generation", "en", "arxiv:2504.21233", "license:mit", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
text-generation
2025-05-02T18:21:44Z
--- language: - en library_name: transformers license: mit license_link: https://huggingface.co/microsoft/Phi-4-mini-instruct-reasoning/resolve/main/LICENSE pipeline_tag: text-generation tags: - nlp - math - code widget: - messages: - role: user content: How to solve 3*x^2+4*x+5=1? --- # <span style="color: #7FFF7F;">Phi-4-mini-reasoning GGUF Models</span> ## <span style="color: #7F7FFF;">Model Generation Details</span> This model was generated using [llama.cpp](https://github.com/ggerganov/llama.cpp) at commit [`19e899c`](https://github.com/ggerganov/llama.cpp/commit/19e899ce21a7c9ffcf8bb2b22269a75f6e078f8f). ## <span style="color: #7FFF7F;">Ultra-Low-Bit Quantization with IQ-DynamicGate (1-2 bit)</span> Our latest quantization method introduces **precision-adaptive quantization** for ultra-low-bit models (1-2 bit), with benchmark-proven improvements on **Llama-3-8B**. This approach uses layer-specific strategies to preserve accuracy while maintaining extreme memory efficiency. ### **Benchmark Context** All tests conducted on **Llama-3-8B-Instruct** using: - Standard perplexity evaluation pipeline - 2048-token context window - Same prompt set across all quantizations ### **Method** - **Dynamic Precision Allocation**: - First/Last 25% of layers โ†’ IQ4_XS (selected layers) - Middle 50% โ†’ IQ2_XXS/IQ3_S (increase efficiency) - **Critical Component Protection**: - Embeddings/output layers use Q5_K - Reduces error propagation by 38% vs standard 1-2bit ### **Quantization Performance Comparison (Llama-3-8B)** | Quantization | Standard PPL | DynamicGate PPL | ฮ” PPL | Std Size | DG Size | ฮ” Size | Std Speed | DG Speed | |--------------|--------------|------------------|---------|----------|---------|--------|-----------|----------| | IQ2_XXS | 11.30 | 9.84 | -12.9% | 2.5G | 2.6G | +0.1G | 234s | 246s | | IQ2_XS | 11.72 | 11.63 | -0.8% | 2.7G | 2.8G | +0.1G | 242s | 246s | | IQ2_S | 14.31 | 9.02 | -36.9% | 2.7G | 2.9G | +0.2G | 238s | 244s | | IQ1_M | 27.46 | 15.41 | -43.9% | 2.2G | 2.5G | +0.3G | 206s | 212s | | IQ1_S | 53.07 | 32.00 | -39.7% | 2.1G | 2.4G | +0.3G | 184s | 209s | **Key**: - PPL = Perplexity (lower is better) - ฮ” PPL = Percentage change from standard to DynamicGate - Speed = Inference time (CPU avx2, 2048 token context) - Size differences reflect mixed quantization overhead **Key Improvements:** - ๐Ÿ”ฅ **IQ1_M** shows massive 43.9% perplexity reduction (27.46 โ†’ 15.41) - ๐Ÿš€ **IQ2_S** cuts perplexity by 36.9% while adding only 0.2GB - โšก **IQ1_S** maintains 39.7% better accuracy despite 1-bit quantization **Tradeoffs:** - All variants have modest size increases (0.1-0.3GB) - Inference speeds remain comparable (<5% difference) ### **When to Use These Models** ๐Ÿ“Œ **Fitting models into GPU VRAM** โœ” **Memory-constrained deployments** โœ” **Cpu and Edge Devices** where 1-2bit errors can be tolerated โœ” **Research** into ultra-low-bit quantization ## **Choosing the Right Model Format** Selecting the correct model format depends on your **hardware capabilities** and **memory constraints**. ### **BF16 (Brain Float 16) โ€“ Use if BF16 acceleration is available** - A 16-bit floating-point format designed for **faster computation** while retaining good precision. - Provides **similar dynamic range** as FP32 but with **lower memory usage**. - Recommended if your hardware supports **BF16 acceleration** (check your device's specs). - Ideal for **high-performance inference** with **reduced memory footprint** compared to FP32. 
๐Ÿ“Œ **Use BF16 if:** โœ” Your hardware has native **BF16 support** (e.g., newer GPUs, TPUs). โœ” You want **higher precision** while saving memory. โœ” You plan to **requantize** the model into another format. ๐Ÿ“Œ **Avoid BF16 if:** โŒ Your hardware does **not** support BF16 (it may fall back to FP32 and run slower). โŒ You need compatibility with older devices that lack BF16 optimization. --- ### **F16 (Float 16) โ€“ More widely supported than BF16** - A 16-bit floating-point **high precision** but with less of range of values than BF16. - Works on most devices with **FP16 acceleration support** (including many GPUs and some CPUs). - Slightly lower numerical precision than BF16 but generally sufficient for inference. ๐Ÿ“Œ **Use F16 if:** โœ” Your hardware supports **FP16** but **not BF16**. โœ” You need a **balance between speed, memory usage, and accuracy**. โœ” You are running on a **GPU** or another device optimized for FP16 computations. ๐Ÿ“Œ **Avoid F16 if:** โŒ Your device lacks **native FP16 support** (it may run slower than expected). โŒ You have memory limitations. --- ### **Quantized Models (Q4_K, Q6_K, Q8, etc.) โ€“ For CPU & Low-VRAM Inference** Quantization reduces model size and memory usage while maintaining as much accuracy as possible. - **Lower-bit models (Q4_K)** โ†’ **Best for minimal memory usage**, may have lower precision. - **Higher-bit models (Q6_K, Q8_0)** โ†’ **Better accuracy**, requires more memory. ๐Ÿ“Œ **Use Quantized Models if:** โœ” You are running inference on a **CPU** and need an optimized model. โœ” Your device has **low VRAM** and cannot load full-precision models. โœ” You want to reduce **memory footprint** while keeping reasonable accuracy. ๐Ÿ“Œ **Avoid Quantized Models if:** โŒ You need **maximum accuracy** (full-precision models are better for this). โŒ Your hardware has enough VRAM for higher-precision formats (BF16/F16). --- ### **Very Low-Bit Quantization (IQ3_XS, IQ3_S, IQ3_M, Q4_K, Q4_0)** These models are optimized for **extreme memory efficiency**, making them ideal for **low-power devices** or **large-scale deployments** where memory is a critical constraint. - **IQ3_XS**: Ultra-low-bit quantization (3-bit) with **extreme memory efficiency**. - **Use case**: Best for **ultra-low-memory devices** where even Q4_K is too large. - **Trade-off**: Lower accuracy compared to higher-bit quantizations. - **IQ3_S**: Small block size for **maximum memory efficiency**. - **Use case**: Best for **low-memory devices** where **IQ3_XS** is too aggressive. - **IQ3_M**: Medium block size for better accuracy than **IQ3_S**. - **Use case**: Suitable for **low-memory devices** where **IQ3_S** is too limiting. - **Q4_K**: 4-bit quantization with **block-wise optimization** for better accuracy. - **Use case**: Best for **low-memory devices** where **Q6_K** is too large. - **Q4_0**: Pure 4-bit quantization, optimized for **ARM devices**. - **Use case**: Best for **ARM-based devices** or **low-memory environments**. 
--- ### **Summary Table: Model Format Selection** | Model Format | Precision | Memory Usage | Device Requirements | Best Use Case | |--------------|------------|---------------|----------------------|---------------| | **BF16** | Highest | High | BF16-supported GPU/CPUs | High-speed inference with reduced memory | | **F16** | High | High | FP16-supported devices | GPU inference when BF16 isn't available | | **Q4_K** | Medium Low | Low | CPU or Low-VRAM devices | Best for memory-constrained environments | | **Q6_K** | Medium | Moderate | CPU with more memory | Better accuracy while still being quantized | | **Q8_0** | High | Moderate | CPU or GPU with enough VRAM | Best accuracy among quantized models | | **IQ3_XS** | Very Low | Very Low | Ultra-low-memory devices | Extreme memory efficiency and low accuracy | | **Q4_0** | Low | Low | ARM or low-memory devices | llama.cpp can optimize for ARM devices | --- ## **Included Files & Details** ### `Phi-4-mini-reasoning-bf16.gguf` - Model weights preserved in **BF16**. - Use this if you want to **requantize** the model into a different format. - Best if your device supports **BF16 acceleration**. ### `Phi-4-mini-reasoning-f16.gguf` - Model weights stored in **F16**. - Use if your device supports **FP16**, especially if BF16 is not available. ### `Phi-4-mini-reasoning-bf16-q8_0.gguf` - **Output & embeddings** remain in **BF16**. - All other layers quantized to **Q8_0**. - Use if your device supports **BF16** and you want a quantized version. ### `Phi-4-mini-reasoning-f16-q8_0.gguf` - **Output & embeddings** remain in **F16**. - All other layers quantized to **Q8_0**. ### `Phi-4-mini-reasoning-q4_k.gguf` - **Output & embeddings** quantized to **Q8_0**. - All other layers quantized to **Q4_K**. - Good for **CPU inference** with limited memory. ### `Phi-4-mini-reasoning-q4_k_s.gguf` - Smallest **Q4_K** variant, using less memory at the cost of accuracy. - Best for **very low-memory setups**. ### `Phi-4-mini-reasoning-q6_k.gguf` - **Output & embeddings** quantized to **Q8_0**. - All other layers quantized to **Q6_K** . ### `Phi-4-mini-reasoning-q8_0.gguf` - Fully **Q8** quantized model for better accuracy. - Requires **more memory** but offers higher precision. ### `Phi-4-mini-reasoning-iq3_xs.gguf` - **IQ3_XS** quantization, optimized for **extreme memory efficiency**. - Best for **ultra-low-memory devices**. ### `Phi-4-mini-reasoning-iq3_m.gguf` - **IQ3_M** quantization, offering a **medium block size** for better accuracy. - Suitable for **low-memory devices**. ### `Phi-4-mini-reasoning-q4_0.gguf` - Pure **Q4_0** quantization, optimized for **ARM devices**. - Best for **low-memory environments**. - Prefer IQ4_NL for better accuracy. 
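Recent llama-cpp-python releases can also pull a GGUF straight from the Hub instead of downloading it manually; the sketch below is hedged on that point, since the exact `from_pretrained` behaviour depends on the installed version. The repo id and filename come from this card and the list above.

```python
from llama_cpp import Llama

# Downloads the requested GGUF from the Hub on first use, then loads it for local inference.
llm = Llama.from_pretrained(
    repo_id="Mungert/Phi-4-mini-reasoning-GGUF",
    filename="Phi-4-mini-reasoning-q4_k.gguf",
    n_ctx=4096,
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "How to solve 3*x^2+4*x+5=1?"}],
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])
```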
# <span id="testllm" style="color: #7F7FFF;">🚀 If you find these models useful</span> ❤ **Please click "Like" if you find this useful!** Help me test my **AI-Powered Network Monitor Assistant** with **quantum-ready security checks**: 👉 [Quantum Network Monitor](https://readyforquantum.com/dashboard/?assistant=open&utm_source=huggingface&utm_medium=referral&utm_campaign=huggingface_repo_readme) 💬 **How to test**: Choose an **AI assistant type**: - `TurboLLM` (GPT-4o-mini) - `HugLLM` (Hugging Face open-source) - `TestLLM` (Experimental CPU-only) ### **What I’m Testing** I’m pushing the limits of **small open-source models for AI network monitoring**, specifically: - **Function calling** against live network services - **How small can a model go** while still handling: - Automated **Nmap scans** - **Quantum-readiness checks** - **Network Monitoring tasks** 🟡 **TestLLM** – Current experimental model (llama.cpp on 2 CPU threads): - ✅ **Zero-configuration setup** - ⏳ 30s load time (slow inference but **no API costs**) - 🔧 **Help wanted!** If you’re into **edge-device AI**, let’s collaborate! ### **Other Assistants** 🟢 **TurboLLM** – Uses **gpt-4o-mini** for: - **Creating custom cmd processors to run .net code on Quantum Network Monitor Agents** - **Real-time network diagnostics and monitoring** - **Security Audits** - **Penetration testing** (Nmap/Metasploit) 🔵 **HugLLM** – Latest open-source models: - 🌐 Runs on Hugging Face Inference API ### 💡 **Example commands you could test**: 1. `"Give me info on my website's SSL certificate"` 2. `"Check if my server is using quantum-safe encryption for communication"` 3. `"Run a comprehensive security audit on my server"` 4. `"Create a cmd processor to .. (whatever you want)"` Note: you need to install a Quantum Network Monitor Agent to run the .net code. This is a very flexible and powerful feature. Use with caution! ### Final Word I fund the servers used to create these model files, run the Quantum Network Monitor service, and pay for inference from Novita and OpenAI, all out of my own pocket. All the code behind the model creation and the Quantum Network Monitor project is [open source](https://github.com/Mungert69). Feel free to use whatever you find helpful. If you appreciate the work, please consider [buying me a coffee](https://www.buymeacoffee.com/mahadeva) ☕. Your support helps cover service costs and allows me to raise token limits for everyone. I'm also open to job opportunities or sponsorship. Thank you! 😊 ## Model Summary Phi-4-mini-reasoning is a lightweight open model built on synthetic data with a focus on high-quality, reasoning-dense data, further fine-tuned for more advanced math reasoning capabilities. The model belongs to the Phi-4 model family and supports a 128K token context length.
๐Ÿ“ฐ [Phi-4-mini-reasoning Blog](https://aka.ms/phi4-mini-reasoning/blog), and [Developer Article](https://techcommunity.microsoft.com/blog/azuredevcommunityblog/make-phi-4-mini-reasoning-more-powerful-with-industry-reasoning-on-edge-devices/4409764)<br> ๐Ÿ“– [Phi-4-mini-reasoning Technical Report](https://aka.ms/phi4-mini-reasoning/techreport) | [HF paper](https://huggingface.co/papers/2504.21233) <br> ๐Ÿ‘ฉโ€๐Ÿณ [Phi Cookbook](https://github.com/microsoft/PhiCookBook) <br> ๐Ÿก [Phi Portal](https://azure.microsoft.com/en-us/products/phi) <br> ๐Ÿ–ฅ๏ธ Try It [Azure](https://aka.ms/phi4-mini-reasoning/azure) <br> ๐ŸŽ‰**Phi-4 models**: [[Phi-4-reasoning](https://huggingface.co/microsoft/Phi-4-reasoning)] | [[multimodal-instruct](https://huggingface.co/microsoft/Phi-4-multimodal-instruct) | [onnx](https://huggingface.co/microsoft/Phi-4-multimodal-instruct-onnx)]; [[mini-instruct](https://huggingface.co/microsoft/Phi-4-mini-instruct) | [onnx](https://huggingface.co/microsoft/Phi-4-mini-instruct-onnx)] ## Intended Uses ### Primary Use Cases Phi-4-mini-reasoning is designed for multi-step, logic-intensive mathematical problem-solving tasks under memory/compute constrained environments and latency bound scenarios. Some of the use cases include formal proof generation, symbolic computation, advanced word problems, and a wide range of mathematical reasoning scenarios. These models excel at maintaining context across steps, applying structured logic, and delivering accurate, reliable solutions in domains that require deep analytical thinking. ### Use Case Considerations This model is designed and tested for math reasoning only. It is not specifically designed or evaluated for all downstream purposes. Developers should consider common limitations of language models, as well as performance difference across languages, as they select use cases, and evaluate and mitigate for accuracy, safety, and fairness before using within a specific downstream use case, particularly for high-risk scenarios. Developers should be aware of and adhere to applicable laws or regulations (including but not limited to privacy, trade compliance laws, etc.) that are relevant to their use case. ***Nothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the license the model is released under.*** ## Release Notes This release of Phi-4-mini-reasoning addresses user feedback and market demand for a compact reasoning model. It is a compact transformer-based language model optimized for mathematical reasoning, built to deliver high-quality, step-by-step problem solving in environments where computing or latency is constrained. The model is fine-tuned with synthetic math data from a more capable model (much larger, smarter, more accurate, and better at following instructions), which has resulted in enhanced reasoning performance. Phi-4-mini-reasoning balances reasoning ability with efficiency, making it potentially suitable for educational applications, embedded tutoring, and lightweight deployment on edge or mobile systems. If a critical issue is identified with Phi-4-mini-reasoning, it should be promptly reported through the MSRC Researcher Portal or secure@microsoft.com ### Model Quality To understand the capabilities, the 3.8B parameters Phi-4-mini-reasoning model was compared with a set of models over a variety of reasoning benchmarks. 
A high-level overview of the model quality is as follows: | Model | AIME | MATH-500 | GPQA Diamond | |------------------------------------|-------|----------|--------------| | o1-mini* | 63.6 | 90.0 | 60.0 | | DeepSeek-R1-Distill-Qwen-7B | 53.3 | 91.4 | 49.5 | | DeepSeek-R1-Distill-Llama-8B | 43.3 | 86.9 | 47.3 | | Bespoke-Stratos-7B* | 20.0 | 82.0 | 37.8 | | OpenThinker-7B* | 31.3 | 83.0 | 42.4 | | Llama-3.2-3B-Instruct | 6.7 | 44.4 | 25.3 | | Phi-4-Mini (base model, 3.8B) | 10.0 | 71.8 | 36.9 | |**Phi-4-mini-reasoning (3.8B)** | **57.5** | **94.6** | **52.0** | Overall, the model with only 3.8B-param achieves a similar level of multilingual language understanding and reasoning ability as much larger models. However, it is still fundamentally limited by its size for certain tasks. The model simply does not have the capacity to store too much factual knowledge, therefore, users may experience factual incorrectness. However, it may be possible to resolve such weakness by augmenting Phi-4 with a search engine, particularly when using the model under RAG settings. ## Usage ### Tokenizer Phi-4-mini-reasoning supports a vocabulary size of up to `200064` tokens. The [tokenizer files](https://huggingface.co/microsoft/Phi-4-mini-reasoning/blob/main/added_tokens.json) already provide placeholder tokens that can be used for downstream fine-tuning, but they can also be extended up to the model's vocabulary size. ### Input Formats Given the nature of the training data, the Phi-4-mini-instruct model is best suited for prompts using specific formats. Below are the two primary formats: #### Chat format This format is used for general conversation and instructions: ```yaml <|system|>Your name is Phi, an AI math expert developed by Microsoft.<|end|><|user|>How to solve 3*x^2+4*x+5=1?<|end|><|assistant|> ``` ### Inference with transformers Phi-4-mini-reasoning has been integrated in the `4.51.3` version of `transformers`. The current `transformers` version can be verified with: `pip list | grep transformers`. Python 3.8 and 3.10 will work best. List of required packages: ``` flash_attn==2.7.4.post1 torch==2.5.1 transformers==4.51.3 accelerate==1.3.0 ``` Phi-4-mini-reasoning is also available in [Azure AI Studio](https://aka.ms/phi-4-mini-reasoning/azure) #### Example After obtaining the Phi-4-mini-instruct model checkpoints, users can use this sample code for inference. ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline torch.random.manual_seed(0) model_id = "microsoft/Phi-4-mini-reasoning" model = AutoModelForCausalLM.from_pretrained( model_id, device_map="cuda", torch_dtype="auto", trust_remote_code=True, ) tokenizer = AutoTokenizer.from_pretrained(model_id) messages = [{ "role": "user", "content": "How to solve 3*x^2+4*x+5=1?" }] inputs = tokenizer.apply_chat_template( messages, add_generation_prompt=True, return_dict=True, return_tensors="pt", ) outputs = model.generate( **inputs.to(model.device), max_new_tokens=32768, temperature=0.8, top_p=0.95, do_sample=True, ) outputs = tokenizer.batch_decode(outputs[:, inputs["input_ids"].shape[-1]:]) print(outputs[0]) ``` ## Training ### Model + **Architecture:** Phi-4-mini-reasoning shares the same architecture as Phi-4-Mini, which has 3.8B parameters and is a dense decoder-only Transformer model. When compared with Phi-3.5-Mini, the major changes with Phi-4-Mini are 200K vocabulary, grouped-query attention, and shared input and output embedding.<br> + **Inputs:** Text. 
It is best suited for prompts using the chat format.<br> + **Context length:** 128K tokens<br> + **GPUs:** 128 H100-80G<br> + **Training time:** 2 days<br> + **Training data:** 150B tokens<br> + **Outputs:** Generated text<br> + **Dates:** Trained in February 2024<br> + **Status:** This is a static model trained on offline datasets with the cutoff date of February 2025 for publicly available data.<br> + **Supported languages:** English<br> + **Release date:** April 2025<br> ### Training Datasets The training data for Phi-4-mini-reasoning consists exclusively of synthetic mathematical content generated by a stronger and more advanced reasoning model, Deepseek-R1. The objective is to distill knowledge from this model. This synthetic dataset comprises over one million diverse math problems spanning multiple levels of difficulty (from middle school to Ph.D. level). For each problem in the synthetic dataset, eight distinct solutions (rollouts) were sampled, and only those verified as correct were retained, resulting in approximately 30 billion tokens of math content. The dataset integrates three primary components: 1) a curated selection of high-quality, publicly available math questions and a part of the SFT(Supervised Fine-Tuning) data that was used to train the base Phi-4-Mini model; 2) an extensive collection of synthetic math data generated by the Deepseek-R1 model, designed specifically for high-quality supervised fine-tuning and model distillation; and 3) a balanced set of correct and incorrect answers used to construct preference data aimed at enhancing Phi-4-mini-reasoning's reasoning capabilities by learning more effective reasoning trajectories ## Software * [PyTorch](https://github.com/pytorch/pytorch) * [Transformers](https://github.com/huggingface/transformers) * [Flash-Attention](https://github.com/HazyResearch/flash-attention) ## Hardware Note that by default, the Phi-4-mini-reasoning model uses flash attention, which requires certain types of GPU hardware to run. We have tested on the following GPU types: * NVIDIA A100 * NVIDIA H100 If you want to run the model on: * NVIDIA V100 or earlier generation GPUs: call AutoModelForCausalLM.from_pretrained() with attn_implementation="eager" ## Safety Evaluation and Red-Teaming The Phi-4 family of models has adopted a robust safety post-training approach. This approach leverages a variety of both open-source and in-house generated datasets. The overall technique employed to do the safety alignment is a combination of SFT, DPO (Direct Preference Optimization), and RLHF (Reinforcement Learning from Human Feedback) approaches by utilizing human-labeled and synthetic English-language datasets, including publicly available datasets focusing on helpfulness and harmlessness, as well as various questions and answers targeted to multiple safety categories. Phi-4-Mini-Reasoning was developed in accordance with Microsoft's responsible AI principles. Potential safety risks in the modelโ€™s responses were assessed using the Azure AI Foundryโ€™s Risk and Safety Evaluation framework, focusing on harmful content, direct jailbreak, and model groundedness. The Phi-4-Mini-Reasoning Model Card contains additional information about our approach to safety and responsible AI considerations that developers should be aware of when using this model. ## Responsible AI Considerations Like other language models, the Phi family of models can potentially behave in ways that are unfair, unreliable, or offensive. 
Some of the limiting behaviors to be aware of include: + Quality of Service: The Phi models are trained primarily on English text and some additional multilingual text. Languages other than English will experience worse performance as well as performance disparities across non-English. English language varieties with less representation in the training data might experience worse performance than standard American English. + Multilingual performance and safety gaps: We believe it is important to make language models more widely available across different languages, but the Phi 4 models still exhibit challenges common across multilingual releases. As with any deployment of LLMs, developers will be better positioned to test for performance or safety gaps for their linguistic and cultural context and customize the model with additional fine-tuning and appropriate safeguards. + Representation of Harms & Perpetuation of Stereotypes: These models can over- or under-represent groups of people, erase representation of some groups, or reinforce demeaning or negative stereotypes. Despite safety post-training, these limitations may still be present due to differing levels of representation of different groups, cultural contexts, or prevalence of examples of negative stereotypes in training data that reflect real-world patterns and societal biases. + Inappropriate or Offensive Content: These models may produce other types of inappropriate or offensive content, which may make it inappropriate to deploy for sensitive contexts without additional mitigations that are specific to the case. + Information Reliability: Language models can generate nonsensical content or fabricate content that might sound reasonable but is inaccurate or outdated. + Election Information Reliability : The model has an elevated defect rate when responding to election-critical queries, which may result in incorrect or unauthoritative election critical information being presented. We are working to improve the model's performance in this area. Users should verify information related to elections with the election authority in their region. + Limited Scope for Code: The majority of Phi 4 training data is based in Python and uses common packages such as "typing, math, random, collections, datetime, itertools". If the model generates Python scripts that utilize other packages or scripts in other languages, it is strongly recommended that users manually verify all API uses. + Long Conversation: Phi 4 models, like other models, can in some cases generate responses that are repetitive, unhelpful, or inconsistent in very long chat sessions in both English and non-English languages. Developers are encouraged to place appropriate mitigations, like limiting conversation turns to account for the possible conversational drift. Developers should apply responsible AI best practices, including mapping, measuring, and mitigating risks associated with their specific use case and cultural, linguistic context. Phi 4 family of models are general purpose models. As developers plan to deploy these models for specific use cases, they are encouraged to fine-tune the models for their use case and leverage the models as part of broader AI systems with language-specific safeguards in place. Important areas for consideration include: + Allocation: Models may not be suitable for scenarios that could have consequential impact on legal status or the allocation of resources or life opportunities (ex: housing, employment, credit, etc.) 
without further assessments and additional debiasing techniques. + High-Risk Scenarios: Developers should assess the suitability of using models in high-risk scenarios where unfair, unreliable or offensive outputs might be extremely costly or lead to harm. This includes providing advice in sensitive or expert domains where accuracy and reliability are critical (ex: legal or health advice). Additional safeguards should be implemented at the application level according to the deployment context. + Misinformation: Models may produce inaccurate information. Developers should follow transparency best practices and inform end-users they are interacting with an AI system. At the application level, developers can build feedback mechanisms and pipelines to ground responses in use-case specific, contextual information, a technique known as Retrieval Augmented Generation (RAG). + Generation of Harmful Content: Developers should assess outputs for their context and use available safety classifiers or custom solutions appropriate for their use case. + Misuse: Other forms of misuse such as fraud, spam, or malware production may be possible, and developers should ensure that their applications do not violate applicable laws and regulations. ## License The model is licensed under the [MIT license](./LICENSE). ## Trademarks This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must followโ€ฏ[Microsoftโ€™s Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks). Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-partyโ€™s policies. ## Appendix A: Benchmark Methodology We include a brief word on methodology here - and in particular, how we think about optimizing prompts. In an ideal world, we would never change any prompts in our benchmarks to ensure it is always an apples-to-apples comparison when comparing different models. Indeed, this is our default approach, and is the case in the vast majority of models we have run to date. For all benchmarks, we consider using the same generation configuration such as max sequence length (32768), the same temperature for the fair comparison. Benchmark datasets We evaluate the model with three of the most popular math benchmarks where the strongest reasoning models are competing together. Specifically: - Math-500: This benchmark consists of 500 challenging math problems designed to test the model's ability to perform complex mathematical reasoning and problem-solving. - AIME 2024: The American Invitational Mathematics Examination (AIME) is a highly regarded math competition that features a series of difficult problems aimed at assessing advanced mathematical skills and logical reasoning. - GPQA Diamond: The Graduate-Level Google-Proof Q&A (GPQA) Diamond benchmark focuses on evaluating the model's ability to understand and solve a wide range of mathematical questions, including both straightforward calculations and more intricate problem-solving tasks.
Mungert/Qwen3-14B-GGUF
Mungert
2025-06-15T19:45:14Z
909
6
transformers
[ "transformers", "gguf", "text-generation", "arxiv:2309.00071", "base_model:Qwen/Qwen3-14B-Base", "base_model:quantized:Qwen/Qwen3-14B-Base", "license:apache-2.0", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
text-generation
2025-04-30T16:26:11Z
--- library_name: transformers license: apache-2.0 license_link: https://huggingface.co/Qwen/Qwen3-14B/blob/main/LICENSE pipeline_tag: text-generation base_model: - Qwen/Qwen3-14B-Base --- # <span style="color: #7FFF7F;">Qwen3-14B GGUF Models</span> ## <span style="color: #7F7FFF;">Model Generation Details</span> This model was generated using [llama.cpp](https://github.com/ggerganov/llama.cpp) at commit [`19e899c`](https://github.com/ggerganov/llama.cpp/commit/19e899ce21a7c9ffcf8bb2b22269a75f6e078f8f). ## <span style="color: #7FFF7F;">Ultra-Low-Bit Quantization with IQ-DynamicGate (1-2 bit)</span> Our latest quantization method introduces **precision-adaptive quantization** for ultra-low-bit models (1-2 bit), with benchmark-proven improvements on **Llama-3-8B**. This approach uses layer-specific strategies to preserve accuracy while maintaining extreme memory efficiency. ### **Benchmark Context** All tests conducted on **Llama-3-8B-Instruct** using: - Standard perplexity evaluation pipeline - 2048-token context window - Same prompt set across all quantizations ### **Method** - **Dynamic Precision Allocation**: - First/Last 25% of layers โ†’ IQ4_XS (selected layers) - Middle 50% โ†’ IQ2_XXS/IQ3_S (increase efficiency) - **Critical Component Protection**: - Embeddings/output layers use Q5_K - Reduces error propagation by 38% vs standard 1-2bit ### **Quantization Performance Comparison (Llama-3-8B)** | Quantization | Standard PPL | DynamicGate PPL | ฮ” PPL | Std Size | DG Size | ฮ” Size | Std Speed | DG Speed | |--------------|--------------|------------------|---------|----------|---------|--------|-----------|----------| | IQ2_XXS | 11.30 | 9.84 | -12.9% | 2.5G | 2.6G | +0.1G | 234s | 246s | | IQ2_XS | 11.72 | 11.63 | -0.8% | 2.7G | 2.8G | +0.1G | 242s | 246s | | IQ2_S | 14.31 | 9.02 | -36.9% | 2.7G | 2.9G | +0.2G | 238s | 244s | | IQ1_M | 27.46 | 15.41 | -43.9% | 2.2G | 2.5G | +0.3G | 206s | 212s | | IQ1_S | 53.07 | 32.00 | -39.7% | 2.1G | 2.4G | +0.3G | 184s | 209s | **Key**: - PPL = Perplexity (lower is better) - ฮ” PPL = Percentage change from standard to DynamicGate - Speed = Inference time (CPU avx2, 2048 token context) - Size differences reflect mixed quantization overhead **Key Improvements:** - ๐Ÿ”ฅ **IQ1_M** shows massive 43.9% perplexity reduction (27.46 โ†’ 15.41) - ๐Ÿš€ **IQ2_S** cuts perplexity by 36.9% while adding only 0.2GB - โšก **IQ1_S** maintains 39.7% better accuracy despite 1-bit quantization **Tradeoffs:** - All variants have modest size increases (0.1-0.3GB) - Inference speeds remain comparable (<5% difference) ### **When to Use These Models** ๐Ÿ“Œ **Fitting models into GPU VRAM** โœ” **Memory-constrained deployments** โœ” **Cpu and Edge Devices** where 1-2bit errors can be tolerated โœ” **Research** into ultra-low-bit quantization ## **Choosing the Right Model Format** Selecting the correct model format depends on your **hardware capabilities** and **memory constraints**. ### **BF16 (Brain Float 16) โ€“ Use if BF16 acceleration is available** - A 16-bit floating-point format designed for **faster computation** while retaining good precision. - Provides **similar dynamic range** as FP32 but with **lower memory usage**. - Recommended if your hardware supports **BF16 acceleration** (check your device's specs). - Ideal for **high-performance inference** with **reduced memory footprint** compared to FP32. ๐Ÿ“Œ **Use BF16 if:** โœ” Your hardware has native **BF16 support** (e.g., newer GPUs, TPUs). โœ” You want **higher precision** while saving memory. 
โœ” You plan to **requantize** the model into another format. ๐Ÿ“Œ **Avoid BF16 if:** โŒ Your hardware does **not** support BF16 (it may fall back to FP32 and run slower). โŒ You need compatibility with older devices that lack BF16 optimization. --- ### **F16 (Float 16) โ€“ More widely supported than BF16** - A 16-bit floating-point **high precision** but with less of range of values than BF16. - Works on most devices with **FP16 acceleration support** (including many GPUs and some CPUs). - Slightly lower numerical precision than BF16 but generally sufficient for inference. ๐Ÿ“Œ **Use F16 if:** โœ” Your hardware supports **FP16** but **not BF16**. โœ” You need a **balance between speed, memory usage, and accuracy**. โœ” You are running on a **GPU** or another device optimized for FP16 computations. ๐Ÿ“Œ **Avoid F16 if:** โŒ Your device lacks **native FP16 support** (it may run slower than expected). โŒ You have memory limitations. --- ### **Quantized Models (Q4_K, Q6_K, Q8, etc.) โ€“ For CPU & Low-VRAM Inference** Quantization reduces model size and memory usage while maintaining as much accuracy as possible. - **Lower-bit models (Q4_K)** โ†’ **Best for minimal memory usage**, may have lower precision. - **Higher-bit models (Q6_K, Q8_0)** โ†’ **Better accuracy**, requires more memory. ๐Ÿ“Œ **Use Quantized Models if:** โœ” You are running inference on a **CPU** and need an optimized model. โœ” Your device has **low VRAM** and cannot load full-precision models. โœ” You want to reduce **memory footprint** while keeping reasonable accuracy. ๐Ÿ“Œ **Avoid Quantized Models if:** โŒ You need **maximum accuracy** (full-precision models are better for this). โŒ Your hardware has enough VRAM for higher-precision formats (BF16/F16). --- ### **Very Low-Bit Quantization (IQ3_XS, IQ3_S, IQ3_M, Q4_K, Q4_0)** These models are optimized for **extreme memory efficiency**, making them ideal for **low-power devices** or **large-scale deployments** where memory is a critical constraint. - **IQ3_XS**: Ultra-low-bit quantization (3-bit) with **extreme memory efficiency**. - **Use case**: Best for **ultra-low-memory devices** where even Q4_K is too large. - **Trade-off**: Lower accuracy compared to higher-bit quantizations. - **IQ3_S**: Small block size for **maximum memory efficiency**. - **Use case**: Best for **low-memory devices** where **IQ3_XS** is too aggressive. - **IQ3_M**: Medium block size for better accuracy than **IQ3_S**. - **Use case**: Suitable for **low-memory devices** where **IQ3_S** is too limiting. - **Q4_K**: 4-bit quantization with **block-wise optimization** for better accuracy. - **Use case**: Best for **low-memory devices** where **Q6_K** is too large. - **Q4_0**: Pure 4-bit quantization, optimized for **ARM devices**. - **Use case**: Best for **ARM-based devices** or **low-memory environments**. 
--- ### **Summary Table: Model Format Selection** | Model Format | Precision | Memory Usage | Device Requirements | Best Use Case | |--------------|------------|---------------|----------------------|---------------| | **BF16** | Highest | High | BF16-supported GPU/CPUs | High-speed inference with reduced memory | | **F16** | High | High | FP16-supported devices | GPU inference when BF16 isn't available | | **Q4_K** | Medium Low | Low | CPU or Low-VRAM devices | Best for memory-constrained environments | | **Q6_K** | Medium | Moderate | CPU with more memory | Better accuracy while still being quantized | | **Q8_0** | High | Moderate | CPU or GPU with enough VRAM | Best accuracy among quantized models | | **IQ3_XS** | Very Low | Very Low | Ultra-low-memory devices | Extreme memory efficiency and low accuracy | | **Q4_0** | Low | Low | ARM or low-memory devices | llama.cpp can optimize for ARM devices | --- ## **Included Files & Details** ### `Qwen3-14B-bf16.gguf` - Model weights preserved in **BF16**. - Use this if you want to **requantize** the model into a different format. - Best if your device supports **BF16 acceleration**. ### `Qwen3-14B-f16.gguf` - Model weights stored in **F16**. - Use if your device supports **FP16**, especially if BF16 is not available. ### `Qwen3-14B-bf16-q8_0.gguf` - **Output & embeddings** remain in **BF16**. - All other layers quantized to **Q8_0**. - Use if your device supports **BF16** and you want a quantized version. ### `Qwen3-14B-f16-q8_0.gguf` - **Output & embeddings** remain in **F16**. - All other layers quantized to **Q8_0**. ### `Qwen3-14B-q4_k.gguf` - **Output & embeddings** quantized to **Q8_0**. - All other layers quantized to **Q4_K**. - Good for **CPU inference** with limited memory. ### `Qwen3-14B-q4_k_s.gguf` - Smallest **Q4_K** variant, using less memory at the cost of accuracy. - Best for **very low-memory setups**. ### `Qwen3-14B-q6_k.gguf` - **Output & embeddings** quantized to **Q8_0**. - All other layers quantized to **Q6_K** . ### `Qwen3-14B-q8_0.gguf` - Fully **Q8** quantized model for better accuracy. - Requires **more memory** but offers higher precision. ### `Qwen3-14B-iq3_xs.gguf` - **IQ3_XS** quantization, optimized for **extreme memory efficiency**. - Best for **ultra-low-memory devices**. ### `Qwen3-14B-iq3_m.gguf` - **IQ3_M** quantization, offering a **medium block size** for better accuracy. - Suitable for **low-memory devices**. ### `Qwen3-14B-q4_0.gguf` - Pure **Q4_0** quantization, optimized for **ARM devices**. - Best for **low-memory environments**. - Prefer IQ4_NL for better accuracy. # <span id="testllm" style="color: #7F7FFF;">๐Ÿš€ If you find these models useful</span> โค **Please click "Like" if you find this useful!** Help me test my **AI-Powered Network Monitor Assistant** with **quantum-ready security checks**: ๐Ÿ‘‰ [Quantum Network Monitor](https://readyforquantum.com/dashboard) ๐Ÿ’ฌ **How to test**: 1. Click the **chat icon** (bottom right on any page) 2. 
Choose an **AI assistant type**: - `TurboLLM` (GPT-4-mini) - `FreeLLM` (Open-source) - `TestLLM` (Experimental CPU-only) ### **What I'm Testing** I'm pushing the limits of **small open-source models for AI network monitoring**, specifically: - **Function calling** against live network services - **How small can a model go** while still handling: - Automated **Nmap scans** - **Quantum-readiness checks** - **Metasploit integration** 🟡 **TestLLM** – Current experimental model (llama.cpp on 6 CPU threads): - ✅ **Zero-configuration setup** - ⏳ 30s load time (slow inference but **no API costs**) - 🔧 **Help wanted!** If you're into **edge-device AI**, let's collaborate! ### **Other Assistants** 🟢 **TurboLLM** – Uses **gpt-4-mini** for: - **Real-time network diagnostics** - **Automated penetration testing** (Nmap/Metasploit) - 🔑 Get more tokens by [downloading our Quantum Network Monitor Agent](https://readyforquantum.com/download/?utm_source=huggingface&utm_medium=referral&utm_campaign=huggingface_repo_readme) 🔵 **HugLLM** – Open-source models (≈8B params): - **2x more tokens** than TurboLLM - **AI-powered log analysis** - 🌐 Runs on Hugging Face Inference API ### 💡 **Example AI Commands to Test**: 1. `"Give me info on my website's SSL certificate"` 2. `"Check if my server is using quantum-safe encryption for communication"` 3. `"Run a quick Nmap vulnerability test"` 4. `"Create a cmd processor to .. (whatever you want)"` Note: you need to install a Quantum Network Monitor Agent to run the .NET code on. This is a very flexible and powerful feature. Use with caution! ### Final word I fund the servers used to create the model files, run the Quantum Network Monitor service, and pay for inference from Novita and OpenAI, all out of my own pocket. All of the code for creating the models and the work I have done with Quantum Network Monitor is [open source](https://github.com/Mungert69). Feel free to use what you find useful. Please support my work and consider [buying me a coffee](https://www.buymeacoffee.com/mahadeva). This will help me pay for the services and increase the token limits for everyone. Thank you :) # Qwen3-14B <a href="https://chat.qwen.ai/" target="_blank" style="margin: 2px;"> <img alt="Chat" src="https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5" style="display: inline-block; vertical-align: middle;"/> </a> ## Qwen3 Highlights Qwen3 is the latest generation of large language models in the Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models. Built upon extensive training, Qwen3 delivers groundbreaking advancements in reasoning, instruction-following, agent capabilities, and multilingual support, with the following key features: - **Unique support for seamless switching between thinking mode** (for complex logical reasoning, math, and coding) and **non-thinking mode** (for efficient, general-purpose dialogue) **within a single model**, ensuring optimal performance across various scenarios. - **Significant enhancement of its reasoning capabilities**, surpassing previous QwQ (in thinking mode) and Qwen2.5 instruct models (in non-thinking mode) on mathematics, code generation, and commonsense logical reasoning. - **Superior human preference alignment**, excelling in creative writing, role-playing, multi-turn dialogues, and instruction following, to deliver a more natural, engaging, and immersive conversational experience.
- **Expertise in agent capabilities**, enabling precise integration with external tools in both thinking and non-thinking modes and achieving leading performance among open-source models in complex agent-based tasks. - **Support of 100+ languages and dialects** with strong capabilities for **multilingual instruction following** and **translation**. ## Model Overview **Qwen3-14B** has the following features: - Type: Causal Language Models - Training Stage: Pretraining & Post-training - Number of Parameters: 14.8B - Number of Parameters (Non-Embedding): 13.2B - Number of Layers: 40 - Number of Attention Heads (GQA): 40 for Q and 8 for KV - Context Length: 32,768 natively and [131,072 tokens with YaRN](#processing-long-texts). For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our [blog](https://qwenlm.github.io/blog/qwen3/), [GitHub](https://github.com/QwenLM/Qwen3), and [Documentation](https://qwen.readthedocs.io/en/latest/). ## Quickstart The code for Qwen3 has been merged into the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`. With `transformers<4.51.0`, you will encounter the following error: ``` KeyError: 'qwen3' ``` The following code snippet illustrates how to use the model to generate content based on given inputs. ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "Qwen/Qwen3-14B" # load the tokenizer and the model tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype="auto", device_map="auto" ) # prepare the model input prompt = "Give me a short introduction to large language model." messages = [ {"role": "user", "content": prompt} ] text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True, enable_thinking=True # Switches between thinking and non-thinking modes. Default is True. ) model_inputs = tokenizer([text], return_tensors="pt").to(model.device) # conduct text completion generated_ids = model.generate( **model_inputs, max_new_tokens=32768 ) output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist() # parsing thinking content try: # rindex finding 151668 (</think>) index = len(output_ids) - output_ids[::-1].index(151668) except ValueError: index = 0 thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n") content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n") print("thinking content:", thinking_content) print("content:", content) ``` For deployment, you can use `sglang>=0.4.6.post1` or `vllm>=0.8.5` to create an OpenAI-compatible API endpoint: - SGLang: ```shell python -m sglang.launch_server --model-path Qwen/Qwen3-14B --reasoning-parser qwen3 ``` - vLLM: ```shell vllm serve Qwen/Qwen3-14B --enable-reasoning --reasoning-parser deepseek_r1 ``` For local use, applications such as Ollama, LMStudio, MLX-LM, llama.cpp, and KTransformers also support Qwen3. ## Switching Between Thinking and Non-Thinking Mode > [!TIP] > The `enable_thinking` switch is also available in APIs created by SGLang and vLLM. > Please refer to our documentation for [SGLang](https://qwen.readthedocs.io/en/latest/deployment/sglang.html#thinking-non-thinking-modes) and [vLLM](https://qwen.readthedocs.io/en/latest/deployment/vllm.html#thinking-non-thinking-modes) users. ### `enable_thinking=True` By default, Qwen3 has thinking capabilities enabled, similar to QwQ-32B.
This means the model will use its reasoning abilities to enhance the quality of generated responses. For example, when explicitly setting `enable_thinking=True` or leaving it as the default value in `tokenizer.apply_chat_template`, the model will engage its thinking mode. ```python text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True, enable_thinking=True # True is the default value for enable_thinking ) ``` In this mode, the model will generate think content wrapped in a `<think>...</think>` block, followed by the final response. > [!NOTE] > For thinking mode, use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0` (the default setting in `generation_config.json`). **DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions. For more detailed guidance, please refer to the [Best Practices](#best-practices) section. ### `enable_thinking=False` We provide a hard switch to strictly disable the model's thinking behavior, aligning its functionality with the previous Qwen2.5-Instruct models. This mode is particularly useful in scenarios where disabling thinking is essential for enhancing efficiency. ```python text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True, enable_thinking=False # Setting enable_thinking=False disables thinking mode ) ``` In this mode, the model will not generate any think content and will not include a `<think>...</think>` block. > [!NOTE] > For non-thinking mode, we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`. For more detailed guidance, please refer to the [Best Practices](#best-practices) section. ### Advanced Usage: Switching Between Thinking and Non-Thinking Modes via User Input We provide a soft switch mechanism that allows users to dynamically control the model's behavior when `enable_thinking=True`. Specifically, you can add `/think` and `/no_think` to user prompts or system messages to switch the model's thinking mode from turn to turn. The model will follow the most recent instruction in multi-turn conversations. Here is an example of a multi-turn conversation: ```python from transformers import AutoModelForCausalLM, AutoTokenizer class QwenChatbot: def __init__(self, model_name="Qwen/Qwen3-14B"): self.tokenizer = AutoTokenizer.from_pretrained(model_name) self.model = AutoModelForCausalLM.from_pretrained(model_name) self.history = [] def generate_response(self, user_input): messages = self.history + [{"role": "user", "content": user_input}] text = self.tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) inputs = self.tokenizer(text, return_tensors="pt") response_ids = self.model.generate(**inputs, max_new_tokens=32768)[0][len(inputs.input_ids[0]):].tolist() response = self.tokenizer.decode(response_ids, skip_special_tokens=True) # Update history self.history.append({"role": "user", "content": user_input}) self.history.append({"role": "assistant", "content": response}) return response # Example Usage if __name__ == "__main__": chatbot = QwenChatbot() # First input (without /think or /no_think tags, thinking mode is enabled by default) user_input_1 = "How many r's in strawberries?" print(f"User: {user_input_1}") response_1 = chatbot.generate_response(user_input_1) print(f"Bot: {response_1}") print("----------------------") # Second input with /no_think user_input_2 = "Then, how many r's in blueberries? 
/no_think" print(f"User: {user_input_2}") response_2 = chatbot.generate_response(user_input_2) print(f"Bot: {response_2}") print("----------------------") # Third input with /think user_input_3 = "Really? /think" print(f"User: {user_input_3}") response_3 = chatbot.generate_response(user_input_3) print(f"Bot: {response_3}") ``` > [!NOTE] > For API compatibility, when `enable_thinking=True`, regardless of whether the user uses `/think` or `/no_think`, the model will always output a block wrapped in `<think>...</think>`. However, the content inside this block may be empty if thinking is disabled. > When `enable_thinking=False`, the soft switches are not valid. Regardless of any `/think` or `/no_think` tags input by the user, the model will not generate think content and will not include a `<think>...</think>` block. ## Agentic Use Qwen3 excels in tool calling capabilities. We recommend using [Qwen-Agent](https://github.com/QwenLM/Qwen-Agent) to make the best use of agentic ability of Qwen3. Qwen-Agent encapsulates tool-calling templates and tool-calling parsers internally, greatly reducing coding complexity. To define the available tools, you can use the MCP configuration file, use the integrated tool of Qwen-Agent, or integrate other tools by yourself. ```python from qwen_agent.agents import Assistant # Define LLM llm_cfg = { 'model': 'Qwen3-14B', # Use the endpoint provided by Alibaba Model Studio: # 'model_type': 'qwen_dashscope', # 'api_key': os.getenv('DASHSCOPE_API_KEY'), # Use a custom endpoint compatible with OpenAI API: 'model_server': 'http://localhost:8000/v1', # api_base 'api_key': 'EMPTY', # Other parameters: # 'generate_cfg': { # # Add: When the response content is `<think>this is the thought</think>this is the answer; # # Do not add: When the response has been separated by reasoning_content and content. # 'thought_in_content': True, # }, } # Define Tools tools = [ {'mcpServers': { # You can specify the MCP configuration file 'time': { 'command': 'uvx', 'args': ['mcp-server-time', '--local-timezone=Asia/Shanghai'] }, "fetch": { "command": "uvx", "args": ["mcp-server-fetch"] } } }, 'code_interpreter', # Built-in tools ] # Define Agent bot = Assistant(llm=llm_cfg, function_list=tools) # Streaming generation messages = [{'role': 'user', 'content': 'https://qwenlm.github.io/blog/ Introduce the latest developments of Qwen'}] for responses in bot.run(messages=messages): pass print(responses) ``` ## Processing Long Texts Qwen3 natively supports context lengths of up to 32,768 tokens. For conversations where the total length (including both input and output) significantly exceeds this limit, we recommend using RoPE scaling techniques to handle long texts effectively. We have validated the model's performance on context lengths of up to 131,072 tokens using the [YaRN](https://arxiv.org/abs/2309.00071) method. YaRN is currently supported by several inference frameworks, e.g., `transformers` and `llama.cpp` for local use, `vllm` and `sglang` for deployment. In general, there are two approaches to enabling YaRN for supported frameworks: - Modifying the model files: In the `config.json` file, add the `rope_scaling` fields: ```json { ..., "rope_scaling": { "rope_type": "yarn", "factor": 4.0, "original_max_position_embeddings": 32768 } } ``` For `llama.cpp`, you need to regenerate the GGUF file after the modification. - Passing command line arguments: For `vllm`, you can use ```shell vllm serve ... 
--rope-scaling '{"rope_type":"yarn","factor":4.0,"original_max_position_embeddings":32768}' --max-model-len 131072 ``` For `sglang`, you can use ```shell python -m sglang.launch_server ... --json-model-override-args '{"rope_scaling":{"rope_type":"yarn","factor":4.0,"original_max_position_embeddings":32768}}' ``` For `llama-server` from `llama.cpp`, you can use ```shell llama-server ... --rope-scaling yarn --rope-scale 4 --yarn-orig-ctx 32768 ``` > [!IMPORTANT] > If you encounter the following warning > ``` > Unrecognized keys in `rope_scaling` for 'rope_type'='yarn': {'original_max_position_embeddings'} > ``` > please upgrade `transformers>=4.51.0`. > [!NOTE] > All the notable open-source frameworks implement static YaRN, which means the scaling factor remains constant regardless of input length, **potentially impacting performance on shorter texts.** > We advise adding the `rope_scaling` configuration only when processing long contexts is required. > It is also recommended to modify the `factor` as needed. For example, if the typical context length for your application is 65,536 tokens, it would be better to set `factor` as 2.0. > [!NOTE] > The default `max_position_embeddings` in `config.json` is set to 40,960. This allocation includes reserving 32,768 tokens for outputs and 8,192 tokens for typical prompts, which is sufficient for most scenarios involving short text processing. If the average context length does not exceed 32,768 tokens, we do not recommend enabling YaRN in this scenario, as it may potentially degrade model performance. > [!TIP] > The endpoint provided by Alibaba Model Studio supports dynamic YaRN by default and no extra configuration is needed. ## Best Practices To achieve optimal performance, we recommend the following settings: 1. **Sampling Parameters**: - For thinking mode (`enable_thinking=True`), use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0`. **DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions. - For non-thinking mode (`enable_thinking=False`), we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`. - For supported frameworks, you can adjust the `presence_penalty` parameter between 0 and 2 to reduce endless repetitions. However, using a higher value may occasionally result in language mixing and a slight decrease in model performance. 2. **Adequate Output Length**: We recommend using an output length of 32,768 tokens for most queries. For benchmarking on highly complex problems, such as those found in math and programming competitions, we suggest setting the max output length to 38,912 tokens. This provides the model with sufficient space to generate detailed and comprehensive responses, thereby enhancing its overall performance. 3. **Standardize Output Format**: We recommend using prompts to standardize model outputs when benchmarking. - **Math Problems**: Include "Please reason step by step, and put your final answer within \boxed{}." in the prompt. - **Multiple-Choice Questions**: Add the following JSON structure to the prompt to standardize responses: "Please show your choice in the `answer` field with only the choice letter, e.g., `"answer": "C"`." 4. **No Thinking Content in History**: In multi-turn conversations, the historical model output should only include the final output part and does not need to include the thinking content. It is implemented in the provided chat template in Jinja2. 
However, for frameworks that do not directly use the Jinja2 chat template, it is up to the developers to ensure that the best practice is followed. ### Citation If you find our work helpful, feel free to give us a cite. ``` @misc{qwen3, title = {Qwen3}, url = {https://qwenlm.github.io/blog/qwen3/}, author = {Qwen Team}, month = {April}, year = {2025} } ```
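To make Best Practices 1 and 4 above concrete, here is a minimal sketch that samples with the recommended thinking-mode parameters and keeps only the final answer in the conversation history. It builds on the Quickstart snippet; `model`, `tokenizer`, `messages`, and `model_inputs` are assumed to be defined as shown there.

```python
import re

# Best practice 1: thinking-mode sampling (Temperature=0.6, TopP=0.95, TopK=20, MinP=0 by default);
# greedy decoding is explicitly discouraged.
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=32768,
    do_sample=True,
    temperature=0.6,
    top_p=0.95,
    top_k=20,
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
full_text = tokenizer.decode(output_ids, skip_special_tokens=True)

# Best practice 4: store only the final answer in history, not the <think>...</think> block.
final_answer = re.sub(r"<think>.*?</think>", "", full_text, flags=re.DOTALL).strip()
messages.append({"role": "assistant", "content": final_answer})
```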
Mungert/Qwen3-8B-abliterated-GGUF
Mungert
2025-06-15T19:45:10Z
1,044
10
transformers
[ "transformers", "gguf", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2025-04-30T07:26:51Z
--- library_name: transformers tags: [] --- # <span style="color: #7FFF7F;">Qwen3-8B-abliterated GGUF Models</span> ## <span style="color: #7F7FFF;">Model Generation Details</span> This model was generated using [llama.cpp](https://github.com/ggerganov/llama.cpp) at commit [`19e899c`](https://github.com/ggerganov/llama.cpp/commit/19e899ce21a7c9ffcf8bb2b22269a75f6e078f8f). ## <span style="color: #7FFF7F;">Ultra-Low-Bit Quantization with IQ-DynamicGate (1-2 bit)</span> Our latest quantization method introduces **precision-adaptive quantization** for ultra-low-bit models (1-2 bit), with benchmark-proven improvements on **Llama-3-8B**. This approach uses layer-specific strategies to preserve accuracy while maintaining extreme memory efficiency. ### **Benchmark Context** All tests conducted on **Llama-3-8B-Instruct** using: - Standard perplexity evaluation pipeline - 2048-token context window - Same prompt set across all quantizations ### **Method** - **Dynamic Precision Allocation**: - First/Last 25% of layers โ†’ IQ4_XS (selected layers) - Middle 50% โ†’ IQ2_XXS/IQ3_S (increase efficiency) - **Critical Component Protection**: - Embeddings/output layers use Q5_K - Reduces error propagation by 38% vs standard 1-2bit ### **Quantization Performance Comparison (Llama-3-8B)** | Quantization | Standard PPL | DynamicGate PPL | ฮ” PPL | Std Size | DG Size | ฮ” Size | Std Speed | DG Speed | |--------------|--------------|------------------|---------|----------|---------|--------|-----------|----------| | IQ2_XXS | 11.30 | 9.84 | -12.9% | 2.5G | 2.6G | +0.1G | 234s | 246s | | IQ2_XS | 11.72 | 11.63 | -0.8% | 2.7G | 2.8G | +0.1G | 242s | 246s | | IQ2_S | 14.31 | 9.02 | -36.9% | 2.7G | 2.9G | +0.2G | 238s | 244s | | IQ1_M | 27.46 | 15.41 | -43.9% | 2.2G | 2.5G | +0.3G | 206s | 212s | | IQ1_S | 53.07 | 32.00 | -39.7% | 2.1G | 2.4G | +0.3G | 184s | 209s | **Key**: - PPL = Perplexity (lower is better) - ฮ” PPL = Percentage change from standard to DynamicGate - Speed = Inference time (CPU avx2, 2048 token context) - Size differences reflect mixed quantization overhead **Key Improvements:** - ๐Ÿ”ฅ **IQ1_M** shows massive 43.9% perplexity reduction (27.46 โ†’ 15.41) - ๐Ÿš€ **IQ2_S** cuts perplexity by 36.9% while adding only 0.2GB - โšก **IQ1_S** maintains 39.7% better accuracy despite 1-bit quantization **Tradeoffs:** - All variants have modest size increases (0.1-0.3GB) - Inference speeds remain comparable (<5% difference) ### **When to Use These Models** ๐Ÿ“Œ **Fitting models into GPU VRAM** โœ” **Memory-constrained deployments** โœ” **Cpu and Edge Devices** where 1-2bit errors can be tolerated โœ” **Research** into ultra-low-bit quantization ## **Choosing the Right Model Format** Selecting the correct model format depends on your **hardware capabilities** and **memory constraints**. ### **BF16 (Brain Float 16) โ€“ Use if BF16 acceleration is available** - A 16-bit floating-point format designed for **faster computation** while retaining good precision. - Provides **similar dynamic range** as FP32 but with **lower memory usage**. - Recommended if your hardware supports **BF16 acceleration** (check your device's specs). - Ideal for **high-performance inference** with **reduced memory footprint** compared to FP32. ๐Ÿ“Œ **Use BF16 if:** โœ” Your hardware has native **BF16 support** (e.g., newer GPUs, TPUs). โœ” You want **higher precision** while saving memory. โœ” You plan to **requantize** the model into another format. 
๐Ÿ“Œ **Avoid BF16 if:** โŒ Your hardware does **not** support BF16 (it may fall back to FP32 and run slower). โŒ You need compatibility with older devices that lack BF16 optimization. --- ### **F16 (Float 16) โ€“ More widely supported than BF16** - A 16-bit floating-point **high precision** but with less of range of values than BF16. - Works on most devices with **FP16 acceleration support** (including many GPUs and some CPUs). - Slightly lower numerical precision than BF16 but generally sufficient for inference. ๐Ÿ“Œ **Use F16 if:** โœ” Your hardware supports **FP16** but **not BF16**. โœ” You need a **balance between speed, memory usage, and accuracy**. โœ” You are running on a **GPU** or another device optimized for FP16 computations. ๐Ÿ“Œ **Avoid F16 if:** โŒ Your device lacks **native FP16 support** (it may run slower than expected). โŒ You have memory limitations. --- ### **Quantized Models (Q4_K, Q6_K, Q8, etc.) โ€“ For CPU & Low-VRAM Inference** Quantization reduces model size and memory usage while maintaining as much accuracy as possible. - **Lower-bit models (Q4_K)** โ†’ **Best for minimal memory usage**, may have lower precision. - **Higher-bit models (Q6_K, Q8_0)** โ†’ **Better accuracy**, requires more memory. ๐Ÿ“Œ **Use Quantized Models if:** โœ” You are running inference on a **CPU** and need an optimized model. โœ” Your device has **low VRAM** and cannot load full-precision models. โœ” You want to reduce **memory footprint** while keeping reasonable accuracy. ๐Ÿ“Œ **Avoid Quantized Models if:** โŒ You need **maximum accuracy** (full-precision models are better for this). โŒ Your hardware has enough VRAM for higher-precision formats (BF16/F16). --- ### **Very Low-Bit Quantization (IQ3_XS, IQ3_S, IQ3_M, Q4_K, Q4_0)** These models are optimized for **extreme memory efficiency**, making them ideal for **low-power devices** or **large-scale deployments** where memory is a critical constraint. - **IQ3_XS**: Ultra-low-bit quantization (3-bit) with **extreme memory efficiency**. - **Use case**: Best for **ultra-low-memory devices** where even Q4_K is too large. - **Trade-off**: Lower accuracy compared to higher-bit quantizations. - **IQ3_S**: Small block size for **maximum memory efficiency**. - **Use case**: Best for **low-memory devices** where **IQ3_XS** is too aggressive. - **IQ3_M**: Medium block size for better accuracy than **IQ3_S**. - **Use case**: Suitable for **low-memory devices** where **IQ3_S** is too limiting. - **Q4_K**: 4-bit quantization with **block-wise optimization** for better accuracy. - **Use case**: Best for **low-memory devices** where **Q6_K** is too large. - **Q4_0**: Pure 4-bit quantization, optimized for **ARM devices**. - **Use case**: Best for **ARM-based devices** or **low-memory environments**. 
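As noted in the file list below, the BF16 file is the one to start from if you want to requantize into a different format yourself. A rough sketch of driving llama.cpp's `llama-quantize` tool from Python is shown here; the binary being on PATH, the output filename, and the exact quantization type name are assumptions, so check your llama.cpp build.

```python
# Hedged sketch: requantize the BF16 GGUF into a smaller format with llama.cpp.
# Assumptions: llama.cpp is built locally, `llama-quantize` is on PATH, and the
# source file has already been downloaded into the working directory.
import subprocess

src = "Qwen3-8B-abliterated-bf16.gguf"      # full-precision source from this repo
dst = "Qwen3-8B-abliterated-q4_k_m.gguf"    # hypothetical output filename
quant_type = "Q4_K_M"                       # assumed type name; run `llama-quantize --help` to list types

subprocess.run(["llama-quantize", src, dst, quant_type], check=True)
print(f"Wrote {dst}")
```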
--- ### **Summary Table: Model Format Selection** | Model Format | Precision | Memory Usage | Device Requirements | Best Use Case | |--------------|------------|---------------|----------------------|---------------| | **BF16** | Highest | High | BF16-supported GPU/CPUs | High-speed inference with reduced memory | | **F16** | High | High | FP16-supported devices | GPU inference when BF16 isn't available | | **Q4_K** | Medium Low | Low | CPU or Low-VRAM devices | Best for memory-constrained environments | | **Q6_K** | Medium | Moderate | CPU with more memory | Better accuracy while still being quantized | | **Q8_0** | High | Moderate | CPU or GPU with enough VRAM | Best accuracy among quantized models | | **IQ3_XS** | Very Low | Very Low | Ultra-low-memory devices | Extreme memory efficiency and low accuracy | | **Q4_0** | Low | Low | ARM or low-memory devices | llama.cpp can optimize for ARM devices | --- ## **Included Files & Details** ### `Qwen3-8B-abliterated-bf16.gguf` - Model weights preserved in **BF16**. - Use this if you want to **requantize** the model into a different format. - Best if your device supports **BF16 acceleration**. ### `Qwen3-8B-abliterated-f16.gguf` - Model weights stored in **F16**. - Use if your device supports **FP16**, especially if BF16 is not available. ### `Qwen3-8B-abliterated-bf16-q8_0.gguf` - **Output & embeddings** remain in **BF16**. - All other layers quantized to **Q8_0**. - Use if your device supports **BF16** and you want a quantized version. ### `Qwen3-8B-abliterated-f16-q8_0.gguf` - **Output & embeddings** remain in **F16**. - All other layers quantized to **Q8_0**. ### `Qwen3-8B-abliterated-q4_k.gguf` - **Output & embeddings** quantized to **Q8_0**. - All other layers quantized to **Q4_K**. - Good for **CPU inference** with limited memory. ### `Qwen3-8B-abliterated-q4_k_s.gguf` - Smallest **Q4_K** variant, using less memory at the cost of accuracy. - Best for **very low-memory setups**. ### `Qwen3-8B-abliterated-q6_k.gguf` - **Output & embeddings** quantized to **Q8_0**. - All other layers quantized to **Q6_K** . ### `Qwen3-8B-abliterated-q8_0.gguf` - Fully **Q8** quantized model for better accuracy. - Requires **more memory** but offers higher precision. ### `Qwen3-8B-abliterated-iq3_xs.gguf` - **IQ3_XS** quantization, optimized for **extreme memory efficiency**. - Best for **ultra-low-memory devices**. ### `Qwen3-8B-abliterated-iq3_m.gguf` - **IQ3_M** quantization, offering a **medium block size** for better accuracy. - Suitable for **low-memory devices**. ### `Qwen3-8B-abliterated-q4_0.gguf` - Pure **Q4_0** quantization, optimized for **ARM devices**. - Best for **low-memory environments**. - Prefer IQ4_NL for better accuracy. # <span id="testllm" style="color: #7F7FFF;">๐Ÿš€ If you find these models useful</span> โค **Please click "Like" if you find this useful!** Help me test my **AI-Powered Network Monitor Assistant** with **quantum-ready security checks**: ๐Ÿ‘‰ [Quantum Network Monitor](https://readyforquantum.com/dashboard) ๐Ÿ’ฌ **How to test**: 1. Click the **chat icon** (bottom right on any page) 2. 
Choose an **AI assistant type**: - `TurboLLM` (GPT-4-mini) - `FreeLLM` (Open-source) - `TestLLM` (Experimental CPU-only) ### **What Iโ€™m Testing** Iโ€™m pushing the limits of **small open-source models for AI network monitoring**, specifically: - **Function calling** against live network services - **How small can a model go** while still handling: - Automated **Nmap scans** - **Quantum-readiness checks** - **Metasploit integration** ๐ŸŸก **TestLLM** โ€“ Current experimental model (llama.cpp on 6 CPU threads): - โœ… **Zero-configuration setup** - โณ 30s load time (slow inference but **no API costs**) - ๐Ÿ”ง **Help wanted!** If youโ€™re into **edge-device AI**, letโ€™s collaborate! ### **Other Assistants** ๐ŸŸข **TurboLLM** โ€“ Uses **gpt-4-mini** for: - **Real-time network diagnostics** - **Automated penetration testing** (Nmap/Metasploit) - ๐Ÿ”‘ Get more tokens by [downloading our Quantum Network Monitor Agent](https://readyforquantum.com/download/?utm_source=huggingface&utm_medium=referral&utm_campaign=huggingface_repo_readme) ๐Ÿ”ต **HugLLM** โ€“ Open-source models (โ‰ˆ8B params): - **2x more tokens** than TurboLLM - **AI-powered log analysis** - ๐ŸŒ Runs on Hugging Face Inference API ### ๐Ÿ’ก **Example AI Commands to Test**: 1. `"Give me info on my websites SSL certificate"` 2. `"Check if my server is using quantum safe encyption for communication"` 3. `"Run a quick Nmap vulnerability test"` 4. '"Create a cmd processor to .. (what ever you want)" Note you need to install a Quantum Network Monitor Agent to run the .net code from. This is a very flexible and powerful feature. Use with caution! ### Final word I fund the servers to create the models files, run the Quantum Network Monitor Service and Pay for Inference from Novita and OpenAI all from my own pocket. All of the code for creating the models and the work I have done with Quantum Network Monitor is [open source](https://github.com/Mungert69). Feel free to use what you find useful. Please support my work and consider [buying me a coffee](https://www.buymeacoffee.com/mahadeva) . This will help me pay for the services and increase the token limits for everyone. Thank you :)
Mungert/mOrpheus_3B-1Base_early_preview-v1-8600-GGUF
Mungert
2025-06-15T19:44:50Z
558
0
null
[ "gguf", "unsloth", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2025-04-27T06:38:30Z
--- license: cc-by-nc-4.0 tags: - unsloth --- # <span style="color: #7FFF7F;">mOrpheus_3B-1Base_early_preview-v1-8600 GGUF Models</span> ## <span style="color: #7F7FFF;">Model Generation Details</span> This model was generated using [llama.cpp](https://github.com/ggerganov/llama.cpp) at commit [`e291450`](https://github.com/ggerganov/llama.cpp/commit/e291450b7602d7a36239e4ceeece37625f838373). ## <span style="color: #7FFF7F;">Ultra-Low-Bit Quantization with IQ-DynamicGate (1-2 bit)</span> Our latest quantization method introduces **precision-adaptive quantization** for ultra-low-bit models (1-2 bit), with benchmark-proven improvements on **Llama-3-8B**. This approach uses layer-specific strategies to preserve accuracy while maintaining extreme memory efficiency. ### **Benchmark Context** All tests conducted on **Llama-3-8B-Instruct** using: - Standard perplexity evaluation pipeline - 2048-token context window - Same prompt set across all quantizations ### **Method** - **Dynamic Precision Allocation**: - First/Last 25% of layers โ†’ IQ4_XS (selected layers) - Middle 50% โ†’ IQ2_XXS/IQ3_S (increase efficiency) - **Critical Component Protection**: - Embeddings/output layers use Q5_K - Reduces error propagation by 38% vs standard 1-2bit ### **Quantization Performance Comparison (Llama-3-8B)** | Quantization | Standard PPL | DynamicGate PPL | ฮ” PPL | Std Size | DG Size | ฮ” Size | Std Speed | DG Speed | |--------------|--------------|------------------|---------|----------|---------|--------|-----------|----------| | IQ2_XXS | 11.30 | 9.84 | -12.9% | 2.5G | 2.6G | +0.1G | 234s | 246s | | IQ2_XS | 11.72 | 11.63 | -0.8% | 2.7G | 2.8G | +0.1G | 242s | 246s | | IQ2_S | 14.31 | 9.02 | -36.9% | 2.7G | 2.9G | +0.2G | 238s | 244s | | IQ1_M | 27.46 | 15.41 | -43.9% | 2.2G | 2.5G | +0.3G | 206s | 212s | | IQ1_S | 53.07 | 32.00 | -39.7% | 2.1G | 2.4G | +0.3G | 184s | 209s | **Key**: - PPL = Perplexity (lower is better) - ฮ” PPL = Percentage change from standard to DynamicGate - Speed = Inference time (CPU avx2, 2048 token context) - Size differences reflect mixed quantization overhead **Key Improvements:** - ๐Ÿ”ฅ **IQ1_M** shows massive 43.9% perplexity reduction (27.46 โ†’ 15.41) - ๐Ÿš€ **IQ2_S** cuts perplexity by 36.9% while adding only 0.2GB - โšก **IQ1_S** maintains 39.7% better accuracy despite 1-bit quantization **Tradeoffs:** - All variants have modest size increases (0.1-0.3GB) - Inference speeds remain comparable (<5% difference) ### **When to Use These Models** ๐Ÿ“Œ **Fitting models into GPU VRAM** โœ” **Memory-constrained deployments** โœ” **Cpu and Edge Devices** where 1-2bit errors can be tolerated โœ” **Research** into ultra-low-bit quantization ## **Choosing the Right Model Format** Selecting the correct model format depends on your **hardware capabilities** and **memory constraints**. ### **BF16 (Brain Float 16) โ€“ Use if BF16 acceleration is available** - A 16-bit floating-point format designed for **faster computation** while retaining good precision. - Provides **similar dynamic range** as FP32 but with **lower memory usage**. - Recommended if your hardware supports **BF16 acceleration** (check your device's specs). - Ideal for **high-performance inference** with **reduced memory footprint** compared to FP32. ๐Ÿ“Œ **Use BF16 if:** โœ” Your hardware has native **BF16 support** (e.g., newer GPUs, TPUs). โœ” You want **higher precision** while saving memory. โœ” You plan to **requantize** the model into another format. 
๐Ÿ“Œ **Avoid BF16 if:** โŒ Your hardware does **not** support BF16 (it may fall back to FP32 and run slower). โŒ You need compatibility with older devices that lack BF16 optimization. --- ### **F16 (Float 16) โ€“ More widely supported than BF16** - A 16-bit floating-point **high precision** but with less of range of values than BF16. - Works on most devices with **FP16 acceleration support** (including many GPUs and some CPUs). - Slightly lower numerical precision than BF16 but generally sufficient for inference. ๐Ÿ“Œ **Use F16 if:** โœ” Your hardware supports **FP16** but **not BF16**. โœ” You need a **balance between speed, memory usage, and accuracy**. โœ” You are running on a **GPU** or another device optimized for FP16 computations. ๐Ÿ“Œ **Avoid F16 if:** โŒ Your device lacks **native FP16 support** (it may run slower than expected). โŒ You have memory limitations. --- ### **Quantized Models (Q4_K, Q6_K, Q8, etc.) โ€“ For CPU & Low-VRAM Inference** Quantization reduces model size and memory usage while maintaining as much accuracy as possible. - **Lower-bit models (Q4_K)** โ†’ **Best for minimal memory usage**, may have lower precision. - **Higher-bit models (Q6_K, Q8_0)** โ†’ **Better accuracy**, requires more memory. ๐Ÿ“Œ **Use Quantized Models if:** โœ” You are running inference on a **CPU** and need an optimized model. โœ” Your device has **low VRAM** and cannot load full-precision models. โœ” You want to reduce **memory footprint** while keeping reasonable accuracy. ๐Ÿ“Œ **Avoid Quantized Models if:** โŒ You need **maximum accuracy** (full-precision models are better for this). โŒ Your hardware has enough VRAM for higher-precision formats (BF16/F16). --- ### **Very Low-Bit Quantization (IQ3_XS, IQ3_S, IQ3_M, Q4_K, Q4_0)** These models are optimized for **extreme memory efficiency**, making them ideal for **low-power devices** or **large-scale deployments** where memory is a critical constraint. - **IQ3_XS**: Ultra-low-bit quantization (3-bit) with **extreme memory efficiency**. - **Use case**: Best for **ultra-low-memory devices** where even Q4_K is too large. - **Trade-off**: Lower accuracy compared to higher-bit quantizations. - **IQ3_S**: Small block size for **maximum memory efficiency**. - **Use case**: Best for **low-memory devices** where **IQ3_XS** is too aggressive. - **IQ3_M**: Medium block size for better accuracy than **IQ3_S**. - **Use case**: Suitable for **low-memory devices** where **IQ3_S** is too limiting. - **Q4_K**: 4-bit quantization with **block-wise optimization** for better accuracy. - **Use case**: Best for **low-memory devices** where **Q6_K** is too large. - **Q4_0**: Pure 4-bit quantization, optimized for **ARM devices**. - **Use case**: Best for **ARM-based devices** or **low-memory environments**. 
--- ### **Summary Table: Model Format Selection** | Model Format | Precision | Memory Usage | Device Requirements | Best Use Case | |--------------|------------|---------------|----------------------|---------------| | **BF16** | Highest | High | BF16-supported GPU/CPUs | High-speed inference with reduced memory | | **F16** | High | High | FP16-supported devices | GPU inference when BF16 isn't available | | **Q4_K** | Medium Low | Low | CPU or Low-VRAM devices | Best for memory-constrained environments | | **Q6_K** | Medium | Moderate | CPU with more memory | Better accuracy while still being quantized | | **Q8_0** | High | Moderate | CPU or GPU with enough VRAM | Best accuracy among quantized models | | **IQ3_XS** | Very Low | Very Low | Ultra-low-memory devices | Extreme memory efficiency and low accuracy | | **Q4_0** | Low | Low | ARM or low-memory devices | llama.cpp can optimize for ARM devices | --- ## **Included Files & Details** ### `mOrpheus_3B-1Base_early_preview-v1-8600-bf16.gguf` - Model weights preserved in **BF16**. - Use this if you want to **requantize** the model into a different format. - Best if your device supports **BF16 acceleration**. ### `mOrpheus_3B-1Base_early_preview-v1-8600-f16.gguf` - Model weights stored in **F16**. - Use if your device supports **FP16**, especially if BF16 is not available. ### `mOrpheus_3B-1Base_early_preview-v1-8600-bf16-q8_0.gguf` - **Output & embeddings** remain in **BF16**. - All other layers quantized to **Q8_0**. - Use if your device supports **BF16** and you want a quantized version. ### `mOrpheus_3B-1Base_early_preview-v1-8600-f16-q8_0.gguf` - **Output & embeddings** remain in **F16**. - All other layers quantized to **Q8_0**. ### `mOrpheus_3B-1Base_early_preview-v1-8600-q4_k.gguf` - **Output & embeddings** quantized to **Q8_0**. - All other layers quantized to **Q4_K**. - Good for **CPU inference** with limited memory. ### `mOrpheus_3B-1Base_early_preview-v1-8600-q4_k_s.gguf` - Smallest **Q4_K** variant, using less memory at the cost of accuracy. - Best for **very low-memory setups**. ### `mOrpheus_3B-1Base_early_preview-v1-8600-q6_k.gguf` - **Output & embeddings** quantized to **Q8_0**. - All other layers quantized to **Q6_K** . ### `mOrpheus_3B-1Base_early_preview-v1-8600-q8_0.gguf` - Fully **Q8** quantized model for better accuracy. - Requires **more memory** but offers higher precision. ### `mOrpheus_3B-1Base_early_preview-v1-8600-iq3_xs.gguf` - **IQ3_XS** quantization, optimized for **extreme memory efficiency**. - Best for **ultra-low-memory devices**. ### `mOrpheus_3B-1Base_early_preview-v1-8600-iq3_m.gguf` - **IQ3_M** quantization, offering a **medium block size** for better accuracy. - Suitable for **low-memory devices**. ### `mOrpheus_3B-1Base_early_preview-v1-8600-q4_0.gguf` - Pure **Q4_0** quantization, optimized for **ARM devices**. - Best for **low-memory environments**. - Prefer IQ4_NL for better accuracy. # <span id="testllm" style="color: #7F7FFF;">๐Ÿš€ If you find these models useful</span> โค **Please click "Like" if you find this useful!** Help me test my **AI-Powered Network Monitor Assistant** with **quantum-ready security checks**: ๐Ÿ‘‰ [Quantum Network Monitor](https://readyforquantum.com/dashboard) ๐Ÿ’ฌ **How to test**: 1. Click the **chat icon** (bottom right on any page) 2. 
Choose an **AI assistant type**: - `TurboLLM` (GPT-4-mini) - `FreeLLM` (Open-source) - `TestLLM` (Experimental CPU-only) ### **What Iโ€™m Testing** Iโ€™m pushing the limits of **small open-source models for AI network monitoring**, specifically: - **Function calling** against live network services - **How small can a model go** while still handling: - Automated **Nmap scans** - **Quantum-readiness checks** - **Metasploit integration** ๐ŸŸก **TestLLM** โ€“ Current experimental model (llama.cpp on 6 CPU threads): - โœ… **Zero-configuration setup** - โณ 30s load time (slow inference but **no API costs**) - ๐Ÿ”ง **Help wanted!** If youโ€™re into **edge-device AI**, letโ€™s collaborate! ### **Other Assistants** ๐ŸŸข **TurboLLM** โ€“ Uses **gpt-4-mini** for: - **Real-time network diagnostics** - **Automated penetration testing** (Nmap/Metasploit) - ๐Ÿ”‘ Get more tokens by [downloading our Quantum Network Monitor Agent](https://readyforquantum.com/download/?utm_source=huggingface&utm_medium=referral&utm_campaign=huggingface_repo_readme) ๐Ÿ”ต **HugLLM** โ€“ Open-source models (โ‰ˆ8B params): - **2x more tokens** than TurboLLM - **AI-powered log analysis** - ๐ŸŒ Runs on Hugging Face Inference API ### ๐Ÿ’ก **Example AI Commands to Test**: 1. `"Give me info on my websites SSL certificate"` 2. `"Check if my server is using quantum safe encyption for communication"` 3. `"Run a quick Nmap vulnerability test"` 4. '"Create a cmd processor to .. (what ever you want)" Note you need to install a Quantum Network Monitor Agent to run the .net code from. This is a very flexible and powerful feature. Use with caution! ### Final word I fund the servers to create the models files, run the Quantum Network Monitor Service and Pay for Inference from Novita and OpenAI all from my own pocket. All of the code for creating the models and the work I have done with Quantum Network Monitor is [open source](https://github.com/Mungert69). Feel free to use what you find useful. Please support my work and consider [buying me a coffee](https://www.buymeacoffee.com/mahadeva) . This will help me pay for the services and increase the token limits for everyone. Thank you :) # mOrpheus_3B-1Base_early_preview (NSFW TTS) A finetuned Orpheus textโ€‘toโ€‘speech model trained on adult data for more expressive sounds: `<laugh>, <chuckle>, <sigh>, <cough>, <sniffle>, <groan>, <yawn>, <gasp>` New in this model: `<moans>, <panting>, <grunting>, <gagging sounds>, <chokeing>, <kissing noises>` **Speaker name:** `baddy` **Framework:** Safetensors (LLaMA) **Status:** Early preview; training still underway --- ## ๐Ÿ”— Links - Model files & versions: [xet](<your-file-hosting-link>) - Discussion & bug reports: [Discord server](https://discord.gg/RUs3uzBdW3) - Original author: [MrDragonFox](https://huggingface.co/MrDragonFox) --- ## ๐Ÿš€ Usage (Example) 1. Load the `*.GGUF` file into LMStudio. 2. ```bash pip install RealtimeTTS[orpheus] ``` 3. Play TTS: ```python from RealtimeTTS import TextToAudioStream, OrpheusEngine engine = OrpheusEngine(model="morpheus_3b-1base") # or: engine = OrpheusEngine(model="orpheus_3b-1basegguf@q4_k_m") stream = TextToAudioStream(engine) engine.set_voice("baddy") stream.feed("Mmm <moans>... that feels so good <groan>") stream.play() ``` --- ## โš–๏ธ License This model is released under **Creative Commons Attributionโ€‘NonCommercial 4.0 International** (CCโ€‘BYโ€‘NCโ€‘4.0). That means: - **NonCommercial**: You can use, convert, and share this model for **nonโ€‘commercial** purposes only. 
- **Attribution**: You must credit **MrDragonFox**, include the license link, and note any changes you made. - **No extra restrictions**: Donโ€™t apply paywalls, DRM, or additional terms. ```markdown ยฉ 2025 MrDragonFox Licensed under [CCโ€‘BYโ€‘NCโ€‘4.0](https://creativecommons.org/licenses/by-nc/4.0/) ``` --- ## โš ๏ธ Disclaimer - **No warranties**โ€”use at your own risk. - Still under development; results may vary. - Please report bugs or suggestions on Discord.
Mungert/GLM-Z1-32B-0414-GGUF
Mungert
2025-06-15T19:44:37Z
712
5
transformers
[ "transformers", "gguf", "text-generation", "zh", "en", "arxiv:2406.12793", "license:mit", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
text-generation
2025-04-25T01:38:27Z
--- license: mit language: - zh - en pipeline_tag: text-generation library_name: transformers --- # <span style="color: #7FFF7F;">GLM-Z1-32B-0414 GGUF Models</span> ## <span style="color: #7F7FFF;">Model Generation Details</span> This model was generated using [llama.cpp](https://github.com/ggerganov/llama.cpp) at commit [`e291450`](https://github.com/ggerganov/llama.cpp/commit/e291450b7602d7a36239e4ceeece37625f838373). ## <span style="color: #7FFF7F;">Ultra-Low-Bit Quantization with IQ-DynamicGate (1-2 bit)</span> Our latest quantization method introduces **precision-adaptive quantization** for ultra-low-bit models (1-2 bit), with benchmark-proven improvements on **Llama-3-8B**. This approach uses layer-specific strategies to preserve accuracy while maintaining extreme memory efficiency. ### **Benchmark Context** All tests conducted on **Llama-3-8B-Instruct** using: - Standard perplexity evaluation pipeline - 2048-token context window - Same prompt set across all quantizations ### **Method** - **Dynamic Precision Allocation**: - First/Last 25% of layers โ†’ IQ4_XS (selected layers) - Middle 50% โ†’ IQ2_XXS/IQ3_S (increase efficiency) - **Critical Component Protection**: - Embeddings/output layers use Q5_K - Reduces error propagation by 38% vs standard 1-2bit ### **Quantization Performance Comparison (Llama-3-8B)** | Quantization | Standard PPL | DynamicGate PPL | ฮ” PPL | Std Size | DG Size | ฮ” Size | Std Speed | DG Speed | |--------------|--------------|------------------|---------|----------|---------|--------|-----------|----------| | IQ2_XXS | 11.30 | 9.84 | -12.9% | 2.5G | 2.6G | +0.1G | 234s | 246s | | IQ2_XS | 11.72 | 11.63 | -0.8% | 2.7G | 2.8G | +0.1G | 242s | 246s | | IQ2_S | 14.31 | 9.02 | -36.9% | 2.7G | 2.9G | +0.2G | 238s | 244s | | IQ1_M | 27.46 | 15.41 | -43.9% | 2.2G | 2.5G | +0.3G | 206s | 212s | | IQ1_S | 53.07 | 32.00 | -39.7% | 2.1G | 2.4G | +0.3G | 184s | 209s | **Key**: - PPL = Perplexity (lower is better) - ฮ” PPL = Percentage change from standard to DynamicGate - Speed = Inference time (CPU avx2, 2048 token context) - Size differences reflect mixed quantization overhead **Key Improvements:** - ๐Ÿ”ฅ **IQ1_M** shows massive 43.9% perplexity reduction (27.46 โ†’ 15.41) - ๐Ÿš€ **IQ2_S** cuts perplexity by 36.9% while adding only 0.2GB - โšก **IQ1_S** maintains 39.7% better accuracy despite 1-bit quantization **Tradeoffs:** - All variants have modest size increases (0.1-0.3GB) - Inference speeds remain comparable (<5% difference) ### **When to Use These Models** ๐Ÿ“Œ **Fitting models into GPU VRAM** โœ” **Memory-constrained deployments** โœ” **Cpu and Edge Devices** where 1-2bit errors can be tolerated โœ” **Research** into ultra-low-bit quantization ## **Choosing the Right Model Format** Selecting the correct model format depends on your **hardware capabilities** and **memory constraints**. ### **BF16 (Brain Float 16) โ€“ Use if BF16 acceleration is available** - A 16-bit floating-point format designed for **faster computation** while retaining good precision. - Provides **similar dynamic range** as FP32 but with **lower memory usage**. - Recommended if your hardware supports **BF16 acceleration** (check your device's specs). - Ideal for **high-performance inference** with **reduced memory footprint** compared to FP32. ๐Ÿ“Œ **Use BF16 if:** โœ” Your hardware has native **BF16 support** (e.g., newer GPUs, TPUs). โœ” You want **higher precision** while saving memory. โœ” You plan to **requantize** the model into another format. 
๐Ÿ“Œ **Avoid BF16 if:** โŒ Your hardware does **not** support BF16 (it may fall back to FP32 and run slower). โŒ You need compatibility with older devices that lack BF16 optimization. --- ### **F16 (Float 16) โ€“ More widely supported than BF16** - A 16-bit floating-point **high precision** but with less of range of values than BF16. - Works on most devices with **FP16 acceleration support** (including many GPUs and some CPUs). - Slightly lower numerical precision than BF16 but generally sufficient for inference. ๐Ÿ“Œ **Use F16 if:** โœ” Your hardware supports **FP16** but **not BF16**. โœ” You need a **balance between speed, memory usage, and accuracy**. โœ” You are running on a **GPU** or another device optimized for FP16 computations. ๐Ÿ“Œ **Avoid F16 if:** โŒ Your device lacks **native FP16 support** (it may run slower than expected). โŒ You have memory limitations. --- ### **Quantized Models (Q4_K, Q6_K, Q8, etc.) โ€“ For CPU & Low-VRAM Inference** Quantization reduces model size and memory usage while maintaining as much accuracy as possible. - **Lower-bit models (Q4_K)** โ†’ **Best for minimal memory usage**, may have lower precision. - **Higher-bit models (Q6_K, Q8_0)** โ†’ **Better accuracy**, requires more memory. ๐Ÿ“Œ **Use Quantized Models if:** โœ” You are running inference on a **CPU** and need an optimized model. โœ” Your device has **low VRAM** and cannot load full-precision models. โœ” You want to reduce **memory footprint** while keeping reasonable accuracy. ๐Ÿ“Œ **Avoid Quantized Models if:** โŒ You need **maximum accuracy** (full-precision models are better for this). โŒ Your hardware has enough VRAM for higher-precision formats (BF16/F16). --- ### **Very Low-Bit Quantization (IQ3_XS, IQ3_S, IQ3_M, Q4_K, Q4_0)** These models are optimized for **extreme memory efficiency**, making them ideal for **low-power devices** or **large-scale deployments** where memory is a critical constraint. - **IQ3_XS**: Ultra-low-bit quantization (3-bit) with **extreme memory efficiency**. - **Use case**: Best for **ultra-low-memory devices** where even Q4_K is too large. - **Trade-off**: Lower accuracy compared to higher-bit quantizations. - **IQ3_S**: Small block size for **maximum memory efficiency**. - **Use case**: Best for **low-memory devices** where **IQ3_XS** is too aggressive. - **IQ3_M**: Medium block size for better accuracy than **IQ3_S**. - **Use case**: Suitable for **low-memory devices** where **IQ3_S** is too limiting. - **Q4_K**: 4-bit quantization with **block-wise optimization** for better accuracy. - **Use case**: Best for **low-memory devices** where **Q6_K** is too large. - **Q4_0**: Pure 4-bit quantization, optimized for **ARM devices**. - **Use case**: Best for **ARM-based devices** or **low-memory environments**. 
--- ### **Summary Table: Model Format Selection** | Model Format | Precision | Memory Usage | Device Requirements | Best Use Case | |--------------|------------|---------------|----------------------|---------------| | **BF16** | Highest | High | BF16-supported GPU/CPUs | High-speed inference with reduced memory | | **F16** | High | High | FP16-supported devices | GPU inference when BF16 isn't available | | **Q4_K** | Medium Low | Low | CPU or Low-VRAM devices | Best for memory-constrained environments | | **Q6_K** | Medium | Moderate | CPU with more memory | Better accuracy while still being quantized | | **Q8_0** | High | Moderate | CPU or GPU with enough VRAM | Best accuracy among quantized models | | **IQ3_XS** | Very Low | Very Low | Ultra-low-memory devices | Extreme memory efficiency and low accuracy | | **Q4_0** | Low | Low | ARM or low-memory devices | llama.cpp can optimize for ARM devices | --- ## **Included Files & Details** ### `GLM-Z1-32B-0414-bf16.gguf` - Model weights preserved in **BF16**. - Use this if you want to **requantize** the model into a different format. - Best if your device supports **BF16 acceleration**. ### `GLM-Z1-32B-0414-f16.gguf` - Model weights stored in **F16**. - Use if your device supports **FP16**, especially if BF16 is not available. ### `GLM-Z1-32B-0414-bf16-q8_0.gguf` - **Output & embeddings** remain in **BF16**. - All other layers quantized to **Q8_0**. - Use if your device supports **BF16** and you want a quantized version. ### `GLM-Z1-32B-0414-f16-q8_0.gguf` - **Output & embeddings** remain in **F16**. - All other layers quantized to **Q8_0**. ### `GLM-Z1-32B-0414-q4_k.gguf` - **Output & embeddings** quantized to **Q8_0**. - All other layers quantized to **Q4_K**. - Good for **CPU inference** with limited memory. ### `GLM-Z1-32B-0414-q4_k_s.gguf` - Smallest **Q4_K** variant, using less memory at the cost of accuracy. - Best for **very low-memory setups**. ### `GLM-Z1-32B-0414-q6_k.gguf` - **Output & embeddings** quantized to **Q8_0**. - All other layers quantized to **Q6_K** . ### `GLM-Z1-32B-0414-q8_0.gguf` - Fully **Q8** quantized model for better accuracy. - Requires **more memory** but offers higher precision. ### `GLM-Z1-32B-0414-iq3_xs.gguf` - **IQ3_XS** quantization, optimized for **extreme memory efficiency**. - Best for **ultra-low-memory devices**. ### `GLM-Z1-32B-0414-iq3_m.gguf` - **IQ3_M** quantization, offering a **medium block size** for better accuracy. - Suitable for **low-memory devices**. ### `GLM-Z1-32B-0414-q4_0.gguf` - Pure **Q4_0** quantization, optimized for **ARM devices**. - Best for **low-memory environments**. - Prefer IQ4_NL for better accuracy. # <span id="testllm" style="color: #7F7FFF;">๐Ÿš€ If you find these models useful</span> โค **Please click "Like" if you find this useful!** Help me test my **AI-Powered Network Monitor Assistant** with **quantum-ready security checks**: ๐Ÿ‘‰ [Quantum Network Monitor](https://readyforquantum.com/dashboard) ๐Ÿ’ฌ **How to test**: 1. Click the **chat icon** (bottom right on any page) 2. 
Choose an **AI assistant type**: - `TurboLLM` (GPT-4-mini) - `FreeLLM` (Open-source) - `TestLLM` (Experimental CPU-only) ### **What Iโ€™m Testing** Iโ€™m pushing the limits of **small open-source models for AI network monitoring**, specifically: - **Function calling** against live network services - **How small can a model go** while still handling: - Automated **Nmap scans** - **Quantum-readiness checks** - **Metasploit integration** ๐ŸŸก **TestLLM** โ€“ Current experimental model (llama.cpp on 6 CPU threads): - โœ… **Zero-configuration setup** - โณ 30s load time (slow inference but **no API costs**) - ๐Ÿ”ง **Help wanted!** If youโ€™re into **edge-device AI**, letโ€™s collaborate! ### **Other Assistants** ๐ŸŸข **TurboLLM** โ€“ Uses **gpt-4-mini** for: - **Real-time network diagnostics** - **Automated penetration testing** (Nmap/Metasploit) - ๐Ÿ”‘ Get more tokens by [downloading our Quantum Network Monitor Agent](https://readyforquantum.com/download/?utm_source=huggingface&utm_medium=referral&utm_campaign=huggingface_repo_readme) ๐Ÿ”ต **HugLLM** โ€“ Open-source models (โ‰ˆ8B params): - **2x more tokens** than TurboLLM - **AI-powered log analysis** - ๐ŸŒ Runs on Hugging Face Inference API ### ๐Ÿ’ก **Example AI Commands to Test**: 1. `"Give me info on my websites SSL certificate"` 2. `"Check if my server is using quantum safe encyption for communication"` 3. `"Run a quick Nmap vulnerability test"` 4. '"Create a cmd processor to .. (what ever you want)" Note you need to install a Quantum Network Monitor Agent to run the .net code from. This is a very flexible and powerful feature. Use with caution! ### Final word I fund the servers to create the models files, run the Quantum Network Monitor Service and Pay for Inference from Novita and OpenAI all from my own pocket. All of the code for creating the models and the work I have done with Quantum Network Monitor is [open source](https://github.com/Mungert69). Feel free to use what you find useful. Please support my work and consider [buying me a coffee](https://www.buymeacoffee.com/mahadeva) . This will help me pay for the services and increase the token limits for everyone. Thank you :) # GLM-4-Z1-32B-0414 ## Introduction The GLM family welcomes a new generation of open-source models, the **GLM-4-32B-0414** series, featuring 32 billion parameters. Its performance is comparable to OpenAI's GPT series and DeepSeek's V3/R1 series, and it supports very user-friendly local deployment features. GLM-4-32B-Base-0414 was pre-trained on 15T of high-quality data, including a large amount of reasoning-type synthetic data, laying the foundation for subsequent reinforcement learning extensions. In the post-training stage, in addition to human preference alignment for dialogue scenarios, we also enhanced the model's performance in instruction following, engineering code, and function calling using techniques such as rejection sampling and reinforcement learning, strengthening the atomic capabilities required for agent tasks. GLM-4-32B-0414 achieves good results in areas such as engineering code, Artifact generation, function calling, search-based Q&A, and report generation. Some benchmarks even rival larger models like GPT-4o and DeepSeek-V3-0324 (671B). **GLM-Z1-32B-0414** is a reasoning model with **deep thinking capabilities**. This was developed based on GLM-4-32B-0414 through cold start and extended reinforcement learning, as well as further training of the model on tasks involving mathematics, code, and logic. 
Compared to the base model, GLM-Z1-32B-0414 significantly improves mathematical abilities and the capability to solve complex tasks. During the training process, we also introduced general reinforcement learning based on pairwise ranking feedback, further enhancing the model's general capabilities. **GLM-Z1-Rumination-32B-0414** is a deep reasoning model with **rumination capabilities** (benchmarked against OpenAI's Deep Research). Unlike typical deep thinking models, the rumination model employs longer periods of deep thought to solve more open-ended and complex problems (e.g., writing a comparative analysis of AI development in two cities and their future development plans). The rumination model integrates search tools during its deep thinking process to handle complex tasks and is trained by utilizing multiple rule-based rewards to guide and extend end-to-end reinforcement learning. Z1-Rumination shows significant improvements in research-style writing and complex retrieval tasks. Finally, **GLM-Z1-9B-0414** is a surprise. We employed the aforementioned series of techniques to train a 9B small-sized model that maintains the open-source tradition. Despite its smaller scale, GLM-Z1-9B-0414 still exhibits excellent capabilities in mathematical reasoning and general tasks. Its overall performance is already at a leading level among open-source models of the same size. Especially in resource-constrained scenarios, this model achieves an excellent balance between efficiency and effectiveness, providing a powerful option for users seeking lightweight deployment. ## Performance <p align="center"> <img width="100%" src="https://raw.githubusercontent.com/THUDM/GLM-4/refs/heads/main/resources/Bench-Z1-32B.png"> </p> <p align="center"> <img width="100%" src="https://raw.githubusercontent.com/THUDM/GLM-4/refs/heads/main/resources/Bench-Z1-9B.png"> </p> ## Model Usage Guidelines ### I. Sampling Parameters | Parameter | Recommended Value | Description | | ------------ | ----------------- | -------------------------------------------- | | temperature | **0.6** | Balances creativity and stability | | top_p | **0.95** | Cumulative probability threshold for sampling| | top_k | **40** | Filters out rare tokens while maintaining diversity | | max_new_tokens | **30000** | Leaves enough tokens for thinking | ### II. Enforced Thinking - Add \<think\>\n to the **first line**: Ensures the model thinks before responding - When using `chat_template.jinja`, the prompt is automatically injected to enforce this behavior ### III. Dialogue History Trimming - Retain only the **final user-visible reply**. Hidden thinking content should **not** be saved to history to reduce interference; this is already implemented in `chat_template.jinja` ### IV. Handling Long Contexts (YaRN) - When input length exceeds **8,192 tokens**, consider enabling YaRN (Rope Scaling) - In supported frameworks, add the following snippet to `config.json`: ```json "rope_scaling": { "type": "yarn", "factor": 4.0, "original_max_position_embeddings": 32768 } ``` - **Static YaRN** applies uniformly to all text. It may slightly degrade performance on short texts, so enable it as needed. ## Inference Code Make sure you are using `transformers>=4.51.3`.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_PATH = "THUDM/GLM-4-Z1-32B-0414"

tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
model = AutoModelForCausalLM.from_pretrained(MODEL_PATH, device_map="auto")

message = [{"role": "user", "content": "Let a, b be positive real numbers such that ab = a + b + 3. Determine the range of possible values for a + b."}]

inputs = tokenizer.apply_chat_template(
    message,
    return_tensors="pt",
    add_generation_prompt=True,
    return_dict=True,
).to(model.device)

generate_kwargs = {
    "input_ids": inputs["input_ids"],
    "attention_mask": inputs["attention_mask"],
    "max_new_tokens": 4096,
    "do_sample": False,
}
out = model.generate(**generate_kwargs)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```

## Citations

If you find our work useful, please consider citing the following paper.

```
@misc{glm2024chatglm,
      title={ChatGLM: A Family of Large Language Models from GLM-130B to GLM-4 All Tools},
      author={Team GLM and Aohan Zeng and Bin Xu and Bowen Wang and Chenhui Zhang and Da Yin and Diego Rojas and Guanyu Feng and Hanlin Zhao and Hanyu Lai and Hao Yu and Hongning Wang and Jiadai Sun and Jiajie Zhang and Jiale Cheng and Jiayi Gui and Jie Tang and Jing Zhang and Juanzi Li and Lei Zhao and Lindong Wu and Lucen Zhong and Mingdao Liu and Minlie Huang and Peng Zhang and Qinkai Zheng and Rui Lu and Shuaiqi Duan and Shudan Zhang and Shulin Cao and Shuxun Yang and Weng Lam Tam and Wenyi Zhao and Xiao Liu and Xiao Xia and Xiaohan Zhang and Xiaotao Gu and Xin Lv and Xinghan Liu and Xinyi Liu and Xinyue Yang and Xixuan Song and Xunkai Zhang and Yifan An and Yifan Xu and Yilin Niu and Yuantao Yang and Yueyan Li and Yushi Bai and Yuxiao Dong and Zehan Qi and Zhaoyu Wang and Zhen Yang and Zhengxiao Du and Zhenyu Hou and Zihan Wang},
      year={2024},
      eprint={2406.12793},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
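As a complement to the greedy-decoding snippet above, the following is a minimal sketch (not part of the original card) showing how the sampling parameters recommended in the usage guidelines could be applied; it reuses the `model`, `tokenizer`, and `inputs` objects defined above.

```python
# Hedged sketch: same setup as above, but with the recommended sampling settings
# (temperature 0.6, top_p 0.95, top_k 40) instead of greedy decoding.
sampled = model.generate(
    input_ids=inputs["input_ids"],
    attention_mask=inputs["attention_mask"],
    do_sample=True,        # enable sampling
    temperature=0.6,       # balances creativity and stability
    top_p=0.95,            # cumulative probability threshold
    top_k=40,              # filters out rare tokens
    max_new_tokens=4096,   # raise toward 30000 if long reasoning chains are needed
)
print(tokenizer.decode(sampled[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```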
Mungert/OlympicCoder-32B-GGUF
Mungert
2025-06-15T19:43:22Z
318
5
transformers
[ "transformers", "gguf", "text-generation", "en", "dataset:open-r1/codeforces-cots", "base_model:Qwen/Qwen2.5-Coder-32B-Instruct", "base_model:quantized:Qwen/Qwen2.5-Coder-32B-Instruct", "license:apache-2.0", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
text-generation
2025-03-31T05:55:48Z
--- license: apache-2.0 datasets: - open-r1/codeforces-cots language: - en base_model: - Qwen/Qwen2.5-Coder-32B-Instruct pipeline_tag: text-generation library_name: transformers --- # <span style="color: #7FFF7F;">OlympicCoder-32B GGUF Models</span> ## <span style="color: #7FFF7F;">Ultra-Low-Bit Quantization with IQ-DynamicGate (1-2 bit)</span> Our latest quantization method introduces **precision-adaptive quantization** for ultra-low-bit models (1-2 bit), with benchmark-proven improvements on **Llama-3-8B**. This approach uses layer-specific strategies to preserve accuracy while maintaining extreme memory efficiency. ### **Benchmark Context** All tests conducted on **Llama-3-8B-Instruct** using: - Standard perplexity evaluation pipeline - 2048-token context window - Same prompt set across all quantizations ### **Method** - **Dynamic Precision Allocation**: - First/Last 25% of layers โ†’ IQ4_XS (selected layers) - Middle 50% โ†’ IQ2_XXS/IQ3_S (increase efficiency) - **Critical Component Protection**: - Embeddings/output layers use Q5_K - Reduces error propagation by 38% vs standard 1-2bit ### **Quantization Performance Comparison (Llama-3-8B)** | Quantization | Standard PPL | DynamicGate PPL | ฮ” PPL | Std Size | DG Size | ฮ” Size | Std Speed | DG Speed | |--------------|--------------|------------------|---------|----------|---------|--------|-----------|----------| | IQ2_XXS | 11.30 | 9.84 | -12.9% | 2.5G | 2.6G | +0.1G | 234s | 246s | | IQ2_XS | 11.72 | 11.63 | -0.8% | 2.7G | 2.8G | +0.1G | 242s | 246s | | IQ2_S | 14.31 | 9.02 | -36.9% | 2.7G | 2.9G | +0.2G | 238s | 244s | | IQ1_M | 27.46 | 15.41 | -43.9% | 2.2G | 2.5G | +0.3G | 206s | 212s | | IQ1_S | 53.07 | 32.00 | -39.7% | 2.1G | 2.4G | +0.3G | 184s | 209s | **Key**: - PPL = Perplexity (lower is better) - ฮ” PPL = Percentage change from standard to DynamicGate - Speed = Inference time (CPU avx2, 2048 token context) - Size differences reflect mixed quantization overhead **Key Improvements:** - ๐Ÿ”ฅ **IQ1_M** shows massive 43.9% perplexity reduction (27.46 โ†’ 15.41) - ๐Ÿš€ **IQ2_S** cuts perplexity by 36.9% while adding only 0.2GB - โšก **IQ1_S** maintains 39.7% better accuracy despite 1-bit quantization **Tradeoffs:** - All variants have modest size increases (0.1-0.3GB) - Inference speeds remain comparable (<5% difference) ### **When to Use These Models** ๐Ÿ“Œ **Fitting models into GPU VRAM** โœ” **Memory-constrained deployments** โœ” **Cpu and Edge Devices** where 1-2bit errors can be tolerated โœ” **Research** into ultra-low-bit quantization ## **Choosing the Right Model Format** Selecting the correct model format depends on your **hardware capabilities** and **memory constraints**. ### **BF16 (Brain Float 16) โ€“ Use if BF16 acceleration is available** - A 16-bit floating-point format designed for **faster computation** while retaining good precision. - Provides **similar dynamic range** as FP32 but with **lower memory usage**. - Recommended if your hardware supports **BF16 acceleration** (check your device's specs). - Ideal for **high-performance inference** with **reduced memory footprint** compared to FP32. ๐Ÿ“Œ **Use BF16 if:** โœ” Your hardware has native **BF16 support** (e.g., newer GPUs, TPUs). โœ” You want **higher precision** while saving memory. โœ” You plan to **requantize** the model into another format. ๐Ÿ“Œ **Avoid BF16 if:** โŒ Your hardware does **not** support BF16 (it may fall back to FP32 and run slower). โŒ You need compatibility with older devices that lack BF16 optimization. 
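If you are unsure whether your hardware can take advantage of the BF16 files described above, a quick check with PyTorch is one option. This is a hedged sketch, assuming a CUDA-enabled PyTorch install; it is not part of the original card.

```python
import torch

# Rough capability probe before choosing between BF16, F16, or a quantized file.
if torch.cuda.is_available() and torch.cuda.is_bf16_supported():
    print("GPU reports native BF16 support - the BF16 files are a good fit.")
else:
    # Without native BF16, hardware may fall back to FP32 and run slower,
    # so F16 or a quantized variant is usually the better choice.
    print("No native BF16 detected - prefer F16 or a quantized variant.")
```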
--- ### **F16 (Float 16) โ€“ More widely supported than BF16** - A 16-bit floating-point **high precision** but with less of range of values than BF16. - Works on most devices with **FP16 acceleration support** (including many GPUs and some CPUs). - Slightly lower numerical precision than BF16 but generally sufficient for inference. ๐Ÿ“Œ **Use F16 if:** โœ” Your hardware supports **FP16** but **not BF16**. โœ” You need a **balance between speed, memory usage, and accuracy**. โœ” You are running on a **GPU** or another device optimized for FP16 computations. ๐Ÿ“Œ **Avoid F16 if:** โŒ Your device lacks **native FP16 support** (it may run slower than expected). โŒ You have memory limitations. --- ### **Quantized Models (Q4_K, Q6_K, Q8, etc.) โ€“ For CPU & Low-VRAM Inference** Quantization reduces model size and memory usage while maintaining as much accuracy as possible. - **Lower-bit models (Q4_K)** โ†’ **Best for minimal memory usage**, may have lower precision. - **Higher-bit models (Q6_K, Q8_0)** โ†’ **Better accuracy**, requires more memory. ๐Ÿ“Œ **Use Quantized Models if:** โœ” You are running inference on a **CPU** and need an optimized model. โœ” Your device has **low VRAM** and cannot load full-precision models. โœ” You want to reduce **memory footprint** while keeping reasonable accuracy. ๐Ÿ“Œ **Avoid Quantized Models if:** โŒ You need **maximum accuracy** (full-precision models are better for this). โŒ Your hardware has enough VRAM for higher-precision formats (BF16/F16). --- ### **Very Low-Bit Quantization (IQ3_XS, IQ3_S, IQ3_M, Q4_K, Q4_0)** These models are optimized for **extreme memory efficiency**, making them ideal for **low-power devices** or **large-scale deployments** where memory is a critical constraint. - **IQ3_XS**: Ultra-low-bit quantization (3-bit) with **extreme memory efficiency**. - **Use case**: Best for **ultra-low-memory devices** where even Q4_K is too large. - **Trade-off**: Lower accuracy compared to higher-bit quantizations. - **IQ3_S**: Small block size for **maximum memory efficiency**. - **Use case**: Best for **low-memory devices** where **IQ3_XS** is too aggressive. - **IQ3_M**: Medium block size for better accuracy than **IQ3_S**. - **Use case**: Suitable for **low-memory devices** where **IQ3_S** is too limiting. - **Q4_K**: 4-bit quantization with **block-wise optimization** for better accuracy. - **Use case**: Best for **low-memory devices** where **Q6_K** is too large. - **Q4_0**: Pure 4-bit quantization, optimized for **ARM devices**. - **Use case**: Best for **ARM-based devices** or **low-memory environments**. 
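To try one of the quantized formats described above, a minimal llama.cpp invocation might look like the sketch below. This assumes you have built llama.cpp and downloaded `OlympicCoder-32B-q4_k.gguf` from this repo (see the file list that follows); the prompt and sampling values are only illustrative.

```bash
# Hedged example: run the Q4_K quant on CPU with llama.cpp's CLI.
./llama-cli -m OlympicCoder-32B-q4_k.gguf \
  -p "Write a C++ function that checks whether a string is a palindrome." \
  -n 512 --temp 0.7 --top-p 0.95
```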
--- ### **Summary Table: Model Format Selection** | Model Format | Precision | Memory Usage | Device Requirements | Best Use Case | |--------------|------------|---------------|----------------------|---------------| | **BF16** | Highest | High | BF16-supported GPU/CPUs | High-speed inference with reduced memory | | **F16** | High | High | FP16-supported devices | GPU inference when BF16 isn't available | | **Q4_K** | Medium Low | Low | CPU or Low-VRAM devices | Best for memory-constrained environments | | **Q6_K** | Medium | Moderate | CPU with more memory | Better accuracy while still being quantized | | **Q8_0** | High | Moderate | CPU or GPU with enough VRAM | Best accuracy among quantized models | | **IQ3_XS** | Very Low | Very Low | Ultra-low-memory devices | Extreme memory efficiency and low accuracy | | **Q4_0** | Low | Low | ARM or low-memory devices | llama.cpp can optimize for ARM devices | --- ## **Included Files & Details** ### `OlympicCoder-32B-bf16.gguf` - Model weights preserved in **BF16**. - Use this if you want to **requantize** the model into a different format. - Best if your device supports **BF16 acceleration**. ### `OlympicCoder-32B-f16.gguf` - Model weights stored in **F16**. - Use if your device supports **FP16**, especially if BF16 is not available. ### `OlympicCoder-32B-bf16-q8_0.gguf` - **Output & embeddings** remain in **BF16**. - All other layers quantized to **Q8_0**. - Use if your device supports **BF16** and you want a quantized version. ### `OlympicCoder-32B-f16-q8_0.gguf` - **Output & embeddings** remain in **F16**. - All other layers quantized to **Q8_0**. ### `OlympicCoder-32B-q4_k.gguf` - **Output & embeddings** quantized to **Q8_0**. - All other layers quantized to **Q4_K**. - Good for **CPU inference** with limited memory. ### `OlympicCoder-32B-q4_k_s.gguf` - Smallest **Q4_K** variant, using less memory at the cost of accuracy. - Best for **very low-memory setups**. ### `OlympicCoder-32B-q6_k.gguf` - **Output & embeddings** quantized to **Q8_0**. - All other layers quantized to **Q6_K** . ### `OlympicCoder-32B-q8_0.gguf` - Fully **Q8** quantized model for better accuracy. - Requires **more memory** but offers higher precision. ### `OlympicCoder-32B-iq3_xs.gguf` - **IQ3_XS** quantization, optimized for **extreme memory efficiency**. - Best for **ultra-low-memory devices**. ### `OlympicCoder-32B-iq3_m.gguf` - **IQ3_M** quantization, offering a **medium block size** for better accuracy. - Suitable for **low-memory devices**. ### `OlympicCoder-32B-q4_0.gguf` - Pure **Q4_0** quantization, optimized for **ARM devices**. - Best for **low-memory environments**. - Prefer IQ4_NL for better accuracy. # <span id="testllm" style="color: #7F7FFF;">๐Ÿš€ If you find these models useful</span> โค **Please click "Like" if you find this useful!** Help me test my **AI-Powered Network Monitor Assistant** with **quantum-ready security checks**: ๐Ÿ‘‰ [Quantum Network Monitor](https://readyforquantum.com) ๐Ÿ’ฌ **How to test**: 1. Click the **chat icon** (bottom right on any page) 2. 
Choose an **AI assistant type**:
- `TurboLLM` (GPT-4-mini)
- `FreeLLM` (Open-source)
- `TestLLM` (Experimental CPU-only)

### **What I'm Testing**
I'm pushing the limits of **small open-source models for AI network monitoring**, specifically:
- **Function calling** against live network services
- **How small can a model go** while still handling:
  - Automated **Nmap scans**
  - **Quantum-readiness checks**
  - **Metasploit integration**

🟡 **TestLLM** – Current experimental model (llama.cpp on 6 CPU threads):
- ✅ **Zero-configuration setup**
- ⏳ 30s load time (slow inference but **no API costs**)
- 🔧 **Help wanted!** If you're into **edge-device AI**, let's collaborate!

### **Other Assistants**
🟢 **TurboLLM** – Uses **gpt-4-mini** for:
- **Real-time network diagnostics**
- **Automated penetration testing** (Nmap/Metasploit)
- 🔑 Get more tokens by [downloading our Quantum Network Monitor Agent](https://readyforquantum.com/download/?utm_source=huggingface&utm_medium=referral&utm_campaign=huggingface_repo_readme)

🔵 **HugLLM** – Open-source models (≈8B params):
- **2x more tokens** than TurboLLM
- **AI-powered log analysis**
- 🌐 Runs on Hugging Face Inference API

### 💡 **Example AI Commands to Test**:
1. `"Give me info on my website's SSL certificate"`
2. `"Check if my server is using quantum-safe encryption for communication"`
3. `"Run a quick Nmap vulnerability test"`
4. `"Create a cmd processor to .. (whatever you want)"`

Note: you need to install a Quantum Network Monitor Agent to run the .NET code. This is a very flexible and powerful feature. Use with caution!

### Final word
I fund the servers used to create the model files, run the Quantum Network Monitor service, and pay for inference from Novita and OpenAI, all from my own pocket. All of the code for creating the models and the work I have done with Quantum Network Monitor is [open source](https://github.com/Mungert69). Feel free to use what you find useful. Please support my work and consider [buying me a coffee](https://www.buymeacoffee.com/mahadeva). This will help me pay for the services and increase the token limits for everyone. Thank you :)

# Model Card for OlympicCoder-32B

OlympicCoder-32B is a code model that achieves very strong performance on competitive coding benchmarks such as LiveCodeBench and the 2024 International Olympiad in Informatics.

* Repository: https://github.com/huggingface/open-r1
* Blog post: https://huggingface.co/blog/open-r1/update-3

## Model description

- **Model type:** A 32B parameter model fine-tuned on a decontaminated version of the codeforces dataset.
- **Language(s) (NLP):** Primarily English
- **License:** apache-2.0
- **Finetuned from model:** [Qwen/Qwen2.5-Coder-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct)

## Evaluation

We compare the performance of OlympicCoder models on two main benchmarks for competitive coding:

* **[IOI'2024:](https://github.com/huggingface/ioi)** 6 very challenging problems from the 2024 International Olympiad in Informatics. Models are allowed up to 50 submissions per problem.
* **[LiveCodeBench:](https://livecodebench.github.io)** Python programming problems sourced from platforms like CodeForces and LeetCode. We use the `v4_v5` subset of [`livecodebench/code_generation_lite`](https://huggingface.co/datasets/livecodebench/code_generation_lite), which corresponds to 268 problems.
We use `lighteval` to evaluate models on LiveCodeBench using the sampling parameters described [here](https://github.com/huggingface/open-r1?tab=readme-ov-file#livecodebench).

> [!NOTE]
> The OlympicCoder models were post-trained exclusively on C++ solutions generated by DeepSeek-R1. As a result, the performance on LiveCodeBench should be considered partially _out-of-domain_, since this benchmark expects models to output solutions in Python.

### IOI'24

![](./ioi-evals.png)

### LiveCodeBench

![](./lcb-evals.png)

## Usage

Here's how you can run the model using the `pipeline()` function from 🤗 Transformers:

```python
# pip install transformers
# pip install accelerate

import torch
from transformers import pipeline

pipe = pipeline("text-generation", model="open-r1/OlympicCoder-32B", torch_dtype=torch.bfloat16, device_map="auto")

# We use the tokenizer's chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating
messages = [
    {"role": "user", "content": "Write a python program to calculate the 10th Fibonacci number"},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=8000, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
#<|im_start|>user
#Write a python program to calculate the 10th fibonacci number<|im_end|>
#<|im_start|>assistant
#<think>Okay, I need to write a Python program that calculates the 10th Fibonacci number. Hmm, the Fibonacci sequence starts with 0 and 1. Each subsequent number is the sum of the two preceding ones. So the sequence goes: 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, and so on.
...
```

> [!IMPORTANT]
> To ensure that the model consistently outputs a long chain-of-thought, we have edited the chat template to prefill the first assistant turn with a `<think>` token. As a result, the outputs from this model will not show the opening `<think>` token if you use the model's `generate()` method. To apply reinforcement learning with a format reward, either prepend the `<think>` token to the model's completions or amend the chat template to remove the prefill (a short illustrative sketch of the prepend option appears at the end of this card). Check out our [blog post](https://huggingface.co/blog/open-r1/update-3#lesson-4-prefill-with-think-to-consistently-enable-long-cot) for more details.

## Training procedure

### Training hyper-parameters

The following hyperparameters were used during training on 16 H100 nodes:

- dataset: open-r1/codeforces-cots_decontaminated
- learning_rate: 4.0e-5
- train_batch_size: 1
- seed: 42
- packing: false
- distributed_type: fsdp
- num_devices: 128
- gradient_accumulation_steps: 1
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_min_lr
- min_lr_rate: 0.1
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 10.0
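Following up on the note above about the `<think>` prefill, here is a small hedged sketch (not part of the original card) of the "prepend the token" option before applying a format-style reward; the function and variable names are illustrative.

```python
def add_think_prefix(completion: str) -> str:
    """Re-attach the opening <think> tag that the chat template prefills,
    so format checks expecting a full <think>...</think> pair still pass."""
    if not completion.lstrip().startswith("<think>"):
        return "<think>" + completion
    return completion

# Illustrative usage on raw completions returned by generate():
completions = ["Okay, let's reason step by step ...</think>\nHere is the program."]
completions = [add_think_prefix(c) for c in completions]
```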
Mungert/Qwen2.5-VL-72B-Instruct-GGUF
Mungert
2025-06-15T19:43:19Z
6,745
11
transformers
[ "transformers", "gguf", "multimodal", "image-text-to-text", "en", "arxiv:2309.00071", "arxiv:2409.12191", "arxiv:2308.12966", "base_model:Qwen/Qwen2.5-VL-72B-Instruct", "base_model:quantized:Qwen/Qwen2.5-VL-72B-Instruct", "license:other", "endpoints_compatible", "region:us", "imatrix", "...
image-text-to-text
2025-03-29T21:50:25Z
---
license: other
license_name: qwen
license_link: https://huggingface.co/Qwen/Qwen2.5-VL-72B-Instruct/blob/main/LICENSE
language:
- en
pipeline_tag: image-text-to-text
tags:
- multimodal
library_name: transformers
base_model:
- Qwen/Qwen2.5-VL-72B-Instruct
---

# <span style="color: #7FFF7F;">Qwen2.5-VL-72B-Instruct GGUF Models</span>

## How to Use Qwen 2.5 VL Instruct with llama.cpp (latest as of 10th May 2025)

1. **Download the Qwen 2.5 VL gguf file**: https://huggingface.co/Mungert/Qwen2.5-VL-72B-Instruct-GGUF/tree/main
   Choose a gguf file without "mmproj" in the name.
   Example gguf file: https://huggingface.co/Mungert/Qwen2.5-VL-72B-Instruct-GGUF/resolve/main/Qwen2.5-VL-72B-Instruct-q8_0.gguf
   Copy this file to your chosen folder.
2. **Download the Qwen 2.5 VL mmproj file**: https://huggingface.co/Mungert/Qwen2.5-VL-72B-Instruct-GGUF/tree/main
   Choose a file with "mmproj" in the name.
   Example mmproj file: https://huggingface.co/Mungert/Qwen2.5-VL-72B-Instruct-GGUF/resolve/main/Qwen2.5-VL-72B-Instruct-mmproj-f16.gguf
   Copy this file to your chosen folder.
3. Copy images to the same folder as the gguf files, or alter the paths appropriately. In the example below the gguf files, images and llama-mtmd-cli are in the same folder.
   Example image: https://huggingface.co/Mungert/Qwen2.5-VL-72B-Instruct-GGUF/resolve/main/car-1.jpg
   Copy this file to your chosen folder.
4. **Run the CLI Tool**: From your chosen folder:

```bash
llama-mtmd-cli -m Qwen2.5-VL-72B-Instruct-q8_0.gguf --mmproj Qwen2.5-VL-72B-Instruct-mmproj-f16.gguf -p "Describe this image." --image ./car-1.jpg
```

## <span style="color: #7FFF7F;">Ultra-Low-Bit Quantization with IQ-DynamicGate (1-2 bit)</span>

Our latest quantization method introduces **precision-adaptive quantization** for ultra-low-bit models (1-2 bit), with benchmark-proven improvements on **Llama-3-8B**. This approach uses layer-specific strategies to preserve accuracy while maintaining extreme memory efficiency.
### **Benchmark Context** All tests conducted on **Llama-3-8B-Instruct** using: - Standard perplexity evaluation pipeline - 2048-token context window - Same prompt set across all quantizations ### **Method** - **Dynamic Precision Allocation**: - First/Last 25% of layers โ†’ IQ4_XS (selected layers) - Middle 50% โ†’ IQ2_XXS/IQ3_S (increase efficiency) - **Critical Component Protection**: - Embeddings/output layers use Q5_K - Reduces error propagation by 38% vs standard 1-2bit ### **Quantization Performance Comparison (Llama-3-8B)** | Quantization | Standard PPL | DynamicGate PPL | ฮ” PPL | Std Size | DG Size | ฮ” Size | Std Speed | DG Speed | |--------------|--------------|------------------|---------|----------|---------|--------|-----------|----------| | IQ2_XXS | 11.30 | 9.84 | -12.9% | 2.5G | 2.6G | +0.1G | 234s | 246s | | IQ2_XS | 11.72 | 11.63 | -0.8% | 2.7G | 2.8G | +0.1G | 242s | 246s | | IQ2_S | 14.31 | 9.02 | -36.9% | 2.7G | 2.9G | +0.2G | 238s | 244s | | IQ1_M | 27.46 | 15.41 | -43.9% | 2.2G | 2.5G | +0.3G | 206s | 212s | | IQ1_S | 53.07 | 32.00 | -39.7% | 2.1G | 2.4G | +0.3G | 184s | 209s | **Key**: - PPL = Perplexity (lower is better) - ฮ” PPL = Percentage change from standard to DynamicGate - Speed = Inference time (CPU avx2, 2048 token context) - Size differences reflect mixed quantization overhead **Key Improvements:** - ๐Ÿ”ฅ **IQ1_M** shows massive 43.9% perplexity reduction (27.46 โ†’ 15.41) - ๐Ÿš€ **IQ2_S** cuts perplexity by 36.9% while adding only 0.2GB - โšก **IQ1_S** maintains 39.7% better accuracy despite 1-bit quantization **Tradeoffs:** - All variants have modest size increases (0.1-0.3GB) - Inference speeds remain comparable (<5% difference) ### **When to Use These Models** ๐Ÿ“Œ **Fitting models into GPU VRAM** โœ” **Memory-constrained deployments** โœ” **Cpu and Edge Devices** where 1-2bit errors can be tolerated โœ” **Research** into ultra-low-bit quantization ## **Choosing the Right Model Format** Selecting the correct model format depends on your **hardware capabilities** and **memory constraints**. ### **BF16 (Brain Float 16) โ€“ Use if BF16 acceleration is available** - A 16-bit floating-point format designed for **faster computation** while retaining good precision. - Provides **similar dynamic range** as FP32 but with **lower memory usage**. - Recommended if your hardware supports **BF16 acceleration** (check your device's specs). - Ideal for **high-performance inference** with **reduced memory footprint** compared to FP32. ๐Ÿ“Œ **Use BF16 if:** โœ” Your hardware has native **BF16 support** (e.g., newer GPUs, TPUs). โœ” You want **higher precision** while saving memory. โœ” You plan to **requantize** the model into another format. ๐Ÿ“Œ **Avoid BF16 if:** โŒ Your hardware does **not** support BF16 (it may fall back to FP32 and run slower). โŒ You need compatibility with older devices that lack BF16 optimization. --- ### **F16 (Float 16) โ€“ More widely supported than BF16** - A 16-bit floating-point **high precision** but with less of range of values than BF16. - Works on most devices with **FP16 acceleration support** (including many GPUs and some CPUs). - Slightly lower numerical precision than BF16 but generally sufficient for inference. ๐Ÿ“Œ **Use F16 if:** โœ” Your hardware supports **FP16** but **not BF16**. โœ” You need a **balance between speed, memory usage, and accuracy**. โœ” You are running on a **GPU** or another device optimized for FP16 computations. 
๐Ÿ“Œ **Avoid F16 if:** โŒ Your device lacks **native FP16 support** (it may run slower than expected). โŒ You have memory limitations. --- ### **Quantized Models (Q4_K, Q6_K, Q8, etc.) โ€“ For CPU & Low-VRAM Inference** Quantization reduces model size and memory usage while maintaining as much accuracy as possible. - **Lower-bit models (Q4_K)** โ†’ **Best for minimal memory usage**, may have lower precision. - **Higher-bit models (Q6_K, Q8_0)** โ†’ **Better accuracy**, requires more memory. ๐Ÿ“Œ **Use Quantized Models if:** โœ” You are running inference on a **CPU** and need an optimized model. โœ” Your device has **low VRAM** and cannot load full-precision models. โœ” You want to reduce **memory footprint** while keeping reasonable accuracy. ๐Ÿ“Œ **Avoid Quantized Models if:** โŒ You need **maximum accuracy** (full-precision models are better for this). โŒ Your hardware has enough VRAM for higher-precision formats (BF16/F16). --- ### **Very Low-Bit Quantization (IQ3_XS, IQ3_S, IQ3_M, Q4_K, Q4_0)** These models are optimized for **extreme memory efficiency**, making them ideal for **low-power devices** or **large-scale deployments** where memory is a critical constraint. - **IQ3_XS**: Ultra-low-bit quantization (3-bit) with **extreme memory efficiency**. - **Use case**: Best for **ultra-low-memory devices** where even Q4_K is too large. - **Trade-off**: Lower accuracy compared to higher-bit quantizations. - **IQ3_S**: Small block size for **maximum memory efficiency**. - **Use case**: Best for **low-memory devices** where **IQ3_XS** is too aggressive. - **IQ3_M**: Medium block size for better accuracy than **IQ3_S**. - **Use case**: Suitable for **low-memory devices** where **IQ3_S** is too limiting. - **Q4_K**: 4-bit quantization with **block-wise optimization** for better accuracy. - **Use case**: Best for **low-memory devices** where **Q6_K** is too large. - **Q4_0**: Pure 4-bit quantization, optimized for **ARM devices**. - **Use case**: Best for **ARM-based devices** or **low-memory environments**. --- ### **Summary Table: Model Format Selection** | Model Format | Precision | Memory Usage | Device Requirements | Best Use Case | |--------------|------------|---------------|----------------------|---------------| | **BF16** | Highest | High | BF16-supported GPU/CPUs | High-speed inference with reduced memory | | **F16** | High | High | FP16-supported devices | GPU inference when BF16 isn't available | | **Q4_K** | Medium Low | Low | CPU or Low-VRAM devices | Best for memory-constrained environments | | **Q6_K** | Medium | Moderate | CPU with more memory | Better accuracy while still being quantized | | **Q8_0** | High | Moderate | CPU or GPU with enough VRAM | Best accuracy among quantized models | | **IQ3_XS** | Very Low | Very Low | Ultra-low-memory devices | Extreme memory efficiency and low accuracy | | **Q4_0** | Low | Low | ARM or low-memory devices | llama.cpp can optimize for ARM devices | --- ## **Included Files & Details** ### `Qwen2.5-VL-72B-Instruct-bf16.gguf` - Model weights preserved in **BF16**. - Use this if you want to **requantize** the model into a different format. - Best if your device supports **BF16 acceleration**. ### `Qwen2.5-VL-72B-Instruct-f16.gguf` - Model weights stored in **F16**. - Use if your device supports **FP16**, especially if BF16 is not available. ### `Qwen2.5-VL-72B-Instruct-bf16-q8_0.gguf` - **Output & embeddings** remain in **BF16**. - All other layers quantized to **Q8_0**. 
- Use if your device supports **BF16** and you want a quantized version. ### `Qwen2.5-VL-72B-Instruct-f16-q8_0.gguf` - **Output & embeddings** remain in **F16**. - All other layers quantized to **Q8_0**. ### `Qwen2.5-VL-72B-Instruct-q4_k.gguf` - **Output & embeddings** quantized to **Q8_0**. - All other layers quantized to **Q4_K**. - Good for **CPU inference** with limited memory. ### `Qwen2.5-VL-72B-Instruct-q4_k_s.gguf` - Smallest **Q4_K** variant, using less memory at the cost of accuracy. - Best for **very low-memory setups**. ### `Qwen2.5-VL-72B-Instruct-q6_k.gguf` - **Output & embeddings** quantized to **Q8_0**. - All other layers quantized to **Q6_K** . ### `Qwen2.5-VL-72B-Instruct-q8_0.gguf` - Fully **Q8** quantized model for better accuracy. - Requires **more memory** but offers higher precision. ### `Qwen2.5-VL-72B-Instruct-iq3_xs.gguf` - **IQ3_XS** quantization, optimized for **extreme memory efficiency**. - Best for **ultra-low-memory devices**. ### `Qwen2.5-VL-72B-Instruct-iq3_m.gguf` - **IQ3_M** quantization, offering a **medium block size** for better accuracy. - Suitable for **low-memory devices**. ### `Qwen2.5-VL-72B-Instruct-q4_0.gguf` - Pure **Q4_0** quantization, optimized for **ARM devices**. - Best for **low-memory environments**. - Prefer IQ4_NL for better accuracy. # <span id="testllm" style="color: #7F7FFF;">๐Ÿš€ If you find these models useful</span> Please click like โค . Also I'd really appreciate it if you could test my Network Monitor Assistant at ๐Ÿ‘‰ [Network Monitor Assitant](https://readyforquantum.com). ๐Ÿ’ฌ Click the **chat icon** (bottom right of the main and dashboard pages) . Choose a LLM; toggle between the LLM Types TurboLLM -> FreeLLM -> TestLLM. ### What I'm Testing I'm experimenting with **function calling** against my network monitoring service. Using small open source models. I am into the question "How small can it go and still function". ๐ŸŸก **TestLLM** โ€“ Runs the current testing model using llama.cpp on 6 threads of a Cpu VM (Should take about 15s to load. Inference speed is quite slow and it only processes one user prompt at a timeโ€”still working on scaling!). If you're curious, I'd be happy to share how it works! . ### The other Available AI Assistants ๐ŸŸข **TurboLLM** โ€“ Uses **gpt-4o-mini** Fast! . Note: tokens are limited since OpenAI models are pricey, but you can [Login](https://readyforquantum.com) or [Download](https://readyforquantum.com/download/?utm_source=huggingface&utm_medium=referral&utm_campaign=huggingface_repo_readme) the Quantum Network Monitor agent to get more tokens, Alternatively use the TestLLM . ๐Ÿ”ต **HugLLM** โ€“ Runs **open-source Hugging Face models** Fast, Runs small models (โ‰ˆ8B) hence lower quality, Get 2x more tokens (subject to Hugging Face API availability) ### Final word I fund the servers to create the models files, run the Quantum Network Monitor Service and Pay for Inference from Novita and OpenAI all from my own pocket. All of the code for creating the models and the work I have done with Quantum Network Monitor is [open source](https://github.com/Mungert69). Feel free to use what you find useful. Please support my work and consider [buying me a coffee](https://www.buymeacoffee.com/mahadeva) . This will help me pay for the services and increase the token limits for everyone. 
Thank you :) # <span style="color: #7FFF7F;">Mungert/Qwen2.5-VL-72B-Instruct-GGUF GGUF Models</span> ## <span style="color: #7FFF7F;">Ultra-Low-Bit Quantization with IQ-DynamicGate (1-2 bit)</span> Our latest quantization method introduces **precision-adaptive quantization** for ultra-low-bit models (1-2 bit), with benchmark-proven improvements on **Llama-3-8B**. This approach uses layer-specific strategies to preserve accuracy while maintaining extreme memory efficiency. ### **Benchmark Context** All tests conducted on **Llama-3-8B-Instruct** using: - Standard perplexity evaluation pipeline - 2048-token context window - Same prompt set across all quantizations ### **Method** - **Dynamic Precision Allocation**: - First/Last 25% of layers โ†’ IQ4_XS (selected layers) - Middle 50% โ†’ IQ2_XXS/IQ3_S (increase efficiency) - **Critical Component Protection**: - Embeddings/output layers use Q5_K - Reduces error propagation by 38% vs standard 1-2bit ### **Quantization Performance Comparison (Llama-3-8B)** | Quantization | Standard PPL | DynamicGate PPL | ฮ” PPL | Std Size | DG Size | ฮ” Size | Std Speed | DG Speed | |--------------|--------------|------------------|---------|----------|---------|--------|-----------|----------| | IQ2_XXS | 11.30 | 9.84 | -12.9% | 2.5G | 2.6G | +0.1G | 234s | 246s | | IQ2_XS | 11.72 | 11.63 | -0.8% | 2.7G | 2.8G | +0.1G | 242s | 246s | | IQ2_S | 14.31 | 9.02 | -36.9% | 2.7G | 2.9G | +0.2G | 238s | 244s | | IQ1_M | 27.46 | 15.41 | -43.9% | 2.2G | 2.5G | +0.3G | 206s | 212s | | IQ1_S | 53.07 | 32.00 | -39.7% | 2.1G | 2.4G | +0.3G | 184s | 209s | **Key**: - PPL = Perplexity (lower is better) - ฮ” PPL = Percentage change from standard to DynamicGate - Speed = Inference time (CPU avx2, 2048 token context) - Size differences reflect mixed quantization overhead **Key Improvements:** - ๐Ÿ”ฅ **IQ1_M** shows massive 43.9% perplexity reduction (27.46 โ†’ 15.41) - ๐Ÿš€ **IQ2_S** cuts perplexity by 36.9% while adding only 0.2GB - โšก **IQ1_S** maintains 39.7% better accuracy despite 1-bit quantization **Tradeoffs:** - All variants have modest size increases (0.1-0.3GB) - Inference speeds remain comparable (<5% difference) ### **When to Use These Models** ๐Ÿ“Œ **Fitting models into GPU VRAM** โœ” **Memory-constrained deployments** โœ” **Cpu and Edge Devices** where 1-2bit errors can be tolerated โœ” **Research** into ultra-low-bit quantization ## **Choosing the Right Model Format** Selecting the correct model format depends on your **hardware capabilities** and **memory constraints**. ### **BF16 (Brain Float 16) โ€“ Use if BF16 acceleration is available** - A 16-bit floating-point format designed for **faster computation** while retaining good precision. - Provides **similar dynamic range** as FP32 but with **lower memory usage**. - Recommended if your hardware supports **BF16 acceleration** (check your device's specs). - Ideal for **high-performance inference** with **reduced memory footprint** compared to FP32. ๐Ÿ“Œ **Use BF16 if:** โœ” Your hardware has native **BF16 support** (e.g., newer GPUs, TPUs). โœ” You want **higher precision** while saving memory. โœ” You plan to **requantize** the model into another format. ๐Ÿ“Œ **Avoid BF16 if:** โŒ Your hardware does **not** support BF16 (it may fall back to FP32 and run slower). โŒ You need compatibility with older devices that lack BF16 optimization. --- ### **F16 (Float 16) โ€“ More widely supported than BF16** - A 16-bit floating-point **high precision** but with less of range of values than BF16. 
- Works on most devices with **FP16 acceleration support** (including many GPUs and some CPUs). - Slightly lower numerical precision than BF16 but generally sufficient for inference. ๐Ÿ“Œ **Use F16 if:** โœ” Your hardware supports **FP16** but **not BF16**. โœ” You need a **balance between speed, memory usage, and accuracy**. โœ” You are running on a **GPU** or another device optimized for FP16 computations. ๐Ÿ“Œ **Avoid F16 if:** โŒ Your device lacks **native FP16 support** (it may run slower than expected). โŒ You have memory limitations. --- ### **Quantized Models (Q4_K, Q6_K, Q8, etc.) โ€“ For CPU & Low-VRAM Inference** Quantization reduces model size and memory usage while maintaining as much accuracy as possible. - **Lower-bit models (Q4_K)** โ†’ **Best for minimal memory usage**, may have lower precision. - **Higher-bit models (Q6_K, Q8_0)** โ†’ **Better accuracy**, requires more memory. ๐Ÿ“Œ **Use Quantized Models if:** โœ” You are running inference on a **CPU** and need an optimized model. โœ” Your device has **low VRAM** and cannot load full-precision models. โœ” You want to reduce **memory footprint** while keeping reasonable accuracy. ๐Ÿ“Œ **Avoid Quantized Models if:** โŒ You need **maximum accuracy** (full-precision models are better for this). โŒ Your hardware has enough VRAM for higher-precision formats (BF16/F16). --- ### **Very Low-Bit Quantization (IQ3_XS, IQ3_S, IQ3_M, Q4_K, Q4_0)** These models are optimized for **extreme memory efficiency**, making them ideal for **low-power devices** or **large-scale deployments** where memory is a critical constraint. - **IQ3_XS**: Ultra-low-bit quantization (3-bit) with **extreme memory efficiency**. - **Use case**: Best for **ultra-low-memory devices** where even Q4_K is too large. - **Trade-off**: Lower accuracy compared to higher-bit quantizations. - **IQ3_S**: Small block size for **maximum memory efficiency**. - **Use case**: Best for **low-memory devices** where **IQ3_XS** is too aggressive. - **IQ3_M**: Medium block size for better accuracy than **IQ3_S**. - **Use case**: Suitable for **low-memory devices** where **IQ3_S** is too limiting. - **Q4_K**: 4-bit quantization with **block-wise optimization** for better accuracy. - **Use case**: Best for **low-memory devices** where **Q6_K** is too large. - **Q4_0**: Pure 4-bit quantization, optimized for **ARM devices**. - **Use case**: Best for **ARM-based devices** or **low-memory environments**. --- ### **Summary Table: Model Format Selection** | Model Format | Precision | Memory Usage | Device Requirements | Best Use Case | |--------------|------------|---------------|----------------------|---------------| | **BF16** | Highest | High | BF16-supported GPU/CPUs | High-speed inference with reduced memory | | **F16** | High | High | FP16-supported devices | GPU inference when BF16 isn't available | | **Q4_K** | Medium Low | Low | CPU or Low-VRAM devices | Best for memory-constrained environments | | **Q6_K** | Medium | Moderate | CPU with more memory | Better accuracy while still being quantized | | **Q8_0** | High | Moderate | CPU or GPU with enough VRAM | Best accuracy among quantized models | | **IQ3_XS** | Very Low | Very Low | Ultra-low-memory devices | Extreme memory efficiency and low accuracy | | **Q4_0** | Low | Low | ARM or low-memory devices | llama.cpp can optimize for ARM devices | --- ## **Included Files & Details** ### `Mungert/Qwen2.5-VL-72B-Instruct-GGUF-bf16.gguf` - Model weights preserved in **BF16**. 
- Use this if you want to **requantize** the model into a different format. - Best if your device supports **BF16 acceleration**. ### `Mungert/Qwen2.5-VL-72B-Instruct-GGUF-f16.gguf` - Model weights stored in **F16**. - Use if your device supports **FP16**, especially if BF16 is not available. ### `Mungert/Qwen2.5-VL-72B-Instruct-GGUF-bf16-q8_0.gguf` - **Output & embeddings** remain in **BF16**. - All other layers quantized to **Q8_0**. - Use if your device supports **BF16** and you want a quantized version. ### `Mungert/Qwen2.5-VL-72B-Instruct-GGUF-f16-q8_0.gguf` - **Output & embeddings** remain in **F16**. - All other layers quantized to **Q8_0**. ### `Mungert/Qwen2.5-VL-72B-Instruct-GGUF-q4_k.gguf` - **Output & embeddings** quantized to **Q8_0**. - All other layers quantized to **Q4_K**. - Good for **CPU inference** with limited memory. ### `Mungert/Qwen2.5-VL-72B-Instruct-GGUF-q4_k_s.gguf` - Smallest **Q4_K** variant, using less memory at the cost of accuracy. - Best for **very low-memory setups**. ### `Mungert/Qwen2.5-VL-72B-Instruct-GGUF-q6_k.gguf` - **Output & embeddings** quantized to **Q8_0**. - All other layers quantized to **Q6_K** . ### `Mungert/Qwen2.5-VL-72B-Instruct-GGUF-q8_0.gguf` - Fully **Q8** quantized model for better accuracy. - Requires **more memory** but offers higher precision. ### `Mungert/Qwen2.5-VL-72B-Instruct-GGUF-iq3_xs.gguf` - **IQ3_XS** quantization, optimized for **extreme memory efficiency**. - Best for **ultra-low-memory devices**. ### `Mungert/Qwen2.5-VL-72B-Instruct-GGUF-iq3_m.gguf` - **IQ3_M** quantization, offering a **medium block size** for better accuracy. - Suitable for **low-memory devices**. ### `Mungert/Qwen2.5-VL-72B-Instruct-GGUF-q4_0.gguf` - Pure **Q4_0** quantization, optimized for **ARM devices**. - Best for **low-memory environments**. - Prefer IQ4_NL for better accuracy. # <span id="testllm" style="color: #7F7FFF;">๐Ÿš€ If you find these models useful</span> Please click like โค . Also I'd really appreciate it if you could test my Network Monitor Assistant at ๐Ÿ‘‰ [Network Monitor Assitant](https://readyforquantum.com). ๐Ÿ’ฌ Click the **chat icon** (bottom right of the main and dashboard pages) . Choose a LLM; toggle between the LLM Types TurboLLM -> FreeLLM -> TestLLM. ### What I'm Testing I'm experimenting with **function calling** against my network monitoring service. Using small open source models. I am into the question "How small can it go and still function". ๐ŸŸก **TestLLM** โ€“ Runs the current testing model using llama.cpp on 6 threads of a Cpu VM (Should take about 15s to load. Inference speed is quite slow and it only processes one user prompt at a timeโ€”still working on scaling!). If you're curious, I'd be happy to share how it works! . ### The other Available AI Assistants ๐ŸŸข **TurboLLM** โ€“ Uses **gpt-4o-mini** Fast! . Note: tokens are limited since OpenAI models are pricey, but you can [Login](https://readyforquantum.com) or [Download](https://readyforquantum.com/download/?utm_source=huggingface&utm_medium=referral&utm_campaign=huggingface_repo_readme) the Quantum Network Monitor agent to get more tokens, Alternatively use the TestLLM . ๐Ÿ”ต **HugLLM** โ€“ Runs **open-source Hugging Face models** Fast, Runs small models (โ‰ˆ8B) hence lower quality, Get 2x more tokens (subject to Hugging Face API availability) ### Final word I fund the servers to create the models files, run the Quantum Network Monitor Service and Pay for Inference from Novita and OpenAI all from my own pocket. 
All of the code for creating the models and the work I have done with Quantum Network Monitor is [open source](https://github.com/Mungert69). Feel free to use what you find useful. Please support my work and consider [buying me a coffee](https://www.buymeacoffee.com/mahadeva) . This will help me pay for the services and increase the token limits for everyone. Thank you :) # Qwen2.5-VL-72B-Instruct <a href="https://chat.qwenlm.ai/" target="_blank" style="margin: 2px;"> <img alt="Chat" src="https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5" style="display: inline-block; vertical-align: middle;"/> </a> ## Introduction In the past five months since Qwen2-VLโ€™s release, numerous developers have built new models on the Qwen2-VL vision-language models, providing us with valuable feedback. During this period, we focused on building more useful vision-language models. Today, we are excited to introduce the latest addition to the Qwen family: Qwen2.5-VL. #### Key Enhancements: * **Understand things visually**: Qwen2.5-VL is not only proficient in recognizing common objects such as flowers, birds, fish, and insects, but it is highly capable of analyzing texts, charts, icons, graphics, and layouts within images. * **Being agentic**: Qwen2.5-VL directly plays as a visual agent that can reason and dynamically direct tools, which is capable of computer use and phone use. * **Understanding long videos and capturing events**: Qwen2.5-VL can comprehend videos of over 1 hour, and this time it has a new ability of cpaturing event by pinpointing the relevant video segments. * **Capable of visual localization in different formats**: Qwen2.5-VL can accurately localize objects in an image by generating bounding boxes or points, and it can provide stable JSON outputs for coordinates and attributes. * **Generating structured outputs**: for data like scans of invoices, forms, tables, etc. Qwen2.5-VL supports structured outputs of their contents, benefiting usages in finance, commerce, etc. #### Model Architecture Updates: * **Dynamic Resolution and Frame Rate Training for Video Understanding**: We extend dynamic resolution to the temporal dimension by adopting dynamic FPS sampling, enabling the model to comprehend videos at various sampling rates. Accordingly, we update mRoPE in the time dimension with IDs and absolute time alignment, enabling the model to learn temporal sequence and speed, and ultimately acquire the ability to pinpoint specific moments. <p align="center"> <img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5-VL/qwen2.5vl_arc.jpeg" width="80%"/> <p> * **Streamlined and Efficient Vision Encoder** We enhance both training and inference speeds by strategically implementing window attention into the ViT. The ViT architecture is further optimized with SwiGLU and RMSNorm, aligning it with the structure of the Qwen2.5 LLM. We have three models with 3, 7 and 72 billion parameters. This repo contains the instruction-tuned 72B Qwen2.5-VL model. For more information, visit our [Blog](https://qwenlm.github.io/blog/qwen2.5-vl/) and [GitHub](https://github.com/QwenLM/Qwen2.5-VL). 
## Evaluation ### Image benchmark | Benchmarks | GPT4o | Claude3.5 Sonnet | Gemini-2-flash | InternVL2.5-78B | Qwen2-VL-72B | Qwen2.5-VL-72B | |-----------------------|-----------|-------------------|-----------------|-----------------|--------------|----------------| | MMMU<sub>val</sub> | 70.3 | 70.4 | 70.7 | 70.1 | 64.5 | 70.2 | | MMMU_Pro | 54.5 | 54.7 | 57.0 | 48.6 | 46.2 | 51.1 | | MathVista_MINI | 63.8 | 65.4 | 73.1 | 76.6 | 70.5 | 74.8 | | MathVision_FULL | 30.4 | 38.3 | 41.3 | 32.2 | 25.9 | 38.1 | | Hallusion Bench | 55.0 | 55.16 | | 57.4 | 58.1 | 55.16 | | MMBench_DEV_EN_V11 | 82.1 | 83.4 | 83.0 | 88.5 | 86.6 | 88 | | AI2D_TEST | 84.6 | 81.2 | | 89.1 | 88.1 | 88.4 | | ChartQA_TEST | 86.7 | 90.8 | 85.2 | 88.3 | 88.3 | 89.5 | | DocVQA_VAL | 91.1 | 95.2 | 92.1 | 96.5 | 96.1 | 96.4 | | MMStar | 64.7 | 65.1 | 69.4 | 69.5 | 68.3 | 70.8 | | MMVet_turbo | 69.1 | 70.1 | | 72.3 | 74.0 | 76.19 | | OCRBench | 736 | 788 | | 854 | 877 | 885 | | OCRBench-V2(en/zh) | 46.5/32.3 | 45.2/39.6 | 51.9/43.1 | 45/46.2 | 47.8/46.1 | 61.5/63.7 | | CC-OCR | 66.6 | 62.7 | 73.0 | 64.7 | 68.7 |79.8 | ### Video benchmark | Benchmarks | GPT4o | Gemini-1.5-Pro | InternVL2.5-78B | Qwen2VL-72B | Qwen2.5VL-72B | |---------------------|-------|----------------|-----------------|-------------|---------------| | VideoMME w/o sub. | 71.9 | 75.0 | 72.1 | 71.2 | 73.3 | | VideoMME w sub. | 77.2 | 81.3 | 74.0 | 77.8 | 79.1 | | MVBench | 64.6 | 60.5 | 76.4 | 73.6 | 70.4 | | MMBench-Video | 1.63 | 1.30 | 1.97 | 1.70 | 2.02 | | LVBench | 30.8 | 33.1 | - | 41.3 | 47.3 | | EgoSchema | 72.2 | 71.2 | - | 77.9 | 76.2 | | PerceptionTest_test | - | - | - | 68.0 | 73.2 | | MLVU_M-Avg_dev | 64.6 | - | 75.7 | | 74.6 | | TempCompass_overall | 73.8 | - | - | | 74.8 | ### Agent benchmark | Benchmarks | GPT4o | Gemini 2.0 | Claude | Aguvis-72B | Qwen2VL-72B | Qwen2.5VL-72B | |-------------------------|-------------|------------|--------|------------|-------------|---------------| | ScreenSpot | 18.1 | 84.0 | 83.0 | | | 87.1 | | ScreenSpot Pro | | | 17.1 | | 1.6 | 43.6 | | AITZ_EM | 35.3 | | | | 72.8 | 83.2 | | Android Control High_EM | | | | 66.4 | 59.1 | 67.36 | | Android Control Low_EM | | | | 84.4 | 59.2 | 93.7 | | AndroidWorld_SR | 34.5% (SoM) | | 27.9% | 26.1% | | 35% | | MobileMiniWob++_SR | | | | 66% | | 68% | | OSWorld | | | 14.90 | 10.26 | | 8.83 | ## Requirements The code of Qwen2.5-VL has been in the latest Hugging face transformers and we advise you to build from source with command: ``` pip install git+https://github.com/huggingface/transformers accelerate ``` or you might encounter the following error: ``` KeyError: 'qwen2_5_vl' ``` ## Quickstart Below, we provide simple examples to show how to use Qwen2.5-VL with ๐Ÿค– ModelScope and ๐Ÿค— Transformers. The code of Qwen2.5-VL has been in the latest Hugging face transformers and we advise you to build from source with command: ``` pip install git+https://github.com/huggingface/transformers accelerate ``` or you might encounter the following error: ``` KeyError: 'qwen2_5_vl' ``` We offer a toolkit to help you handle various types of visual input more conveniently, as if you were using an API. This includes base64, URLs, and interleaved images and videos. You can install it using the following command: ```bash # It's highly recommanded to use `[decord]` feature for faster video loading. pip install qwen-vl-utils[decord]==0.0.8 ``` If you are not using Linux, you might not be able to install `decord` from PyPI. 
In that case, you can use `pip install qwen-vl-utils` which will fall back to using torchvision for video processing. However, you can still [install decord from source](https://github.com/dmlc/decord?tab=readme-ov-file#install-from-source) to get decord used when loading video. ### Using ๐Ÿค— Transformers to Chat Here we show a code snippet to show you how to use the chat model with `transformers` and `qwen_vl_utils`: ```python from transformers import Qwen2_5_VLForConditionalGeneration, AutoTokenizer, AutoProcessor from qwen_vl_utils import process_vision_info # default: Load the model on the available device(s) model = Qwen2_5_VLForConditionalGeneration.from_pretrained( "Qwen/Qwen2.5-VL-72B-Instruct", torch_dtype="auto", device_map="auto" ) # We recommend enabling flash_attention_2 for better acceleration and memory saving, especially in multi-image and video scenarios. # model = Qwen2_5_VLForConditionalGeneration.from_pretrained( # "Qwen/Qwen2.5-VL-72B-Instruct", # torch_dtype=torch.bfloat16, # attn_implementation="flash_attention_2", # device_map="auto", # ) # default processer processor = AutoProcessor.from_pretrained("Qwen/Qwen2.5-VL-72B-Instruct") # The default range for the number of visual tokens per image in the model is 4-16384. # You can set min_pixels and max_pixels according to your needs, such as a token range of 256-1280, to balance performance and cost. # min_pixels = 256*28*28 # max_pixels = 1280*28*28 # processor = AutoProcessor.from_pretrained("Qwen/Qwen2.5-VL-72B-Instruct", min_pixels=min_pixels, max_pixels=max_pixels) messages = [ { "role": "user", "content": [ { "type": "image", "image": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg", }, {"type": "text", "text": "Describe this image."}, ], } ] # Preparation for inference text = processor.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) image_inputs, video_inputs = process_vision_info(messages) inputs = processor( text=[text], images=image_inputs, videos=video_inputs, padding=True, return_tensors="pt", ) inputs = inputs.to("cuda") # Inference: Generation of the output generated_ids = model.generate(**inputs, max_new_tokens=128) generated_ids_trimmed = [ out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids) ] output_text = processor.batch_decode( generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False ) print(output_text) ``` <details> <summary>Multi image inference</summary> ```python # Messages containing multiple images and a text query messages = [ { "role": "user", "content": [ {"type": "image", "image": "file:///path/to/image1.jpg"}, {"type": "image", "image": "file:///path/to/image2.jpg"}, {"type": "text", "text": "Identify the similarities between these images."}, ], } ] # Preparation for inference text = processor.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) image_inputs, video_inputs = process_vision_info(messages) inputs = processor( text=[text], images=image_inputs, videos=video_inputs, padding=True, return_tensors="pt", ) inputs = inputs.to("cuda") # Inference generated_ids = model.generate(**inputs, max_new_tokens=128) generated_ids_trimmed = [ out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids) ] output_text = processor.batch_decode( generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False ) print(output_text) ``` </details> <details> <summary>Video inference</summary> ```python # Messages containing a 
images list as a video and a text query messages = [ { "role": "user", "content": [ { "type": "video", "video": [ "file:///path/to/frame1.jpg", "file:///path/to/frame2.jpg", "file:///path/to/frame3.jpg", "file:///path/to/frame4.jpg", ], }, {"type": "text", "text": "Describe this video."}, ], } ] # Messages containing a local video path and a text query messages = [ { "role": "user", "content": [ { "type": "video", "video": "file:///path/to/video1.mp4", "max_pixels": 360 * 420, "fps": 1.0, }, {"type": "text", "text": "Describe this video."}, ], } ] # Messages containing a video url and a text query messages = [ { "role": "user", "content": [ { "type": "video", "video": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2-VL/space_woaudio.mp4", }, {"type": "text", "text": "Describe this video."}, ], } ] #In Qwen 2.5 VL, frame rate information is also input into the model to align with absolute time. # Preparation for inference text = processor.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) image_inputs, video_inputs, video_kwargs = process_vision_info(messages, return_video_kwargs=True) inputs = processor( text=[text], images=image_inputs, videos=video_inputs, fps=fps, padding=True, return_tensors="pt", **video_kwargs, ) inputs = inputs.to("cuda") # Inference generated_ids = model.generate(**inputs, max_new_tokens=128) generated_ids_trimmed = [ out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids) ] output_text = processor.batch_decode( generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False ) print(output_text) ``` Video URL compatibility largely depends on the third-party library version. The details are in the table below. change the backend by `FORCE_QWENVL_VIDEO_READER=torchvision` or `FORCE_QWENVL_VIDEO_READER=decord` if you prefer not to use the default one. | Backend | HTTP | HTTPS | |-------------|------|-------| | torchvision >= 0.19.0 | โœ… | โœ… | | torchvision < 0.19.0 | โŒ | โŒ | | decord | โœ… | โŒ | </details> <details> <summary>Batch inference</summary> ```python # Sample messages for batch inference messages1 = [ { "role": "user", "content": [ {"type": "image", "image": "file:///path/to/image1.jpg"}, {"type": "image", "image": "file:///path/to/image2.jpg"}, {"type": "text", "text": "What are the common elements in these pictures?"}, ], } ] messages2 = [ {"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": "Who are you?"}, ] # Combine messages for batch processing messages = [messages1, messages2] # Preparation for batch inference texts = [ processor.apply_chat_template(msg, tokenize=False, add_generation_prompt=True) for msg in messages ] image_inputs, video_inputs = process_vision_info(messages) inputs = processor( text=texts, images=image_inputs, videos=video_inputs, padding=True, return_tensors="pt", ) inputs = inputs.to("cuda") # Batch Inference generated_ids = model.generate(**inputs, max_new_tokens=128) generated_ids_trimmed = [ out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids) ] output_texts = processor.batch_decode( generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False ) print(output_texts) ``` </details> ### ๐Ÿค– ModelScope We strongly advise users especially those in mainland China to use ModelScope. `snapshot_download` can help you solve issues concerning downloading checkpoints. ### More Usage Tips For input images, we support local files, base64, and URLs. 
For videos, we currently only support local files. ```python # You can directly insert a local file path, a URL, or a base64-encoded image into the position where you want in the text. ## Local file path messages = [ { "role": "user", "content": [ {"type": "image", "image": "file:///path/to/your/image.jpg"}, {"type": "text", "text": "Describe this image."}, ], } ] ## Image URL messages = [ { "role": "user", "content": [ {"type": "image", "image": "http://path/to/your/image.jpg"}, {"type": "text", "text": "Describe this image."}, ], } ] ## Base64 encoded image messages = [ { "role": "user", "content": [ {"type": "image", "image": "data:image;base64,/9j/..."}, {"type": "text", "text": "Describe this image."}, ], } ] ``` #### Image Resolution for performance boost The model supports a wide range of resolution inputs. By default, it uses the native resolution for input, but higher resolutions can enhance performance at the cost of more computation. Users can set the minimum and maximum number of pixels to achieve an optimal configuration for their needs, such as a token count range of 256-1280, to balance speed and memory usage. ```python min_pixels = 256 * 28 * 28 max_pixels = 1280 * 28 * 28 processor = AutoProcessor.from_pretrained( "Qwen/Qwen2.5-VL-72B-Instruct", min_pixels=min_pixels, max_pixels=max_pixels ) ``` Besides, We provide two methods for fine-grained control over the image size input to the model: 1. Define min_pixels and max_pixels: Images will be resized to maintain their aspect ratio within the range of min_pixels and max_pixels. 2. Specify exact dimensions: Directly set `resized_height` and `resized_width`. These values will be rounded to the nearest multiple of 28. ```python # min_pixels and max_pixels messages = [ { "role": "user", "content": [ { "type": "image", "image": "file:///path/to/your/image.jpg", "resized_height": 280, "resized_width": 420, }, {"type": "text", "text": "Describe this image."}, ], } ] # resized_height and resized_width messages = [ { "role": "user", "content": [ { "type": "image", "image": "file:///path/to/your/image.jpg", "min_pixels": 50176, "max_pixels": 50176, }, {"type": "text", "text": "Describe this image."}, ], } ] ``` ### Processing Long Texts The current `config.json` is set for context length up to 32,768 tokens. To handle extensive inputs exceeding 32,768 tokens, we utilize [YaRN](https://arxiv.org/abs/2309.00071), a technique for enhancing model length extrapolation, ensuring optimal performance on lengthy texts. For supported frameworks, you could add the following to `config.json` to enable YaRN: ```json { ..., "type": "yarn", "mrope_section": [ 16, 24, 24 ], "factor": 4, "original_max_position_embeddings": 32768 } ``` However, it should be noted that this method has a significant impact on the performance of temporal and spatial localization tasks, and is therefore not recommended for use. At the same time, for long video inputs, since MRoPE itself is more economical with ids, the max_position_embeddings can be directly modified to a larger value, such as 64k. ## Citation If you find our work helpful, feel free to give us a cite. 
``` @misc{qwen2.5-VL, title = {Qwen2.5-VL}, url = {https://qwenlm.github.io/blog/qwen2.5-vl/}, author = {Qwen Team}, month = {January}, year = {2025} } @article{Qwen2VL, title={Qwen2-VL: Enhancing Vision-Language Model's Perception of the World at Any Resolution}, author={Wang, Peng and Bai, Shuai and Tan, Sinan and Wang, Shijie and Fan, Zhihao and Bai, Jinze and Chen, Keqin and Liu, Xuejing and Wang, Jialin and Ge, Wenbin and Fan, Yang and Dang, Kai and Du, Mengfei and Ren, Xuancheng and Men, Rui and Liu, Dayiheng and Zhou, Chang and Zhou, Jingren and Lin, Junyang}, journal={arXiv preprint arXiv:2409.12191}, year={2024} } @article{Qwen-VL, title={Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond}, author={Bai, Jinze and Bai, Shuai and Yang, Shusheng and Wang, Shijie and Tan, Sinan and Wang, Peng and Lin, Junyang and Zhou, Chang and Zhou, Jingren}, journal={arXiv preprint arXiv:2308.12966}, year={2023} } ```
Mungert/Qwen2.5-VL-7B-Instruct-GGUF
Mungert
2025-06-15T19:42:47Z
21,642
16
transformers
[ "transformers", "gguf", "multimodal", "image-text-to-text", "en", "arxiv:2309.00071", "arxiv:2409.12191", "arxiv:2308.12966", "license:apache-2.0", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
image-text-to-text
2025-03-27T22:25:21Z
---
license: apache-2.0
language:
- en
pipeline_tag: image-text-to-text
tags:
- multimodal
library_name: transformers
---

# <span style="color: #7FFF7F;">Qwen2.5-VL-7B-Instruct GGUF Models</span>

These files have been built using an imatrix file and the latest llama.cpp build. You must use a fork of llama.cpp to use vision with the model.

## How to Use Qwen 2.5 VL Instruct with llama.cpp (latest as of 10th May 2025)

1. **Download the Qwen 2.5 VL gguf file**:

https://huggingface.co/Mungert/Qwen2.5-VL-7B-Instruct-GGUF/tree/main

Choose a gguf file without mmproj in the name.

Example gguf file: https://huggingface.co/Mungert/Qwen2.5-VL-7B-Instruct-GGUF/resolve/main/Qwen2.5-VL-7B-Instruct-q8_0.gguf

Copy this file to your chosen folder.

2. **Download the Qwen 2.5 VL mmproj file**:

https://huggingface.co/Mungert/Qwen2.5-VL-7B-Instruct-GGUF/tree/main

Choose a file with mmproj in the name.

Example mmproj file: https://huggingface.co/Mungert/Qwen2.5-VL-7B-Instruct-GGUF/resolve/main/Qwen2.5-VL-7B-Instruct-mmproj-f16.gguf

Copy this file to your chosen folder.

3. Copy images to the same folder as the gguf files, or alter paths appropriately.

In the example below the gguf files, images and llama-mtmd-cli are in the same folder.

Example image: https://huggingface.co/Mungert/Qwen2.5-VL-7B-Instruct-GGUF/resolve/main/car-1.jpg

Copy this file to your chosen folder.

4. **Run the CLI Tool**:

From your chosen folder:

```bash
llama-mtmd-cli -m Qwen2.5-VL-7B-Instruct-q8_0.gguf --mmproj Qwen2.5-VL-7B-Instruct-mmproj-f16.gguf -p "Describe this image." --image ./car-1.jpg
```

## **Ultra-Low-Bit Quantization with IQ-DynamicGate (1-2 bit)**

Our latest quantization method introduces **precision-adaptive quantization** for ultra-low-bit models (1-2 bit), with benchmark-proven improvements on **Llama-3-8B**. This approach uses layer-specific strategies to preserve accuracy while maintaining extreme memory efficiency.
### **Benchmark Context** All tests conducted on **Llama-3-8B-Instruct** using: - Standard perplexity evaluation pipeline - 2048-token context window - Same prompt set across all quantizations ### **Method** - **Dynamic Precision Allocation**: - First/Last 25% of layers โ†’ IQ4_XS (selected layers) - Middle 50% โ†’ IQ2_XXS/IQ3_S (increase efficiency) - **Critical Component Protection**: - Embeddings/output layers use Q5_K - Reduces error propagation by 38% vs standard 1-2bit ### **Quantization Performance Comparison (Llama-3-8B)** | Quantization | Standard PPL | DynamicGate PPL | ฮ” PPL | Std Size | DG Size | ฮ” Size | Std Speed | DG Speed | |--------------|--------------|------------------|---------|----------|---------|--------|-----------|----------| | IQ2_XXS | 11.30 | 9.84 | -12.9% | 2.5G | 2.6G | +0.1G | 234s | 246s | | IQ2_XS | 11.72 | 11.63 | -0.8% | 2.7G | 2.8G | +0.1G | 242s | 246s | | IQ2_S | 14.31 | 9.02 | -36.9% | 2.7G | 2.9G | +0.2G | 238s | 244s | | IQ1_M | 27.46 | 15.41 | -43.9% | 2.2G | 2.5G | +0.3G | 206s | 212s | | IQ1_S | 53.07 | 32.00 | -39.7% | 2.1G | 2.4G | +0.3G | 184s | 209s | **Key**: - PPL = Perplexity (lower is better) - ฮ” PPL = Percentage change from standard to DynamicGate - Speed = Inference time (CPU avx2, 2048 token context) - Size differences reflect mixed quantization overhead **Key Improvements:** - ๐Ÿ”ฅ **IQ1_M** shows massive 43.9% perplexity reduction (27.46 โ†’ 15.41) - ๐Ÿš€ **IQ2_S** cuts perplexity by 36.9% while adding only 0.2GB - โšก **IQ1_S** maintains 39.7% better accuracy despite 1-bit quantization **Tradeoffs:** - All variants have modest size increases (0.1-0.3GB) - Inference speeds remain comparable (<5% difference) ### **When to Use These Models** ๐Ÿ“Œ **Fitting models into GPU VRAM** โœ” **Memory-constrained deployments** โœ” **Cpu and Edge Devices** where 1-2bit errors can be tolerated โœ” **Research** into ultra-low-bit quantization ## **Choosing the Right Model Format** Selecting the correct model format depends on your **hardware capabilities** and **memory constraints**. ### **BF16 (Brain Float 16) โ€“ Use if BF16 acceleration is available** - A 16-bit floating-point format designed for **faster computation** while retaining good precision. - Provides **similar dynamic range** as FP32 but with **lower memory usage**. - Recommended if your hardware supports **BF16 acceleration** (check your deviceโ€™s specs). - Ideal for **high-performance inference** with **reduced memory footprint** compared to FP32. ๐Ÿ“Œ **Use BF16 if:** โœ” Your hardware has native **BF16 support** (e.g., newer GPUs, TPUs). โœ” You want **higher precision** while saving memory. โœ” You plan to **requantize** the model into another format. ๐Ÿ“Œ **Avoid BF16 if:** โŒ Your hardware does **not** support BF16 (it may fall back to FP32 and run slower). โŒ You need compatibility with older devices that lack BF16 optimization. --- ### **F16 (Float 16) โ€“ More widely supported than BF16** - A 16-bit floating-point **high precision** but with less of range of values than BF16. - Works on most devices with **FP16 acceleration support** (including many GPUs and some CPUs). - Slightly lower numerical precision than BF16 but generally sufficient for inference. ๐Ÿ“Œ **Use F16 if:** โœ” Your hardware supports **FP16** but **not BF16**. โœ” You need a **balance between speed, memory usage, and accuracy**. โœ” You are running on a **GPU** or another device optimized for FP16 computations. 
๐Ÿ“Œ **Avoid F16 if:** โŒ Your device lacks **native FP16 support** (it may run slower than expected). โŒ You have memory limitations. --- ### **Quantized Models (Q4_K, Q6_K, Q8, etc.) โ€“ For CPU & Low-VRAM Inference** Quantization reduces model size and memory usage while maintaining as much accuracy as possible. - **Lower-bit models (Q4_K)** โ†’ **Best for minimal memory usage**, may have lower precision. - **Higher-bit models (Q6_K, Q8_0)** โ†’ **Better accuracy**, requires more memory. ๐Ÿ“Œ **Use Quantized Models if:** โœ” You are running inference on a **CPU** and need an optimized model. โœ” Your device has **low VRAM** and cannot load full-precision models. โœ” You want to reduce **memory footprint** while keeping reasonable accuracy. ๐Ÿ“Œ **Avoid Quantized Models if:** โŒ You need **maximum accuracy** (full-precision models are better for this). โŒ Your hardware has enough VRAM for higher-precision formats (BF16/F16). --- ### **Very Low-Bit Quantization (IQ3_XS, IQ3_S, IQ3_M, Q4_K, Q4_0)** These models are optimized for **extreme memory efficiency**, making them ideal for **low-power devices** or **large-scale deployments** where memory is a critical constraint. - **IQ3_XS**: Ultra-low-bit quantization (3-bit) with **extreme memory efficiency**. - **Use case**: Best for **ultra-low-memory devices** where even Q4_K is too large. - **Trade-off**: Lower accuracy compared to higher-bit quantizations. - **IQ3_S**: Small block size for **maximum memory efficiency**. - **Use case**: Best for **low-memory devices** where **IQ3_XS** is too aggressive. - **IQ3_M**: Medium block size for better accuracy than **IQ3_S**. - **Use case**: Suitable for **low-memory devices** where **IQ3_S** is too limiting. - **Q4_K**: 4-bit quantization with **block-wise optimization** for better accuracy. - **Use case**: Best for **low-memory devices** where **Q6_K** is too large. - **Q4_0**: Pure 4-bit quantization, optimized for **ARM devices**. - **Use case**: Best for **ARM-based devices** or **low-memory environments**. --- ### **Summary Table: Model Format Selection** | Model Format | Precision | Memory Usage | Device Requirements | Best Use Case | |--------------|------------|---------------|----------------------|---------------| | **BF16** | Highest | High | BF16-supported GPU/CPUs | High-speed inference with reduced memory | | **F16** | High | High | FP16-supported devices | GPU inference when BF16 isnโ€™t available | | **Q4_K** | Medium Low | Low | CPU or Low-VRAM devices | Best for memory-constrained environments | | **Q6_K** | Medium | Moderate | CPU with more memory | Better accuracy while still being quantized | | **Q8_0** | High | Moderate | CPU or GPU with enough VRAM | Best accuracy among quantized models | | **IQ3_XS** | Very Low | Very Low | Ultra-low-memory devices | Extreme memory efficiency and low accuracy | | **Q4_0** | Low | Low | ARM or low-memory devices | llama.cpp can optimize for ARM devices | --- ## **Included Files & Details** ### `Qwen2.5-VL-7B-Instruct-bf16.gguf` - Model weights preserved in **BF16**. - Use this if you want to **requantize** the model into a different format. - Best if your device supports **BF16 acceleration**. ### `Qwen2.5-VL-7B-Instruct-f16.gguf` - Model weights stored in **F16**. - Use if your device supports **FP16**, especially if BF16 is not available. ### `Qwen2.5-VL-7B-Instruct-bf16-q8_0.gguf` - **Output & embeddings** remain in **BF16**. - All other layers quantized to **Q8_0**. 
- Use if your device supports **BF16** and you want a quantized version. ### `Qwen2.5-VL-7B-Instruct-f16-q8_0.gguf` - **Output & embeddings** remain in **F16**. - All other layers quantized to **Q8_0**. ### `Qwen2.5-VL-7B-Instruct-q4_k.gguf` - **Output & embeddings** quantized to **Q8_0**. - All other layers quantized to **Q4_K**. - Good for **CPU inference** with limited memory. ### `Qwen2.5-VL-7B-Instruct-q4_k_s.gguf` - Smallest **Q4_K** variant, using less memory at the cost of accuracy. - Best for **very low-memory setups**. ### `Qwen2.5-VL-7B-Instruct-q6_k.gguf` - **Output & embeddings** quantized to **Q8_0**. - All other layers quantized to **Q6_K** . ### `Qwen2.5-VL-7B-Instruct-q8_0.gguf` - Fully **Q8** quantized model for better accuracy. - Requires **more memory** but offers higher precision. ### `Qwen2.5-VL-7B-Instruct-iq3_xs.gguf` - **IQ3_XS** quantization, optimized for **extreme memory efficiency**. - Best for **ultra-low-memory devices**. ### `Qwen2.5-VL-7B-Instruct-iq3_m.gguf` - **IQ3_M** quantization, offering a **medium block size** for better accuracy. - Suitable for **low-memory devices**. ### `Qwen2.5-VL-7B-Instruct-q4_0.gguf` - Pure **Q4_0** quantization, optimized for **ARM devices**. - Best for **low-memory environments**. - Prefer IQ4_NL for better accuracy. # <span id="testllm" style="color: #7F7FFF;">๐Ÿš€ If you find these models useful</span> Please click like โค . Also Iโ€™d really appreciate it if you could test my Network Monitor Assistant at ๐Ÿ‘‰ [Network Monitor Assitant](https://readyforquantum.com). ๐Ÿ’ฌ Click the **chat icon** (bottom right of the main and dashboard pages) . Choose a LLM; toggle between the LLM Types TurboLLM -> FreeLLM -> TestLLM. ### What I'm Testing I'm experimenting with **function calling** against my network monitoring service. Using small open source models. I am into the question "How small can it go and still function". ๐ŸŸก **TestLLM** โ€“ Runs the current testing model using llama.cpp on 6 threads of a Cpu VM (Should take about 15s to load. Inference speed is quite slow and it only processes one user prompt at a timeโ€”still working on scaling!). If you're curious, I'd be happy to share how it works! . ### The other Available AI Assistants ๐ŸŸข **TurboLLM** โ€“ Uses **gpt-4o-mini** Fast! . Note: tokens are limited since OpenAI models are pricey, but you can [Login](https://readyforquantum.com) or [Download](https://readyforquantum.com/download/?utm_source=huggingface&utm_medium=referral&utm_campaign=huggingface_repo_readme) the Quantum Network Monitor agent to get more tokens, Alternatively use the TestLLM . ๐Ÿ”ต **HugLLM** โ€“ Runs **open-source Hugging Face models** Fast, Runs small models (โ‰ˆ8B) hence lower quality, Get 2x more tokens (subject to Hugging Face API availability) ### Final word I fund the servers to create the models files, run the Quantum Network Monitor Service and Pay for Inference from Novita and OpenAI all from my own pocket. All of the code for creating the models and the work I have done with Quantum Network Monitor is [open source](https://github.com/Mungert69). Feel free to use what you find useful. Please support my work and consider [buying me a coffee](https://www.buymeacoffee.com/mahadeva) . This will help me pay for the services and increase the token limits for everyone. 
Thank you :)

# Qwen2.5-VL-7B-Instruct
<a href="https://chat.qwenlm.ai/" target="_blank" style="margin: 2px;">
    <img alt="Chat" src="https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5" style="display: inline-block; vertical-align: middle;"/>
</a>

## Introduction

In the five months since Qwen2-VL's release, numerous developers have built new models on top of the Qwen2-VL vision-language models, providing us with valuable feedback. During this period, we focused on building more useful vision-language models. Today, we are excited to introduce the latest addition to the Qwen family: Qwen2.5-VL.

#### Key Enhancements:
* **Understand things visually**: Qwen2.5-VL is not only proficient in recognizing common objects such as flowers, birds, fish, and insects, but is also highly capable of analyzing texts, charts, icons, graphics, and layouts within images.
* **Being agentic**: Qwen2.5-VL directly plays as a visual agent that can reason and dynamically direct tools, making it capable of computer use and phone use.
* **Understanding long videos and capturing events**: Qwen2.5-VL can comprehend videos of over 1 hour, and this time it gains the new ability of capturing events by pinpointing the relevant video segments.
* **Capable of visual localization in different formats**: Qwen2.5-VL can accurately localize objects in an image by generating bounding boxes or points, and it can provide stable JSON outputs for coordinates and attributes (a short prompt sketch is included just before the Evaluation section below).
* **Generating structured outputs**: for data like scans of invoices, forms, tables, etc., Qwen2.5-VL supports structured outputs of their contents, benefiting usage in finance, commerce, etc.

#### Model Architecture Updates:
* **Dynamic Resolution and Frame Rate Training for Video Understanding**: We extend dynamic resolution to the temporal dimension by adopting dynamic FPS sampling, enabling the model to comprehend videos at various sampling rates. Accordingly, we update mRoPE in the time dimension with IDs and absolute time alignment, enabling the model to learn temporal sequence and speed, and ultimately acquire the ability to pinpoint specific moments.

<p align="center">
    <img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5-VL/qwen2.5vl_arc.jpeg" width="80%"/>
</p>

* **Streamlined and Efficient Vision Encoder**: We enhance both training and inference speeds by strategically implementing window attention into the ViT. The ViT architecture is further optimized with SwiGLU and RMSNorm, aligning it with the structure of the Qwen2.5 LLM.

We have three models with 3, 7 and 72 billion parameters. This repo contains the instruction-tuned 7B Qwen2.5-VL model. For more information, visit our [Blog](https://qwenlm.github.io/blog/qwen2.5-vl/) and [GitHub](https://github.com/QwenLM/Qwen2.5-VL).
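To make the "visual localization" and "structured outputs" capabilities above more concrete, here is a minimal, hedged prompt sketch. It reuses the `messages` format from the Quickstart further down; the image path is a placeholder, and the requested JSON schema is illustrative rather than something this card guarantees the model will follow exactly.

```python
# Hedged sketch: asking Qwen2.5-VL for grounded, structured output.
# The image path is a placeholder and the JSON schema below is illustrative.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "file:///path/to/your/image.jpg"},
            {
                "type": "text",
                "text": (
                    "Detect every car in the image. "
                    "Return a JSON list of objects, each with a 'label' field "
                    "and a 'bbox_2d' field given as [x1, y1, x2, y2]."
                ),
            },
        ],
    }
]
# Run `messages` through the same apply_chat_template / processor / generate
# pipeline shown in the Quickstart section below, then parse the JSON reply.
```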
## Evaluation ### Image benchmark | Benchmark | InternVL2.5-8B | MiniCPM-o 2.6 | GPT-4o-mini | Qwen2-VL-7B |**Qwen2.5-VL-7B** | | :--- | :---: | :---: | :---: | :---: | :---: | | MMMU<sub>val</sub> | 56 | 50.4 | **60**| 54.1 | 58.6| | MMMU-Pro<sub>val</sub> | 34.3 | - | 37.6| 30.5 | 41.0| | DocVQA<sub>test</sub> | 93 | 93 | - | 94.5 | **95.7** | | InfoVQA<sub>test</sub> | 77.6 | - | - |76.5 | **82.6** | | ChartQA<sub>test</sub> | 84.8 | - |- | 83.0 |**87.3** | | TextVQA<sub>val</sub> | 79.1 | 80.1 | -| 84.3 | **84.9**| | OCRBench | 822 | 852 | 785 | 845 | **864** | | CC_OCR | 57.7 | | | 61.6 | **77.8**| | MMStar | 62.8| | |60.7| **63.9**| | MMBench-V1.1-En<sub>test</sub> | 79.4 | 78.0 | 76.0| 80.7 | **82.6** | | MMT-Bench<sub>test</sub> | - | - | - |**63.7** |63.6 | | MMStar | **61.5** | 57.5 | 54.8 | 60.7 |63.9 | | MMVet<sub>GPT-4-Turbo</sub> | 54.2 | 60.0 | 66.9 | 62.0 | **67.1**| | HallBench<sub>avg</sub> | 45.2 | 48.1 | 46.1| 50.6 | **52.9**| | MathVista<sub>testmini</sub> | 58.3 | 60.6 | 52.4 | 58.2 | **68.2**| | MathVision | - | - | - | 16.3 | **25.07** | ### Video Benchmarks | Benchmark | Qwen2-VL-7B | **Qwen2.5-VL-7B** | | :--- | :---: | :---: | | MVBench | 67.0 | **69.6** | | PerceptionTest<sub>test</sub> | 66.9 | **70.5** | | Video-MME<sub>wo/w subs</sub> | 63.3/69.0 | **65.1**/**71.6** | | LVBench | | 45.3 | | LongVideoBench | | 54.7 | | MMBench-Video | 1.44 | 1.79 | | TempCompass | | 71.7 | | MLVU | | 70.2 | | CharadesSTA/mIoU | 43.6| ### Agent benchmark | Benchmarks | Qwen2.5-VL-7B | |-------------------------|---------------| | ScreenSpot | 84.7 | | ScreenSpot Pro | 29.0 | | AITZ_EM | 81.9 | | Android Control High_EM | 60.1 | | Android Control Low_EM | 93.7 | | AndroidWorld_SR | 25.5 | | MobileMiniWob++_SR | 91.4 | ## Requirements The code of Qwen2.5-VL has been in the latest Hugging face transformers and we advise you to build from source with command: ``` pip install git+https://github.com/huggingface/transformers accelerate ``` or you might encounter the following error: ``` KeyError: 'qwen2_5_vl' ``` ## Quickstart Below, we provide simple examples to show how to use Qwen2.5-VL with ๐Ÿค– ModelScope and ๐Ÿค— Transformers. The code of Qwen2.5-VL has been in the latest Hugging face transformers and we advise you to build from source with command: ``` pip install git+https://github.com/huggingface/transformers accelerate ``` or you might encounter the following error: ``` KeyError: 'qwen2_5_vl' ``` We offer a toolkit to help you handle various types of visual input more conveniently, as if you were using an API. This includes base64, URLs, and interleaved images and videos. You can install it using the following command: ```bash # It's highly recommanded to use `[decord]` feature for faster video loading. pip install qwen-vl-utils[decord]==0.0.8 ``` If you are not using Linux, you might not be able to install `decord` from PyPI. In that case, you can use `pip install qwen-vl-utils` which will fall back to using torchvision for video processing. However, you can still [install decord from source](https://github.com/dmlc/decord?tab=readme-ov-file#install-from-source) to get decord used when loading video. 
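Putting the installation notes above together, here is a minimal environment-setup sketch. It assumes a Linux machine with a CUDA-capable GPU; on other platforms `decord` may not install from PyPI, in which case the torchvision fallback shown in the comments applies.

```bash
# Hedged sketch: one-shot environment setup for the Quickstart below.
pip install git+https://github.com/huggingface/transformers accelerate
pip install "qwen-vl-utils[decord]==0.0.8"   # decord enables faster video loading

# If decord cannot be installed (common outside Linux), fall back to torchvision:
# pip install qwen-vl-utils
# export FORCE_QWENVL_VIDEO_READER=torchvision
```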
### Using ๐Ÿค— Transformers to Chat Here we show a code snippet to show you how to use the chat model with `transformers` and `qwen_vl_utils`: ```python from transformers import Qwen2_5_VLForConditionalGeneration, AutoTokenizer, AutoProcessor from qwen_vl_utils import process_vision_info # default: Load the model on the available device(s) model = Qwen2_5_VLForConditionalGeneration.from_pretrained( "Qwen/Qwen2.5-VL-7B-Instruct", torch_dtype="auto", device_map="auto" ) # We recommend enabling flash_attention_2 for better acceleration and memory saving, especially in multi-image and video scenarios. # model = Qwen2_5_VLForConditionalGeneration.from_pretrained( # "Qwen/Qwen2.5-VL-7B-Instruct", # torch_dtype=torch.bfloat16, # attn_implementation="flash_attention_2", # device_map="auto", # ) # default processer processor = AutoProcessor.from_pretrained("Qwen/Qwen2.5-VL-7B-Instruct") # The default range for the number of visual tokens per image in the model is 4-16384. # You can set min_pixels and max_pixels according to your needs, such as a token range of 256-1280, to balance performance and cost. # min_pixels = 256*28*28 # max_pixels = 1280*28*28 # processor = AutoProcessor.from_pretrained("Qwen/Qwen2.5-VL-7B-Instruct", min_pixels=min_pixels, max_pixels=max_pixels) messages = [ { "role": "user", "content": [ { "type": "image", "image": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg", }, {"type": "text", "text": "Describe this image."}, ], } ] # Preparation for inference text = processor.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) image_inputs, video_inputs = process_vision_info(messages) inputs = processor( text=[text], images=image_inputs, videos=video_inputs, padding=True, return_tensors="pt", ) inputs = inputs.to("cuda") # Inference: Generation of the output generated_ids = model.generate(**inputs, max_new_tokens=128) generated_ids_trimmed = [ out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids) ] output_text = processor.batch_decode( generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False ) print(output_text) ``` <details> <summary>Multi image inference</summary> ```python # Messages containing multiple images and a text query messages = [ { "role": "user", "content": [ {"type": "image", "image": "file:///path/to/image1.jpg"}, {"type": "image", "image": "file:///path/to/image2.jpg"}, {"type": "text", "text": "Identify the similarities between these images."}, ], } ] # Preparation for inference text = processor.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) image_inputs, video_inputs = process_vision_info(messages) inputs = processor( text=[text], images=image_inputs, videos=video_inputs, padding=True, return_tensors="pt", ) inputs = inputs.to("cuda") # Inference generated_ids = model.generate(**inputs, max_new_tokens=128) generated_ids_trimmed = [ out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids) ] output_text = processor.batch_decode( generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False ) print(output_text) ``` </details> <details> <summary>Video inference</summary> ```python # Messages containing a images list as a video and a text query messages = [ { "role": "user", "content": [ { "type": "video", "video": [ "file:///path/to/frame1.jpg", "file:///path/to/frame2.jpg", "file:///path/to/frame3.jpg", "file:///path/to/frame4.jpg", ], }, {"type": "text", "text": "Describe this 
video."}, ], } ] # Messages containing a local video path and a text query messages = [ { "role": "user", "content": [ { "type": "video", "video": "file:///path/to/video1.mp4", "max_pixels": 360 * 420, "fps": 1.0, }, {"type": "text", "text": "Describe this video."}, ], } ] # Messages containing a video url and a text query messages = [ { "role": "user", "content": [ { "type": "video", "video": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2-VL/space_woaudio.mp4", }, {"type": "text", "text": "Describe this video."}, ], } ] #In Qwen 2.5 VL, frame rate information is also input into the model to align with absolute time. # Preparation for inference text = processor.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) image_inputs, video_inputs, video_kwargs = process_vision_info(messages, return_video_kwargs=True) inputs = processor( text=[text], images=image_inputs, videos=video_inputs, fps=fps, padding=True, return_tensors="pt", **video_kwargs, ) inputs = inputs.to("cuda") # Inference generated_ids = model.generate(**inputs, max_new_tokens=128) generated_ids_trimmed = [ out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids) ] output_text = processor.batch_decode( generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False ) print(output_text) ``` Video URL compatibility largely depends on the third-party library version. The details are in the table below. change the backend by `FORCE_QWENVL_VIDEO_READER=torchvision` or `FORCE_QWENVL_VIDEO_READER=decord` if you prefer not to use the default one. | Backend | HTTP | HTTPS | |-------------|------|-------| | torchvision >= 0.19.0 | โœ… | โœ… | | torchvision < 0.19.0 | โŒ | โŒ | | decord | โœ… | โŒ | </details> <details> <summary>Batch inference</summary> ```python # Sample messages for batch inference messages1 = [ { "role": "user", "content": [ {"type": "image", "image": "file:///path/to/image1.jpg"}, {"type": "image", "image": "file:///path/to/image2.jpg"}, {"type": "text", "text": "What are the common elements in these pictures?"}, ], } ] messages2 = [ {"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": "Who are you?"}, ] # Combine messages for batch processing messages = [messages1, messages2] # Preparation for batch inference texts = [ processor.apply_chat_template(msg, tokenize=False, add_generation_prompt=True) for msg in messages ] image_inputs, video_inputs = process_vision_info(messages) inputs = processor( text=texts, images=image_inputs, videos=video_inputs, padding=True, return_tensors="pt", ) inputs = inputs.to("cuda") # Batch Inference generated_ids = model.generate(**inputs, max_new_tokens=128) generated_ids_trimmed = [ out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids) ] output_texts = processor.batch_decode( generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False ) print(output_texts) ``` </details> ### ๐Ÿค– ModelScope We strongly advise users especially those in mainland China to use ModelScope. `snapshot_download` can help you solve issues concerning downloading checkpoints. ### More Usage Tips For input images, we support local files, base64, and URLs. For videos, we currently only support local files. ```python # You can directly insert a local file path, a URL, or a base64-encoded image into the position where you want in the text. 
## Local file path messages = [ { "role": "user", "content": [ {"type": "image", "image": "file:///path/to/your/image.jpg"}, {"type": "text", "text": "Describe this image."}, ], } ] ## Image URL messages = [ { "role": "user", "content": [ {"type": "image", "image": "http://path/to/your/image.jpg"}, {"type": "text", "text": "Describe this image."}, ], } ] ## Base64 encoded image messages = [ { "role": "user", "content": [ {"type": "image", "image": "data:image;base64,/9j/..."}, {"type": "text", "text": "Describe this image."}, ], } ] ``` #### Image Resolution for performance boost The model supports a wide range of resolution inputs. By default, it uses the native resolution for input, but higher resolutions can enhance performance at the cost of more computation. Users can set the minimum and maximum number of pixels to achieve an optimal configuration for their needs, such as a token count range of 256-1280, to balance speed and memory usage. ```python min_pixels = 256 * 28 * 28 max_pixels = 1280 * 28 * 28 processor = AutoProcessor.from_pretrained( "Qwen/Qwen2.5-VL-7B-Instruct", min_pixels=min_pixels, max_pixels=max_pixels ) ``` Besides, We provide two methods for fine-grained control over the image size input to the model: 1. Define min_pixels and max_pixels: Images will be resized to maintain their aspect ratio within the range of min_pixels and max_pixels. 2. Specify exact dimensions: Directly set `resized_height` and `resized_width`. These values will be rounded to the nearest multiple of 28. ```python # min_pixels and max_pixels messages = [ { "role": "user", "content": [ { "type": "image", "image": "file:///path/to/your/image.jpg", "resized_height": 280, "resized_width": 420, }, {"type": "text", "text": "Describe this image."}, ], } ] # resized_height and resized_width messages = [ { "role": "user", "content": [ { "type": "image", "image": "file:///path/to/your/image.jpg", "min_pixels": 50176, "max_pixels": 50176, }, {"type": "text", "text": "Describe this image."}, ], } ] ``` ### Processing Long Texts The current `config.json` is set for context length up to 32,768 tokens. To handle extensive inputs exceeding 32,768 tokens, we utilize [YaRN](https://arxiv.org/abs/2309.00071), a technique for enhancing model length extrapolation, ensuring optimal performance on lengthy texts. For supported frameworks, you could add the following to `config.json` to enable YaRN: { ..., "type": "yarn", "mrope_section": [ 16, 24, 24 ], "factor": 4, "original_max_position_embeddings": 32768 } However, it should be noted that this method has a significant impact on the performance of temporal and spatial localization tasks, and is therefore not recommended for use. At the same time, for long video inputs, since MRoPE itself is more economical with ids, the max_position_embeddings can be directly modified to a larger value, such as 64k. ## Citation If you find our work helpful, feel free to give us a cite. 
``` @misc{qwen2.5-VL, title = {Qwen2.5-VL}, url = {https://qwenlm.github.io/blog/qwen2.5-vl/}, author = {Qwen Team}, month = {January}, year = {2025} } @article{Qwen2VL, title={Qwen2-VL: Enhancing Vision-Language Model's Perception of the World at Any Resolution}, author={Wang, Peng and Bai, Shuai and Tan, Sinan and Wang, Shijie and Fan, Zhihao and Bai, Jinze and Chen, Keqin and Liu, Xuejing and Wang, Jialin and Ge, Wenbin and Fan, Yang and Dang, Kai and Du, Mengfei and Ren, Xuancheng and Men, Rui and Liu, Dayiheng and Zhou, Chang and Zhou, Jingren and Lin, Junyang}, journal={arXiv preprint arXiv:2409.12191}, year={2024} } @article{Qwen-VL, title={Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond}, author={Bai, Jinze and Bai, Shuai and Yang, Shusheng and Wang, Shijie and Tan, Sinan and Wang, Peng and Lin, Junyang and Zhou, Chang and Zhou, Jingren}, journal={arXiv preprint arXiv:2308.12966}, year={2023} } ```
Mungert/X-Ray_Alpha-GGUF
Mungert
2025-06-15T19:42:38Z
1,381
5
null
[ "gguf", "en", "dataset:SicariusSicariiStuff/UBW_Tapestries", "base_model:google/gemma-3-4b-it", "base_model:quantized:google/gemma-3-4b-it", "license:gemma", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2025-03-25T09:55:48Z
--- license: gemma language: - en base_model: - google/gemma-3-4b-it datasets: - SicariusSicariiStuff/UBW_Tapestries --- # <span style="color: #7FFF7F;">X-Ray_Alpha GGUF Models</span> ## How to Use X-Ray_Alpha with llama.cpp 1. **Download the X-Ray_Alpha gguf file**: https://huggingface.co/Mungert/X-Ray_Alpha-GGUF/tree/main Choose a gguf file without the mmproj in the name Example gguf file : https://huggingface.co/Mungert/Mungert/X-Ray_Alpha-GGUF/resolve/main/X-Ray_Alpha-q8_0.gguf Copy this file to your chosen folder. 2. **Download the X-Ray_Alpha mmproj file** https://huggingface.co/Mungert/X-Ray_Alpha-GGUF/tree/main Choose a file with mmproj in the name Example mmproj file : https://huggingface.co/Mungert/X-Ray_Alpha-GGUF/resolve/main/X-Ray_Alpha-mmproj-f32.gguf Copy this file to your chosen folder. 3. Copy images to the same folder as the gguf files or alter paths appropriately. In the example below the gguf files, images and llama-mtmd-cli are in the same folder. Example image: image https://huggingface.co/Mungert/X-Ray_Alpha-GGUF/resolve/main/car-1.jpg Copy this file to your chosen folder. 4. **Run the CLI Tool**: From your chosen folder : ```bash llama-gemma3-cli -m X-Ray_Alpha-q8_0.gguf --mmproj X-Ray_Alpha-mmproj-f32.gguf -p "Describe this image." --image ./car-1.jpg ``` ## <span style="color: #7FFF7F;">Ultra-Low-Bit Quantization with IQ-DynamicGate (1-2 bit)</span> Our latest quantization method introduces **precision-adaptive quantization** for ultra-low-bit models (1-2 bit), with benchmark-proven improvements on **Llama-3-8B**. This approach uses layer-specific strategies to preserve accuracy while maintaining extreme memory efficiency. ### **Benchmark Context** All tests conducted on **Llama-3-8B-Instruct** using: - Standard perplexity evaluation pipeline - 2048-token context window - Same prompt set across all quantizations ### **Method** - **Dynamic Precision Allocation**: - First/Last 25% of layers โ†’ IQ4_XS (selected layers) - Middle 50% โ†’ IQ2_XXS/IQ3_S (increase efficiency) - **Critical Component Protection**: - Embeddings/output layers use Q5_K - Reduces error propagation by 38% vs standard 1-2bit ### **Quantization Performance Comparison (Llama-3-8B)** | Quantization | Standard PPL | DynamicGate PPL | ฮ” PPL | Std Size | DG Size | ฮ” Size | Std Speed | DG Speed | |--------------|--------------|------------------|---------|----------|---------|--------|-----------|----------| | IQ2_XXS | 11.30 | 9.84 | -12.9% | 2.5G | 2.6G | +0.1G | 234s | 246s | | IQ2_XS | 11.72 | 11.63 | -0.8% | 2.7G | 2.8G | +0.1G | 242s | 246s | | IQ2_S | 14.31 | 9.02 | -36.9% | 2.7G | 2.9G | +0.2G | 238s | 244s | | IQ1_M | 27.46 | 15.41 | -43.9% | 2.2G | 2.5G | +0.3G | 206s | 212s | | IQ1_S | 53.07 | 32.00 | -39.7% | 2.1G | 2.4G | +0.3G | 184s | 209s | **Key**: - PPL = Perplexity (lower is better) - ฮ” PPL = Percentage change from standard to DynamicGate - Speed = Inference time (CPU avx2, 2048 token context) - Size differences reflect mixed quantization overhead **Key Improvements:** - ๐Ÿ”ฅ **IQ1_M** shows massive 43.9% perplexity reduction (27.46 โ†’ 15.41) - ๐Ÿš€ **IQ2_S** cuts perplexity by 36.9% while adding only 0.2GB - โšก **IQ1_S** maintains 39.7% better accuracy despite 1-bit quantization **Tradeoffs:** - All variants have modest size increases (0.1-0.3GB) - Inference speeds remain comparable (<5% difference) ### **When to Use These Models** ๐Ÿ“Œ **Fitting models into GPU VRAM** โœ” **Memory-constrained deployments** โœ” **Cpu and Edge Devices** where 1-2bit errors can be tolerated โœ” 
**Research** into ultra-low-bit quantization ## **Choosing the Right Model Format** Selecting the correct model format depends on your **hardware capabilities** and **memory constraints**. ### **BF16 (Brain Float 16) โ€“ Use if BF16 acceleration is available** - A 16-bit floating-point format designed for **faster computation** while retaining good precision. - Provides **similar dynamic range** as FP32 but with **lower memory usage**. - Recommended if your hardware supports **BF16 acceleration** (check your device's specs). - Ideal for **high-performance inference** with **reduced memory footprint** compared to FP32. ๐Ÿ“Œ **Use BF16 if:** โœ” Your hardware has native **BF16 support** (e.g., newer GPUs, TPUs). โœ” You want **higher precision** while saving memory. โœ” You plan to **requantize** the model into another format. ๐Ÿ“Œ **Avoid BF16 if:** โŒ Your hardware does **not** support BF16 (it may fall back to FP32 and run slower). โŒ You need compatibility with older devices that lack BF16 optimization. --- ### **F16 (Float 16) โ€“ More widely supported than BF16** - A 16-bit floating-point **high precision** but with less of range of values than BF16. - Works on most devices with **FP16 acceleration support** (including many GPUs and some CPUs). - Slightly lower numerical precision than BF16 but generally sufficient for inference. ๐Ÿ“Œ **Use F16 if:** โœ” Your hardware supports **FP16** but **not BF16**. โœ” You need a **balance between speed, memory usage, and accuracy**. โœ” You are running on a **GPU** or another device optimized for FP16 computations. ๐Ÿ“Œ **Avoid F16 if:** โŒ Your device lacks **native FP16 support** (it may run slower than expected). โŒ You have memory limitations. --- ### **Quantized Models (Q4_K, Q6_K, Q8, etc.) โ€“ For CPU & Low-VRAM Inference** Quantization reduces model size and memory usage while maintaining as much accuracy as possible. - **Lower-bit models (Q4_K)** โ†’ **Best for minimal memory usage**, may have lower precision. - **Higher-bit models (Q6_K, Q8_0)** โ†’ **Better accuracy**, requires more memory. ๐Ÿ“Œ **Use Quantized Models if:** โœ” You are running inference on a **CPU** and need an optimized model. โœ” Your device has **low VRAM** and cannot load full-precision models. โœ” You want to reduce **memory footprint** while keeping reasonable accuracy. ๐Ÿ“Œ **Avoid Quantized Models if:** โŒ You need **maximum accuracy** (full-precision models are better for this). โŒ Your hardware has enough VRAM for higher-precision formats (BF16/F16). --- ### **Very Low-Bit Quantization (IQ3_XS, IQ3_S, IQ3_M, Q4_K, Q4_0)** These models are optimized for **extreme memory efficiency**, making them ideal for **low-power devices** or **large-scale deployments** where memory is a critical constraint. - **IQ3_XS**: Ultra-low-bit quantization (3-bit) with **extreme memory efficiency**. - **Use case**: Best for **ultra-low-memory devices** where even Q4_K is too large. - **Trade-off**: Lower accuracy compared to higher-bit quantizations. - **IQ3_S**: Small block size for **maximum memory efficiency**. - **Use case**: Best for **low-memory devices** where **IQ3_XS** is too aggressive. - **IQ3_M**: Medium block size for better accuracy than **IQ3_S**. - **Use case**: Suitable for **low-memory devices** where **IQ3_S** is too limiting. - **Q4_K**: 4-bit quantization with **block-wise optimization** for better accuracy. - **Use case**: Best for **low-memory devices** where **Q6_K** is too large. - **Q4_0**: Pure 4-bit quantization, optimized for **ARM devices**. 
- **Use case**: Best for **ARM-based devices** or **low-memory environments**. --- ### **Summary Table: Model Format Selection** | Model Format | Precision | Memory Usage | Device Requirements | Best Use Case | |--------------|------------|---------------|----------------------|---------------| | **BF16** | Highest | High | BF16-supported GPU/CPUs | High-speed inference with reduced memory | | **F16** | High | High | FP16-supported devices | GPU inference when BF16 isn't available | | **Q4_K** | Medium Low | Low | CPU or Low-VRAM devices | Best for memory-constrained environments | | **Q6_K** | Medium | Moderate | CPU with more memory | Better accuracy while still being quantized | | **Q8_0** | High | Moderate | CPU or GPU with enough VRAM | Best accuracy among quantized models | | **IQ3_XS** | Very Low | Very Low | Ultra-low-memory devices | Extreme memory efficiency and low accuracy | | **Q4_0** | Low | Low | ARM or low-memory devices | llama.cpp can optimize for ARM devices | --- ## **Included Files & Details** ### `X-Ray_Alpha-bf16.gguf` - Model weights preserved in **BF16**. - Use this if you want to **requantize** the model into a different format. - Best if your device supports **BF16 acceleration**. ### `X-Ray_Alpha-f16.gguf` - Model weights stored in **F16**. - Use if your device supports **FP16**, especially if BF16 is not available. ### `X-Ray_Alpha-bf16-q8_0.gguf` - **Output & embeddings** remain in **BF16**. - All other layers quantized to **Q8_0**. - Use if your device supports **BF16** and you want a quantized version. ### `X-Ray_Alpha-f16-q8_0.gguf` - **Output & embeddings** remain in **F16**. - All other layers quantized to **Q8_0**. ### `X-Ray_Alpha-q4_k.gguf` - **Output & embeddings** quantized to **Q8_0**. - All other layers quantized to **Q4_K**. - Good for **CPU inference** with limited memory. ### `X-Ray_Alpha-q4_k_s.gguf` - Smallest **Q4_K** variant, using less memory at the cost of accuracy. - Best for **very low-memory setups**. ### `X-Ray_Alpha-q6_k.gguf` - **Output & embeddings** quantized to **Q8_0**. - All other layers quantized to **Q6_K** . ### `X-Ray_Alpha-q8_0.gguf` - Fully **Q8** quantized model for better accuracy. - Requires **more memory** but offers higher precision. ### `X-Ray_Alpha-iq3_xs.gguf` - **IQ3_XS** quantization, optimized for **extreme memory efficiency**. - Best for **ultra-low-memory devices**. ### `X-Ray_Alpha-iq3_m.gguf` - **IQ3_M** quantization, offering a **medium block size** for better accuracy. - Suitable for **low-memory devices**. ### `X-Ray_Alpha-q4_0.gguf` - Pure **Q4_0** quantization, optimized for **ARM devices**. - Best for **low-memory environments**. - Prefer IQ4_NL for better accuracy. # <span id="testllm" style="color: #7F7FFF;">๐Ÿš€ If you find these models useful</span> โค **Please click "Like" if you find this useful!** Help me test my **AI-Powered Network Monitor Assistant** with **quantum-ready security checks**: ๐Ÿ‘‰ [Quantum Network Monitor](https://readyforquantum.com) ๐Ÿ’ฌ **How to test**: 1. Click the **chat icon** (bottom right on any page) 2. 
Choose an **AI assistant type**: - `TurboLLM` (GPT-4-mini) - `FreeLLM` (Open-source) - `TestLLM` (Experimental CPU-only) ### **What Iโ€™m Testing** Iโ€™m pushing the limits of **small open-source models for AI network monitoring**, specifically: - **Function calling** against live network services - **How small can a model go** while still handling: - Automated **Nmap scans** - **Quantum-readiness checks** - **Metasploit integration** ๐ŸŸก **TestLLM** โ€“ Current experimental model (llama.cpp on 6 CPU threads): - โœ… **Zero-configuration setup** - โณ 30s load time (slow inference but **no API costs**) - ๐Ÿ”ง **Help wanted!** If youโ€™re into **edge-device AI**, letโ€™s collaborate! ### **Other Assistants** ๐ŸŸข **TurboLLM** โ€“ Uses **gpt-4-mini** for: - **Real-time network diagnostics** - **Automated penetration testing** (Nmap/Metasploit) - ๐Ÿ”‘ Get more tokens by [downloading our Quantum Network Monitor Agent](https://readyforquantum.com/download/?utm_source=huggingface&utm_medium=referral&utm_campaign=huggingface_repo_readme) ๐Ÿ”ต **HugLLM** โ€“ Open-source models (โ‰ˆ8B params): - **2x more tokens** than TurboLLM - **AI-powered log analysis** - ๐ŸŒ Runs on Hugging Face Inference API ### ๐Ÿ’ก **Example AI Commands to Test**: 1. `"Give me info on my websites SSL certificate"` 2. `"Check if my server is using quantum safe encyption for communication"` 3. `"Run a quick Nmap vulnerability test"` 4. '"Create a cmd processor to .. (what ever you want)" Note you need to install a Quantum Network Monitor Agent to run the .net code from. This is a very flexible and powerful feature. Use with caution! ### Final word I fund the servers to create the models files, run the Quantum Network Monitor Service and Pay for Inference from Novita and OpenAI all from my own pocket. All of the code for creating the models and the work I have done with Quantum Network Monitor is [open source](https://github.com/Mungert69). Feel free to use what you find useful. Please support my work and consider [buying me a coffee](https://www.buymeacoffee.com/mahadeva) . This will help me pay for the services and increase the token limits for everyone. Thank you :) <div align="center"> <b style="font-size: 40px;">X-Ray_Alpha</b> </div> <img src="https://huggingface.co/SicariusSicariiStuff/X-Ray_Alpha/resolve/main/Images/X-Ray_Alpha.png" alt="X-Ray_Alpha" style="width: 30%; min-width: 450px; display: block; margin: auto;"> --- <div style="display: flex; justify-content: center; align-items: center;"> <a href="https://huggingface.co/SicariusSicariiStuff/X-Ray_Alpha#tldr" style="color: #800080; font-weight: bold; font-size: 28px; text-decoration: none; margin: 0 20px;"> Click here for TL;DR </a> <a href="https://huggingface.co/SicariusSicariiStuff/X-Ray_Alpha#why-is-this-important" style="color: #1E90FF; font-weight: bold; font-size: 28px; text-decoration: none; margin: 0 20px;"> Why it's important </a> <a href="https://huggingface.co/SicariusSicariiStuff/X-Ray_Alpha#how-can-you-help" style="color: #32CD32; font-weight: bold; font-size: 28px; text-decoration: none; margin: 0 20px;"> How can YOU help? </a> <a href="https://huggingface.co/SicariusSicariiStuff/X-Ray_Alpha#how-to-run-it" style="color: #E31515; font-weight: bold; font-size: 28px; text-decoration: none; margin: 0 20px;"> How to RUN IT </a> </div> --- This is a pre-alpha proof-of-concept of **a real fully uncensored vision model**. Why do I say **"real"**? 
The few vision models we got (Qwen, Llama 3.2) were "censored," and their fine-tunes touched only the **text portion** of the model, as training a vision model is a serious pain. The only actually trained and uncensored vision model I am aware of is [ToriiGate](https://huggingface.co/Minthy/ToriiGate-v0.4-7B); the rest of the vision models are just the stock vision encoder plus a fine-tuned LLM.

# Does this even work?

<h2 style="color: green; font-weight: bold; font-size: 80px; text-align: center;">YES!</h2>

---

# Why is this Important?

Having a **fully compliant** vision model is a critical step toward democratizing vision capabilities for various tasks, especially **image tagging**. This is a critical step both for making LoRAs for image diffusion models and for mass-tagging images to pretrain a diffusion model. In other words, a fully compliant and accurate vision model will allow the open-source community to easily train LoRAs and even pretrain image diffusion models.

Another important task is content moderation and classification. Many use cases are not black and white: some content that corporations might consider NSFW is allowed, while other content is not; there is nuance. Today's vision models **do not let the users decide**, as they will flatly **refuse** to run inference on any content that Google or some other corporation has decided is not to their liking, and therefore these stock models are useless in a lot of cases.

What if someone wants to classify art that includes nudity? Having a naked statue over 1,000 years old displayed in the middle of a city, in a museum, or at the city square is perfectly acceptable; however, a stock vision model will flatly refuse to run inference on something like that. It is the same pattern as the many "sensitive" topics that LLMs **refuse to answer**, even though the content is **publicly available on Wikipedia**.

This is an attitude of **cynical paternalism**. I say cynical because corporations **take private data to train their models** and call it "perfectly fine", yet they serve as the **arbiters of morality** and indirectly preach to us from a position of assumed moral superiority. This **gatekeeping hurts innovation badly**, with vision models **especially so**, as the task of **tagging cannot be done by a single person at scale**, but a corporation can.

# How can YOU help?

This is a sort of **"Pre-Alpha"** proof of concept. I took **A LOT** of shortcuts and did some "hacking" to make this work, and I would greatly appreciate some help to make it into an accurate and powerful open tool. I am not asking for money, but for well-tagged data. I will take the burden and costs of the compute on myself, but I **cannot do tagging** at a large scale by myself.

## Bottom line, I need a lot of well-tagged, diverse data

So:

- If you have well-tagged images
- If you have a link to a well-tagged image dataset
- If you can, and are willing to, do image tagging

Then please send an email with [DATASET] in the title to:

```
spamthesicarius@gmail.com
```

As you probably figured from the email address name, this is not my main email, and I expect it to be spammed with junk, so **please use the [DATASET] tag** so I can more easily find the emails of **the good people** who are actually trying to help.
## Please see this dataset repo if you want to help:

[X-Ray_Community_Tagging](https://huggingface.co/datasets/SicariusSicariiStuff/X-Ray_Community_Tagging)

Also, if you don't want to upload it to the repo (although it's encouraged, and you can protect it with a password for privacy), you can still help by linking a Google Drive or attaching the images with the corrected output via the provided email.

Let's make this happen. We can do it!

---

### TL;DR
- **Fully uncensored and trained**: there's no moderation in the vision model; I actually trained it.
- **The 2nd uncensored vision model in the world**: ToriiGate being the first as far as I know, and this one is the second.
- **In-depth descriptions**: very detailed, long descriptions.
- The text portion is **somewhat uncensored** as well; I didn't want to butcher and fry it too much, so it remains "smart".
- **NOT perfect**: this is a POC that shows the task can even be done; a lot more work is needed.
- **Good Roleplay & Writing**: I used a massive corpus of high-quality human (**~60%**) and synthetic data.

---

# How to run it:

## VRAM needed for FP16: 15.9 GB

[Run inference with this](https://github.com/SicariusSicariiStuff/X-Ray_Vision)

# This is a pre-alpha POC (Proof Of Concept)

## Instructions:

Clone:
```
git clone https://github.com/SicariusSicariiStuff/X-Ray_Vision.git
cd X-Ray_Vision/
```

Set up a venv (tested with Python 3.11, probably works with 3.10):
```
python3.11 -m venv env
source env/bin/activate
```

Install dependencies:
```
pip install git+https://github.com/huggingface/transformers@v4.49.0-Gemma-3
pip install torch
pip install pillow
pip install accelerate
```

# Running inference

Usage:
```
python xRay-Vision.py /path/to/model/ /dir/with/images/
```

The output will print to the console, and the results will be exported into a dir named after your image dir with the suffix "_TXT". So if you run:
```
python xRay-Vision.py /some_path/x-Ray_model/ /home/images/weird_cats/
```

The results will be exported to:
```
/home/images/weird_cats_TXT/
```

---

<h2 style="color: green; font-weight: bold; font-size: 65px; text-align: center;">Your support = more models</h2>

<a href="https://ko-fi.com/sicarius" style="color: pink; font-weight: bold; font-size: 48px; text-decoration: none; display: block; text-align: center;">My Ko-fi page (Click here)</a>

---

## Citation Information

```
@llm{X-Ray_Alpha,
  author = {SicariusSicariiStuff},
  title = {X-Ray_Alpha},
  year = {2025},
  publisher = {Hugging Face},
  url = {https://huggingface.co/SicariusSicariiStuff/X-Ray_Alpha}
}
```

---

## Other stuff
- [X-Ray_Vision](https://github.com/SicariusSicariiStuff/X-Ray_Vision) Easy stand-alone bulk vision inference at scale (inference a folder of images).
- [SLOP_Detector](https://github.com/SicariusSicariiStuff/SLOP_Detector) Nuke GPTisms, with SLOP detector.
- [LLAMA-3_8B_Unaligned](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned) The grand project that started it all.
- [Blog and updates (Archived)](https://huggingface.co/SicariusSicariiStuff/Blog_And_Updates) Some updates, some rambles, sort of a mix between a diary and a blog.
Mungert/functionary-small-v3.2-GGUF
Mungert
2025-06-15T19:42:16Z
295
4
null
[ "gguf", "license:mit", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2025-03-24T00:38:48Z
--- license: mit --- # <span style="color: #7FFF7F;">functionary-small-v3.2 GGUF Models</span> ## **Choosing the Right Model Format** Selecting the correct model format depends on your **hardware capabilities** and **memory constraints**. ### **BF16 (Brain Float 16) โ€“ Use if BF16 acceleration is available** - A 16-bit floating-point format designed for **faster computation** while retaining good precision. - Provides **similar dynamic range** as FP32 but with **lower memory usage**. - Recommended if your hardware supports **BF16 acceleration** (check your deviceโ€™s specs). - Ideal for **high-performance inference** with **reduced memory footprint** compared to FP32. ๐Ÿ“Œ **Use BF16 if:** โœ” Your hardware has native **BF16 support** (e.g., newer GPUs, TPUs). โœ” You want **higher precision** while saving memory. โœ” You plan to **requantize** the model into another format. ๐Ÿ“Œ **Avoid BF16 if:** โŒ Your hardware does **not** support BF16 (it may fall back to FP32 and run slower). โŒ You need compatibility with older devices that lack BF16 optimization. --- ### **F16 (Float 16) โ€“ More widely supported than BF16** - A 16-bit floating-point **high precision** but with less of range of values than BF16. - Works on most devices with **FP16 acceleration support** (including many GPUs and some CPUs). - Slightly lower numerical precision than BF16 but generally sufficient for inference. ๐Ÿ“Œ **Use F16 if:** โœ” Your hardware supports **FP16** but **not BF16**. โœ” You need a **balance between speed, memory usage, and accuracy**. โœ” You are running on a **GPU** or another device optimized for FP16 computations. ๐Ÿ“Œ **Avoid F16 if:** โŒ Your device lacks **native FP16 support** (it may run slower than expected). โŒ You have memory limitations. --- ### **Quantized Models (Q4_K, Q6_K, Q8, etc.) โ€“ For CPU & Low-VRAM Inference** Quantization reduces model size and memory usage while maintaining as much accuracy as possible. - **Lower-bit models (Q4_K)** โ†’ **Best for minimal memory usage**, may have lower precision. - **Higher-bit models (Q6_K, Q8_0)** โ†’ **Better accuracy**, requires more memory. ๐Ÿ“Œ **Use Quantized Models if:** โœ” You are running inference on a **CPU** and need an optimized model. โœ” Your device has **low VRAM** and cannot load full-precision models. โœ” You want to reduce **memory footprint** while keeping reasonable accuracy. ๐Ÿ“Œ **Avoid Quantized Models if:** โŒ You need **maximum accuracy** (full-precision models are better for this). โŒ Your hardware has enough VRAM for higher-precision formats (BF16/F16). --- ### **Very Low-Bit Quantization (IQ3_XS, IQ3_S, IQ3_M, Q4_K, Q4_0)** These models are optimized for **extreme memory efficiency**, making them ideal for **low-power devices** or **large-scale deployments** where memory is a critical constraint. - **IQ3_XS**: Ultra-low-bit quantization (3-bit) with **extreme memory efficiency**. - **Use case**: Best for **ultra-low-memory devices** where even Q4_K is too large. - **Trade-off**: Lower accuracy compared to higher-bit quantizations. - **IQ3_S**: Small block size for **maximum memory efficiency**. - **Use case**: Best for **low-memory devices** where **IQ3_XS** is too aggressive. - **IQ3_M**: Medium block size for better accuracy than **IQ3_S**. - **Use case**: Suitable for **low-memory devices** where **IQ3_S** is too limiting. - **Q4_K**: 4-bit quantization with **block-wise optimization** for better accuracy. - **Use case**: Best for **low-memory devices** where **Q6_K** is too large. 
- **Q4_0**: Pure 4-bit quantization, optimized for **ARM devices**. - **Use case**: Best for **ARM-based devices** or **low-memory environments**. --- ### **Summary Table: Model Format Selection** | Model Format | Precision | Memory Usage | Device Requirements | Best Use Case | |--------------|------------|---------------|----------------------|---------------| | **BF16** | Highest | High | BF16-supported GPU/CPUs | High-speed inference with reduced memory | | **F16** | High | High | FP16-supported devices | GPU inference when BF16 isnโ€™t available | | **Q4_K** | Medium Low | Low | CPU or Low-VRAM devices | Best for memory-constrained environments | | **Q6_K** | Medium | Moderate | CPU with more memory | Better accuracy while still being quantized | | **Q8_0** | High | Moderate | CPU or GPU with enough VRAM | Best accuracy among quantized models | | **IQ3_XS** | Very Low | Very Low | Ultra-low-memory devices | Extreme memory efficiency and low accuracy | | **Q4_0** | Low | Low | ARM or low-memory devices | llama.cpp can optimize for ARM devices | --- ## **Included Files & Details** ### `functionary-small-v3.2-bf16.gguf` - Model weights preserved in **BF16**. - Use this if you want to **requantize** the model into a different format. - Best if your device supports **BF16 acceleration**. ### `functionary-small-v3.2-f16.gguf` - Model weights stored in **F16**. - Use if your device supports **FP16**, especially if BF16 is not available. ### `functionary-small-v3.2-bf16-q8_0.gguf` - **Output & embeddings** remain in **BF16**. - All other layers quantized to **Q8_0**. - Use if your device supports **BF16** and you want a quantized version. ### `functionary-small-v3.2-f16-q8_0.gguf` - **Output & embeddings** remain in **F16**. - All other layers quantized to **Q8_0**. ### `functionary-small-v3.2-q4_k.gguf` - **Output & embeddings** quantized to **Q8_0**. - All other layers quantized to **Q4_K**. - Good for **CPU inference** with limited memory. ### `functionary-small-v3.2-q4_k_s.gguf` - Smallest **Q4_K** variant, using less memory at the cost of accuracy. - Best for **very low-memory setups**. ### `functionary-small-v3.2-q6_k.gguf` - **Output & embeddings** quantized to **Q8_0**. - All other layers quantized to **Q6_K** . ### `functionary-small-v3.2-q8_0.gguf` - Fully **Q8** quantized model for better accuracy. - Requires **more memory** but offers higher precision. ### `functionary-small-v3.2-iq3_xs.gguf` - **IQ3_XS** quantization, optimized for **extreme memory efficiency**. - Best for **ultra-low-memory devices**. ### `functionary-small-v3.2-iq3_m.gguf` - **IQ3_M** quantization, offering a **medium block size** for better accuracy. - Suitable for **low-memory devices**. ### `functionary-small-v3.2-q4_0.gguf` - Pure **Q4_0** quantization, optimized for **ARM devices**. - Best for **low-memory environments**. - Prefer IQ4_NL for better accuracy. # <span id="testllm" style="color: #7F7FFF;">๐Ÿš€ If you find these models useful</span> Please click like โค . Also Iโ€™d really appreciate it if you could test my Network Monitor Assistant at ๐Ÿ‘‰ [Network Monitor Assitant](https://readyforquantum.com). ๐Ÿ’ฌ Click the **chat icon** (bottom right of the main and dashboard pages) . Choose a LLM; toggle between the LLM Types TurboLLM -> FreeLLM -> TestLLM. ### What I'm Testing I'm experimenting with **function calling** against my network monitoring service. Using small open source models. I am into the question "How small can it go and still function". 
๐ŸŸก **TestLLM** โ€“ Runs the current testing model using llama.cpp on 6 threads of a Cpu VM (Should take about 15s to load. Inference speed is quite slow and it only processes one user prompt at a timeโ€”still working on scaling!). If you're curious, I'd be happy to share how it works! . ### The other Available AI Assistants ๐ŸŸข **TurboLLM** โ€“ Uses **gpt-4o-mini** Fast! . Note: tokens are limited since OpenAI models are pricey, but you can [Login](https://readyforquantum.com) or [Download](https://readyforquantum.com/download/?utm_source=huggingface&utm_medium=referral&utm_campaign=huggingface_repo_readme) the Quantum Network Monitor agent to get more tokens, Alternatively use the TestLLM . ๐Ÿ”ต **HugLLM** โ€“ Runs **open-source Hugging Face models** Fast, Runs small models (โ‰ˆ8B) hence lower quality, Get 2x more tokens (subject to Hugging Face API availability) ### Final Word I fund the servers used to create these model files, run the Quantum Network Monitor service, and pay for inference from Novita and OpenAIโ€”all out of my own pocket. All the code behind the model creation and the Quantum Network Monitor project is [open source](https://github.com/Mungert69). Feel free to use whatever you find helpful. If you appreciate the work, please consider [buying me a coffee](https://www.buymeacoffee.com/mahadeva) โ˜•. Your support helps cover service costs and allows me to raise token limits for everyone. I'm also open to job opportunities or sponsorship. Thank you! ๐Ÿ˜Š # Model Card for functionary-small-v3.2 **This model was based on [meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct)** [https://github.com/MeetKai/functionary](https://github.com/MeetKai/functionary) <img src="https://huggingface.co/meetkai/functionary-medium-v2.2/resolve/main/functionary_logo.jpg" alt="Functionary Logo" width="300"/> Functionary is a language model that can interpret and execute functions/plugins. The model determines when to execute functions, whether in parallel or serially, and can understand their outputs. It only triggers functions as needed. Function definitions are given as JSON Schema Objects, similar to OpenAI GPT function calls. ## Key Features - Intelligent **parallel tool use** - Able to analyze functions/tools outputs and provide relevant responses **grounded in the outputs** - Able to decide **when to not use tools/call functions** and provide normal chat response - Truly one of the best open-source alternative to GPT-4 - Support code interpreter ## How to Get Started We provide custom code for parsing raw model responses into a JSON object containing `role`, `content` and `tool_calls` fields. This enables the users to read the function-calling output of the model easily. ```python from transformers import AutoModelForCausalLM, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("meetkai/functionary-small-v3.2") model = AutoModelForCausalLM.from_pretrained("meetkai/functionary-small-v3.2", device_map="auto", trust_remote_code=True) tools = [ { "type": "function", "function": { "name": "get_current_weather", "description": "Get the current weather", "parameters": { "type": "object", "properties": { "location": { "type": "string", "description": "The city and state, e.g. 
San Francisco, CA" } }, "required": ["location"] } } } ] messages = [{"role": "user", "content": "What is the weather in Istanbul and Singapore respectively?"}] final_prompt = tokenizer.apply_chat_template(messages, tools, add_generation_prompt=True, tokenize=False) inputs = tokenizer(final_prompt, return_tensors="pt").to("cuda") pred = model.generate_tool_use(**inputs, max_new_tokens=128, tokenizer=tokenizer) print(tokenizer.decode(pred.cpu()[0])) ``` ## Prompt Template We convert function definitions to a similar text to TypeScript definitions. Then we inject these definitions as system prompts. After that, we inject the default system prompt. Then we start the conversation messages. This formatting is also available via our vLLM server which we process the functions into Typescript definitions encapsulated in a system message using a pre-defined Transformers Jinja chat template. This means that the lists of messages can be formatted for you with the apply_chat_template() method within our server: ```python from openai import OpenAI client = OpenAI(base_url="http://localhost:8000/v1", api_key="functionary") client.chat.completions.create( model="path/to/functionary/model/", messages=[{"role": "user", "content": "What is the weather for Istanbul?"} ], tools=[{ "type": "function", "function": { "name": "get_current_weather", "description": "Get the current weather", "parameters": { "type": "object", "properties": { "location": { "type": "string", "description": "The city and state, e.g. San Francisco, CA" } }, "required": ["location"] } } }], tool_choice="auto" ) ``` will yield: ``` <|start_header_id|>system<|end_header_id|> You are capable of executing available function(s) if required. Only execute function(s) when absolutely necessary. Ask for the required input to:recipient==all Use JSON for function arguments. Respond in this format: >>>${recipient} ${content} Available functions: // Supported function definitions that should be called when necessary. namespace functions { // Get the current weather type get_current_weather = (_: { // The city and state, e.g. San Francisco, CA location: string, }) => any; } // namespace functions<|eot_id|><|start_header_id|>user<|end_header_id|> What is the weather for Istanbul? ``` A more detailed example is provided [here](https://github.com/MeetKai/functionary/blob/main/tests/prompt_test_v3.llama3.txt). ## Run the model We encourage users to run our models using our OpenAI-compatible vLLM server [here](https://github.com/MeetKai/functionary). # The MeetKai Team ![MeetKai Logo](https://huggingface.co/meetkai/functionary-medium-v2.2/resolve/main/meetkai_logo.png "MeetKai Logo")
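The format guide above recommends the quantized files (for example `functionary-small-v3.2-q4_k.gguf`) for CPU and low-VRAM inference. Below is a minimal sketch of loading one of those GGUF files with the `llama-cpp-python` bindings; the bindings, the local file path, and the settings are assumptions on top of this card, which itself only documents llama.cpp, transformers, and the vLLM server.

```python
# Minimal sketch (assumption): plain chat over the Q4_K GGUF file listed above,
# using llama-cpp-python on CPU. Install with: pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama(
    model_path="functionary-small-v3.2-q4_k.gguf",  # downloaded from this repo
    n_ctx=4096,      # context window
    n_threads=6,     # match your CPU core count
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What is the weather in Istanbul?"}],
    max_tokens=128,
)
print(response["choices"][0]["message"]["content"])
```

Note that this sketch only covers plain chat over the quantized weights; for full function calling, the transformers example above and the OpenAI-compatible vLLM server remain the documented paths.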
Mungert/functionary-v4r-small-preview-GGUF
Mungert
2025-06-15T19:42:07Z
320
4
null
[ "gguf", "license:mit", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2025-03-23T18:56:56Z
--- license: mit --- # <span style="color: #7FFF7F;">functionary-v4r-small-preview GGUF Models</span> ## <span style="color: #7FFF7F;">Ultra-Low-Bit Quantization with IQ-DynamicGate (1-2 bit)</span> Our latest quantization method introduces **precision-adaptive quantization** for ultra-low-bit models (1-2 bit), with benchmark-proven improvements on **Llama-3-8B**. This approach uses layer-specific strategies to preserve accuracy while maintaining extreme memory efficiency. ### **Benchmark Context** All tests conducted on **Llama-3-8B-Instruct** using: - Standard perplexity evaluation pipeline - 2048-token context window - Same prompt set across all quantizations ### **Method** - **Dynamic Precision Allocation**: - First/Last 25% of layers โ†’ IQ4_XS (selected layers) - Middle 50% โ†’ IQ2_XXS/IQ3_S (increase efficiency) - **Critical Component Protection**: - Embeddings/output layers use Q5_K - Reduces error propagation by 38% vs standard 1-2bit ### **Quantization Performance Comparison (Llama-3-8B)** | Quantization | Standard PPL | DynamicGate PPL | ฮ” PPL | Std Size | DG Size | ฮ” Size | Std Speed | DG Speed | |--------------|--------------|------------------|---------|----------|---------|--------|-----------|----------| | IQ2_XXS | 11.30 | 9.84 | -12.9% | 2.5G | 2.6G | +0.1G | 234s | 246s | | IQ2_XS | 11.72 | 11.63 | -0.8% | 2.7G | 2.8G | +0.1G | 242s | 246s | | IQ2_S | 14.31 | 9.02 | -36.9% | 2.7G | 2.9G | +0.2G | 238s | 244s | | IQ1_M | 27.46 | 15.41 | -43.9% | 2.2G | 2.5G | +0.3G | 206s | 212s | | IQ1_S | 53.07 | 32.00 | -39.7% | 2.1G | 2.4G | +0.3G | 184s | 209s | **Key**: - PPL = Perplexity (lower is better) - ฮ” PPL = Percentage change from standard to DynamicGate - Speed = Inference time (CPU avx2, 2048 token context) - Size differences reflect mixed quantization overhead **Key Improvements:** - ๐Ÿ”ฅ **IQ1_M** shows massive 43.9% perplexity reduction (27.46 โ†’ 15.41) - ๐Ÿš€ **IQ2_S** cuts perplexity by 36.9% while adding only 0.2GB - โšก **IQ1_S** maintains 39.7% better accuracy despite 1-bit quantization **Tradeoffs:** - All variants have modest size increases (0.1-0.3GB) - Inference speeds remain comparable (<5% difference) ### **When to Use These Models** ๐Ÿ“Œ **Fitting models into GPU VRAM** โœ” **Memory-constrained deployments** โœ” **Cpu and Edge Devices** where 1-2bit errors can be tolerated โœ” **Research** into ultra-low-bit quantization ## **Choosing the Right Model Format** Selecting the correct model format depends on your **hardware capabilities** and **memory constraints**. ### **BF16 (Brain Float 16) โ€“ Use if BF16 acceleration is available** - A 16-bit floating-point format designed for **faster computation** while retaining good precision. - Provides **similar dynamic range** as FP32 but with **lower memory usage**. - Recommended if your hardware supports **BF16 acceleration** (check your device's specs). - Ideal for **high-performance inference** with **reduced memory footprint** compared to FP32. ๐Ÿ“Œ **Use BF16 if:** โœ” Your hardware has native **BF16 support** (e.g., newer GPUs, TPUs). โœ” You want **higher precision** while saving memory. โœ” You plan to **requantize** the model into another format. ๐Ÿ“Œ **Avoid BF16 if:** โŒ Your hardware does **not** support BF16 (it may fall back to FP32 and run slower). โŒ You need compatibility with older devices that lack BF16 optimization. --- ### **F16 (Float 16) โ€“ More widely supported than BF16** - A 16-bit floating-point **high precision** but with less of range of values than BF16. 
- Works on most devices with **FP16 acceleration support** (including many GPUs and some CPUs). - Slightly lower numerical precision than BF16 but generally sufficient for inference. ๐Ÿ“Œ **Use F16 if:** โœ” Your hardware supports **FP16** but **not BF16**. โœ” You need a **balance between speed, memory usage, and accuracy**. โœ” You are running on a **GPU** or another device optimized for FP16 computations. ๐Ÿ“Œ **Avoid F16 if:** โŒ Your device lacks **native FP16 support** (it may run slower than expected). โŒ You have memory limitations. --- ### **Quantized Models (Q4_K, Q6_K, Q8, etc.) โ€“ For CPU & Low-VRAM Inference** Quantization reduces model size and memory usage while maintaining as much accuracy as possible. - **Lower-bit models (Q4_K)** โ†’ **Best for minimal memory usage**, may have lower precision. - **Higher-bit models (Q6_K, Q8_0)** โ†’ **Better accuracy**, requires more memory. ๐Ÿ“Œ **Use Quantized Models if:** โœ” You are running inference on a **CPU** and need an optimized model. โœ” Your device has **low VRAM** and cannot load full-precision models. โœ” You want to reduce **memory footprint** while keeping reasonable accuracy. ๐Ÿ“Œ **Avoid Quantized Models if:** โŒ You need **maximum accuracy** (full-precision models are better for this). โŒ Your hardware has enough VRAM for higher-precision formats (BF16/F16). --- ### **Very Low-Bit Quantization (IQ3_XS, IQ3_S, IQ3_M, Q4_K, Q4_0)** These models are optimized for **extreme memory efficiency**, making them ideal for **low-power devices** or **large-scale deployments** where memory is a critical constraint. - **IQ3_XS**: Ultra-low-bit quantization (3-bit) with **extreme memory efficiency**. - **Use case**: Best for **ultra-low-memory devices** where even Q4_K is too large. - **Trade-off**: Lower accuracy compared to higher-bit quantizations. - **IQ3_S**: Small block size for **maximum memory efficiency**. - **Use case**: Best for **low-memory devices** where **IQ3_XS** is too aggressive. - **IQ3_M**: Medium block size for better accuracy than **IQ3_S**. - **Use case**: Suitable for **low-memory devices** where **IQ3_S** is too limiting. - **Q4_K**: 4-bit quantization with **block-wise optimization** for better accuracy. - **Use case**: Best for **low-memory devices** where **Q6_K** is too large. - **Q4_0**: Pure 4-bit quantization, optimized for **ARM devices**. - **Use case**: Best for **ARM-based devices** or **low-memory environments**. --- ### **Summary Table: Model Format Selection** | Model Format | Precision | Memory Usage | Device Requirements | Best Use Case | |--------------|------------|---------------|----------------------|---------------| | **BF16** | Highest | High | BF16-supported GPU/CPUs | High-speed inference with reduced memory | | **F16** | High | High | FP16-supported devices | GPU inference when BF16 isn't available | | **Q4_K** | Medium Low | Low | CPU or Low-VRAM devices | Best for memory-constrained environments | | **Q6_K** | Medium | Moderate | CPU with more memory | Better accuracy while still being quantized | | **Q8_0** | High | Moderate | CPU or GPU with enough VRAM | Best accuracy among quantized models | | **IQ3_XS** | Very Low | Very Low | Ultra-low-memory devices | Extreme memory efficiency and low accuracy | | **Q4_0** | Low | Low | ARM or low-memory devices | llama.cpp can optimize for ARM devices | --- ## **Included Files & Details** ### `functionary-v4r-small-preview-bf16.gguf` - Model weights preserved in **BF16**. 
- Use this if you want to **requantize** the model into a different format. - Best if your device supports **BF16 acceleration**. ### `functionary-v4r-small-preview-f16.gguf` - Model weights stored in **F16**. - Use if your device supports **FP16**, especially if BF16 is not available. ### `functionary-v4r-small-preview-bf16-q8_0.gguf` - **Output & embeddings** remain in **BF16**. - All other layers quantized to **Q8_0**. - Use if your device supports **BF16** and you want a quantized version. ### `functionary-v4r-small-preview-f16-q8_0.gguf` - **Output & embeddings** remain in **F16**. - All other layers quantized to **Q8_0**. ### `functionary-v4r-small-preview-q4_k.gguf` - **Output & embeddings** quantized to **Q8_0**. - All other layers quantized to **Q4_K**. - Good for **CPU inference** with limited memory. ### `functionary-v4r-small-preview-q4_k_s.gguf` - Smallest **Q4_K** variant, using less memory at the cost of accuracy. - Best for **very low-memory setups**. ### `functionary-v4r-small-preview-q6_k.gguf` - **Output & embeddings** quantized to **Q8_0**. - All other layers quantized to **Q6_K** . ### `functionary-v4r-small-preview-q8_0.gguf` - Fully **Q8** quantized model for better accuracy. - Requires **more memory** but offers higher precision. ### `functionary-v4r-small-preview-iq3_xs.gguf` - **IQ3_XS** quantization, optimized for **extreme memory efficiency**. - Best for **ultra-low-memory devices**. ### `functionary-v4r-small-preview-iq3_m.gguf` - **IQ3_M** quantization, offering a **medium block size** for better accuracy. - Suitable for **low-memory devices**. ### `functionary-v4r-small-preview-q4_0.gguf` - Pure **Q4_0** quantization, optimized for **ARM devices**. - Best for **low-memory environments**. - Prefer IQ4_NL for better accuracy. # <span id="testllm" style="color: #7F7FFF;">๐Ÿš€ If you find these models useful</span> Please click like โค . Also I'd really appreciate it if you could test my Network Monitor Assistant at ๐Ÿ‘‰ [Network Monitor Assitant](https://readyforquantum.com). ๐Ÿ’ฌ Click the **chat icon** (bottom right of the main and dashboard pages) . Choose a LLM; toggle between the LLM Types TurboLLM -> FreeLLM -> TestLLM. ### What I'm Testing I'm experimenting with **function calling** against my network monitoring service. Using small open source models. I am into the question "How small can it go and still function". ๐ŸŸก **TestLLM** โ€“ Runs the current testing model using llama.cpp on 6 threads of a Cpu VM (Should take about 15s to load. Inference speed is quite slow and it only processes one user prompt at a timeโ€”still working on scaling!). If you're curious, I'd be happy to share how it works! . ### The other Available AI Assistants ๐ŸŸข **TurboLLM** โ€“ Uses **gpt-4o-mini** Fast! . Note: tokens are limited since OpenAI models are pricey, but you can [Login](https://readyforquantum.com) or [Download](https://readyforquantum.com/download/?utm_source=huggingface&utm_medium=referral&utm_campaign=huggingface_repo_readme) the Quantum Network Monitor agent to get more tokens, Alternatively use the TestLLM . ๐Ÿ”ต **HugLLM** โ€“ Runs **open-source Hugging Face models** Fast, Runs small models (โ‰ˆ8B) hence lower quality, Get 2x more tokens (subject to Hugging Face API availability) ### Final Word I fund the servers used to create these model files, run the Quantum Network Monitor service, and pay for inference from Novita and OpenAIโ€”all out of my own pocket. 
All the code behind the model creation and the Quantum Network Monitor project is [open source](https://github.com/Mungert69). Feel free to use whatever you find helpful. If you appreciate the work, please consider [buying me a coffee](https://www.buymeacoffee.com/mahadeva) โ˜•. Your support helps cover service costs and allows me to raise token limits for everyone. I'm also open to job opportunities or sponsorship. Thank you! ๐Ÿ˜Š # Model Card for meetkai/functionary-v4r-small-preview **This model was based on [meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct)** [https://github.com/MeetKai/functionary](https://github.com/MeetKai/functionary) <img src="https://huggingface.co/meetkai/functionary-medium-v2.2/resolve/main/functionary_logo.jpg" alt="Functionary Logo" width="300"/> Functionary is a language model that can interpret and execute functions/plugins. The model determines when to execute functions, whether in parallel or serially, and can understand their outputs. It only triggers functions as needed. Function definitions are given as JSON Schema Objects, similar to OpenAI GPT function calls. ## Key Features - Generate the reasoning before deciding tool uses - Intelligent **parallel tool use** - Able to analyze functions/tools outputs and provide relevant responses **grounded in the outputs** - Able to decide **when to not use tools/call functions** and provide normal chat response - Truly one of the best open-source alternative to GPT-4 - Support code interpreter ## How to Get Started We provide custom code for parsing raw model responses into a JSON object containing role, content and tool_calls fields. This enables the users to read the function-calling output of the model easily. ```python from transformers import AutoModelForCausalLM, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("meetkai/functionary-v4r-small-preview") model = AutoModelForCausalLM.from_pretrained("meetkai/functionary-v4r-small-preview", device_map="auto", trust_remote_code=True) tools = [ { "type": "function", "function": { "name": "get_current_weather", "description": "Get the current weather", "parameters": { "type": "object", "properties": { "location": { "type": "string", "description": "The city and state, e.g. San Francisco, CA" } }, "required": ["location"] } } } ] # add this to make the model generate the reasoning first tools.append({"type": "reasoning"}) messages = [{"role": "user", "content": "What is the weather in Istanbul and Singapore respectively?"}] final_prompt = tokenizer.apply_chat_template(messages, tools, add_generation_prompt=True, tokenize=False) inputs = tokenizer(final_prompt, return_tensors="pt").to("cuda") pred = model.generate_tool_use(**inputs, max_new_tokens=128, tokenizer=tokenizer) print(tokenizer.decode(pred.cpu()[0])) ``` ## Prompt Template We convert function definitions to a similar text to TypeScript definitions. Then we inject these definitions as system prompts. After that, we inject the default system prompt. Then we start the conversation messages. This formatting is also available via our vLLM server which we process the functions into Typescript definitions encapsulated in a system message using a pre-defined Transformers Jinja chat template. 
This means that the lists of messages can be formatted for you with the apply_chat_template() method within our server: ```python from openai import OpenAI client = OpenAI(base_url="http://localhost:8000/v1", api_key="functionary") messages = [{"role": "user", "content": "What is the weather for Istanbul?"} ] tools = [{ "type": "function", "function": { "name": "get_current_weather", "description": "Get the current weather", "parameters": { "type": "object", "properties": { "location": { "type": "string", "description": "The city and state, e.g. San Francisco, CA" } }, "required": ["location"] } } }] # Add reasoning type to make the model generate the reasoning first tools.append({"type": "reasoning"}) client.chat.completions.create( model="path/to/functionary/model/", messages=messages, tools=tools, tool_choice="auto" ) ``` will yield: ``` <|start_header_id|>system<|end_header_id|> Reasoning Mode: On Cutting Knowledge Date: December 2023 You have access to the following functions: Use the function 'get_current_weather' to 'Get the current weather' {"name": "get_current_weather", "description": "Get the current weather", "parameters": {"type": "object", "properties": {"location": {"type": "string", "description": "The city and state, e.g. San Francisco, CA"}}, "required": ["location"]}} Think very carefully before calling functions. If a you choose to call a function ONLY reply in the following format: <{start_tag}={function_name}>{parameters}{end_tag} where start_tag => `<function` parameters => a JSON dict with the function argument name as key and function argument value as value. end_tag => `</function>` Here is an example, <function=example_function_name>{"example_name": "example_value"}</function> Reminder: - If looking for real time information use relevant functions before falling back to brave_search - Function calls MUST follow the specified format, start with <function= and end with </function> - Required parameters MUST be specified - Only call one function at a time - Put the entire function call reply on one line <|eot_id|><|start_header_id|>user<|end_header_id|> What is the weather for Istanbul? ``` ## Run the model We encourage users to run our models using our OpenAI-compatible vLLM server [here](https://github.com/MeetKai/functionary). # The MeetKai Team ![MeetKai Logo](https://huggingface.co/meetkai/functionary-medium-v2.2/resolve/main/meetkai_logo.png "MeetKai Logo")
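The system prompt shown above instructs the model to reply with calls of the form `<function=name>{json arguments}</function>` on a single line. If you want to post-process raw generations yourself rather than relying on the repo's parsing code or the vLLM server, the sketch below extracts those calls into simple dicts; the regex and the output field names are illustrative assumptions, not the project's official parser.

```python
import json
import re

# Matches the single-line call format described in the prompt template above, e.g.
# <function=get_current_weather>{"location": "Istanbul"}</function>
FUNCTION_CALL_RE = re.compile(
    r"<function=(?P<name>[\w\-]+)>(?P<args>\{.*?\})</function>", re.DOTALL
)

def parse_tool_calls(raw_output: str) -> list[dict]:
    """Extract function calls from a raw generation into tool_calls-style dicts."""
    calls = []
    for match in FUNCTION_CALL_RE.finditer(raw_output):
        try:
            arguments = json.loads(match.group("args"))
        except json.JSONDecodeError:
            continue  # skip malformed argument blocks rather than failing hard
        calls.append({"name": match.group("name"), "arguments": arguments})
    return calls

print(parse_tool_calls('<function=get_current_weather>{"location": "Istanbul"}</function>'))
# -> [{'name': 'get_current_weather', 'arguments': {'location': 'Istanbul'}}]
```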
Mungert/Llama-Guard-3-8B-GGUF
Mungert
2025-06-15T19:42:03Z
544
2
null
[ "gguf", "facebook", "meta", "pytorch", "llama", "llama-3", "text-generation", "en", "arxiv:2407.21783", "arxiv:2312.06674", "arxiv:2204.05862", "arxiv:2308.01263", "base_model:meta-llama/Llama-3.1-8B", "base_model:quantized:meta-llama/Llama-3.1-8B", "license:llama3.1", "endpoints_compa...
text-generation
2025-03-23T11:22:15Z
--- language: - en pipeline_tag: text-generation base_model: meta-llama/Meta-Llama-3.1-8B tags: - facebook - meta - pytorch - llama - llama-3 license: llama3.1 extra_gated_prompt: >- ### LLAMA 3.1 COMMUNITY LICENSE AGREEMENT Llama 3.1 Version Release Date: July 23, 2024 "Agreement" means the terms and conditions for use, reproduction, distribution and modification of the Llama Materials set forth herein. "Documentation" means the specifications, manuals and documentation accompanying Llama 3.1 distributed by Meta at https://llama.meta.com/doc/overview. "Licensee" or "you" means you, or your employer or any other person or entity (if you are entering into this Agreement on such person or entityโ€™s behalf), of the age required under applicable laws, rules or regulations to provide legal consent and that has legal authority to bind your employer or such other person or entity if you are entering in this Agreement on their behalf. "Llama 3.1" means the foundational large language models and software and algorithms, including machine-learning model code, trained model weights, inference-enabling code, training-enabling code, fine-tuning enabling code and other elements of the foregoing distributed by Meta at https://llama.meta.com/llama-downloads. "Llama Materials" means, collectively, Metaโ€™s proprietary Llama 3.1 and Documentation (and any portion thereof) made available under this Agreement. "Meta" or "we" means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland). 1. License Rights and Redistribution. a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable and royalty-free limited license under Metaโ€™s intellectual property or other rights owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the Llama Materials. b. Redistribution and Use. i. If you distribute or make available the Llama Materials (or any derivative works thereof), or a product or service (including another AI model) that contains any of them, you shall (A) provide a copy of this Agreement with any such Llama Materials; and (B) prominently display โ€œBuilt with Llamaโ€ on a related website, user interface, blogpost, about page, or product documentation. If you use the Llama Materials or any outputs or results of the Llama Materials to create, train, fine tune, or otherwise improve an AI model, which is distributed or made available, you shall also include โ€œLlamaโ€ at the beginning of any such AI model name. ii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part of an integrated end user product, then Section 2 of this Agreement will not apply to you. iii. You must retain in all copies of the Llama Materials that you distribute the following attribution notice within a โ€œNoticeโ€ text file distributed as a part of such copies: โ€œLlama 3.1 is licensed under the Llama 3.1 Community License, Copyright ยฉ Meta Platforms, Inc. All Rights Reserved.โ€ iv. Your use of the Llama Materials must comply with applicable laws and regulations (including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama Materials (available at https://llama.meta.com/llama3_1/use-policy), which is hereby incorporated by reference into this Agreement. 2. Additional Commercial Terms. 
If, on the Llama 3.1 version release date, the monthly active users of the products or services made available by or for Licensee, or Licenseeโ€™s affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights. 3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN โ€œAS ISโ€ BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS. 4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING. 5. Intellectual Property. a. No trademark licenses are granted under this Agreement, and in connection with the Llama Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates, except as required for reasonable and customary use in describing and redistributing the Llama Materials or as set forth in this Section 5(a). Meta hereby grants you a license to use โ€œLlamaโ€ (the โ€œMarkโ€) solely as required to comply with the last sentence of Section 1.b.i. You will comply with Metaโ€™s brand guidelines (currently accessible at https://about.meta.com/brand/resources/meta/company-brand/ ). All goodwill arising out of your use of the Mark will inure to the benefit of Meta. b. Subject to Metaโ€™s ownership of Llama Materials and derivatives made by or for Meta, with respect to any derivative works and modifications of the Llama Materials that are made by you, as between you and Meta, you are and will be the owner of such derivative works and modifications. c. If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Llama 3.1 outputs or results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold harmless Meta from and against any claim by any third party arising out of or related to your use or distribution of the Llama Materials. 6. Term and Termination. The term of this Agreement will commence upon your acceptance of this Agreement or access to the Llama Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this Agreement. 
Upon termination of this Agreement, you shall delete and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this Agreement. 7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of the State of California without regard to choice of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement. The courts of California shall have exclusive jurisdiction of any dispute arising out of this Agreement. ### Llama 3.1 Acceptable Use Policy Meta is committed to promoting safe and fair use of its tools and features, including Llama 3.1. If you access or use Llama 3.1, you agree to this Acceptable Use Policy (โ€œPolicyโ€). The most recent copy of this policy can be found at [https://llama.meta.com/llama3_1/use-policy](https://llama.meta.com/llama3_1/use-policy) #### Prohibited Uses We want everyone to use Llama 3.1 safely and responsibly. You agree you will not use, or allow others to use, Llama 3.1 to: 1. Violate the law or othersโ€™ rights, including to: 1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as: 1. Violence or terrorism 2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material 3. Human trafficking, exploitation, and sexual violence 4. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials. 5. Sexual solicitation 6. Any other criminal activity 3. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals 4. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services 5. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices 6. Collect, process, disclose, generate, or infer health, demographic, or other sensitive personal or private information about individuals without rights and consents required by applicable laws 7. Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama Materials 8. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system 2. Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Llama 3.1 related to the following: 1. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State 2. Guns and illegal weapons (including weapon development) 3. Illegal drugs and regulated/controlled substances 4. 
Operation of critical infrastructure, transportation technologies, or heavy machinery 5. Self-harm or harm to others, including suicide, cutting, and eating disorders 6. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual 3. Intentionally deceive or mislead others, including use of Llama 3.1 related to the following: 1. Generating, promoting, or furthering fraud or the creation or promotion of disinformation 2. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content 3. Generating, promoting, or further distributing spam 4. Impersonating another individual without consent, authorization, or legal right 5. Representing that the use of Llama 3.1 or outputs are human-generated 6. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement 4. Fail to appropriately disclose to end users any known dangers of your AI system Please report any violation of this Policy, software โ€œbug,โ€ or other problems that could lead to a violation of this Policy through one of the following means: * Reporting issues with the model: [https://github.com/meta-llama/llama-models/issues](https://github.com/meta-llama/llama-models/issues) * Reporting risky content generated by the model: developers.facebook.com/llama_output_feedback * Reporting bugs and security concerns: facebook.com/whitehat/info * Reporting violations of the Acceptable Use Policy or unlicensed uses of Meta Llama 3: LlamaUseReport@meta.com extra_gated_fields: First Name: text Last Name: text Date of birth: date_picker Country: country Affiliation: text Job title: type: select options: - Student - Research Graduate - AI researcher - AI developer/engineer - Reporter - Other geo: ip_location By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy: checkbox extra_gated_description: The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/). extra_gated_button_content: Submit --- # <span style="color: #7FFF7F;">Llama-Guard-3-8B GGUF Models</span> ## **Choosing the Right Model Format** Selecting the correct model format depends on your **hardware capabilities** and **memory constraints**. ### **BF16 (Brain Float 16) โ€“ Use if BF16 acceleration is available** - A 16-bit floating-point format designed for **faster computation** while retaining good precision. - Provides **similar dynamic range** as FP32 but with **lower memory usage**. - Recommended if your hardware supports **BF16 acceleration** (check your deviceโ€™s specs). - Ideal for **high-performance inference** with **reduced memory footprint** compared to FP32. ๐Ÿ“Œ **Use BF16 if:** โœ” Your hardware has native **BF16 support** (e.g., newer GPUs, TPUs). โœ” You want **higher precision** while saving memory. โœ” You plan to **requantize** the model into another format. ๐Ÿ“Œ **Avoid BF16 if:** โŒ Your hardware does **not** support BF16 (it may fall back to FP32 and run slower). โŒ You need compatibility with older devices that lack BF16 optimization. --- ### **F16 (Float 16) โ€“ More widely supported than BF16** - A 16-bit floating-point **high precision** but with less of range of values than BF16. 
- Works on most devices with **FP16 acceleration support** (including many GPUs and some CPUs). - Slightly lower numerical precision than BF16 but generally sufficient for inference. ๐Ÿ“Œ **Use F16 if:** โœ” Your hardware supports **FP16** but **not BF16**. โœ” You need a **balance between speed, memory usage, and accuracy**. โœ” You are running on a **GPU** or another device optimized for FP16 computations. ๐Ÿ“Œ **Avoid F16 if:** โŒ Your device lacks **native FP16 support** (it may run slower than expected). โŒ You have memory limitations. --- ### **Quantized Models (Q4_K, Q6_K, Q8, etc.) โ€“ For CPU & Low-VRAM Inference** Quantization reduces model size and memory usage while maintaining as much accuracy as possible. - **Lower-bit models (Q4_K)** โ†’ **Best for minimal memory usage**, may have lower precision. - **Higher-bit models (Q6_K, Q8_0)** โ†’ **Better accuracy**, requires more memory. ๐Ÿ“Œ **Use Quantized Models if:** โœ” You are running inference on a **CPU** and need an optimized model. โœ” Your device has **low VRAM** and cannot load full-precision models. โœ” You want to reduce **memory footprint** while keeping reasonable accuracy. ๐Ÿ“Œ **Avoid Quantized Models if:** โŒ You need **maximum accuracy** (full-precision models are better for this). โŒ Your hardware has enough VRAM for higher-precision formats (BF16/F16). --- ### **Very Low-Bit Quantization (IQ3_XS, IQ3_S, IQ3_M, Q4_K, Q4_0)** These models are optimized for **extreme memory efficiency**, making them ideal for **low-power devices** or **large-scale deployments** where memory is a critical constraint. - **IQ3_XS**: Ultra-low-bit quantization (3-bit) with **extreme memory efficiency**. - **Use case**: Best for **ultra-low-memory devices** where even Q4_K is too large. - **Trade-off**: Lower accuracy compared to higher-bit quantizations. - **IQ3_S**: Small block size for **maximum memory efficiency**. - **Use case**: Best for **low-memory devices** where **IQ3_XS** is too aggressive. - **IQ3_M**: Medium block size for better accuracy than **IQ3_S**. - **Use case**: Suitable for **low-memory devices** where **IQ3_S** is too limiting. - **Q4_K**: 4-bit quantization with **block-wise optimization** for better accuracy. - **Use case**: Best for **low-memory devices** where **Q6_K** is too large. - **Q4_0**: Pure 4-bit quantization, optimized for **ARM devices**. - **Use case**: Best for **ARM-based devices** or **low-memory environments**. --- ### **Summary Table: Model Format Selection** | Model Format | Precision | Memory Usage | Device Requirements | Best Use Case | |--------------|------------|---------------|----------------------|---------------| | **BF16** | Highest | High | BF16-supported GPU/CPUs | High-speed inference with reduced memory | | **F16** | High | High | FP16-supported devices | GPU inference when BF16 isnโ€™t available | | **Q4_K** | Medium Low | Low | CPU or Low-VRAM devices | Best for memory-constrained environments | | **Q6_K** | Medium | Moderate | CPU with more memory | Better accuracy while still being quantized | | **Q8_0** | High | Moderate | CPU or GPU with enough VRAM | Best accuracy among quantized models | | **IQ3_XS** | Very Low | Very Low | Ultra-low-memory devices | Extreme memory efficiency and low accuracy | | **Q4_0** | Low | Low | ARM or low-memory devices | llama.cpp can optimize for ARM devices | --- ## **Included Files & Details** ### `Llama-Guard-3-8B-bf16.gguf` - Model weights preserved in **BF16**. 
- Use this if you want to **requantize** the model into a different format. - Best if your device supports **BF16 acceleration**. ### `Llama-Guard-3-8B-f16.gguf` - Model weights stored in **F16**. - Use if your device supports **FP16**, especially if BF16 is not available. ### `Llama-Guard-3-8B-bf16-q8_0.gguf` - **Output & embeddings** remain in **BF16**. - All other layers quantized to **Q8_0**. - Use if your device supports **BF16** and you want a quantized version. ### `Llama-Guard-3-8B-f16-q8_0.gguf` - **Output & embeddings** remain in **F16**. - All other layers quantized to **Q8_0**. ### `Llama-Guard-3-8B-q4_k.gguf` - **Output & embeddings** quantized to **Q8_0**. - All other layers quantized to **Q4_K**. - Good for **CPU inference** with limited memory. ### `Llama-Guard-3-8B-q4_k_s.gguf` - Smallest **Q4_K** variant, using less memory at the cost of accuracy. - Best for **very low-memory setups**. ### `Llama-Guard-3-8B-q6_k.gguf` - **Output & embeddings** quantized to **Q8_0**. - All other layers quantized to **Q6_K** . ### `Llama-Guard-3-8B-q8_0.gguf` - Fully **Q8** quantized model for better accuracy. - Requires **more memory** but offers higher precision. ### `Llama-Guard-3-8B-iq3_xs.gguf` - **IQ3_XS** quantization, optimized for **extreme memory efficiency**. - Best for **ultra-low-memory devices**. ### `Llama-Guard-3-8B-iq3_m.gguf` - **IQ3_M** quantization, offering a **medium block size** for better accuracy. - Suitable for **low-memory devices**. ### `Llama-Guard-3-8B-q4_0.gguf` - Pure **Q4_0** quantization, optimized for **ARM devices**. - Best for **low-memory environments**. - Prefer IQ4_NL for better accuracy. # <span id="testllm" style="color: #7F7FFF;">๐Ÿš€ If you find these models useful</span> Please click like โค . Also Iโ€™d really appreciate it if you could test my Network Monitor Assistant at ๐Ÿ‘‰ [Network Monitor Assitant](https://readyforquantum.com). ๐Ÿ’ฌ Click the **chat icon** (bottom right of the main and dashboard pages) . Choose a LLM; toggle between the LLM Types TurboLLM -> FreeLLM -> TestLLM. ### What I'm Testing I'm experimenting with **function calling** against my network monitoring service. Using small open source models. I am into the question "How small can it go and still function". ๐ŸŸก **TestLLM** โ€“ Runs the current testing model using llama.cpp on 6 threads of a Cpu VM (Should take about 15s to load. Inference speed is quite slow and it only processes one user prompt at a timeโ€”still working on scaling!). If you're curious, I'd be happy to share how it works! . ### The other Available AI Assistants ๐ŸŸข **TurboLLM** โ€“ Uses **gpt-4o-mini** Fast! . Note: tokens are limited since OpenAI models are pricey, but you can [Login](https://readyforquantum.com) or [Download](https://readyforquantum.com/download/?utm_source=huggingface&utm_medium=referral&utm_campaign=huggingface_repo_readme) the Quantum Network Monitor agent to get more tokens, Alternatively use the TestLLM . ๐Ÿ”ต **HugLLM** โ€“ Runs **open-source Hugging Face models** Fast, Runs small models (โ‰ˆ8B) hence lower quality, Get 2x more tokens (subject to Hugging Face API availability) ### Final Word I fund the servers used to create these model files, run the Quantum Network Monitor service, and pay for inference from Novita and OpenAIโ€”all out of my own pocket. All the code behind the model creation and the Quantum Network Monitor project is [open source](https://github.com/Mungert69). Feel free to use whatever you find helpful. 
If you appreciate the work, please consider [buying me a coffee](https://www.buymeacoffee.com/mahadeva) โ˜•. Your support helps cover service costs and allows me to raise token limits for everyone. I'm also open to job opportunities or sponsorship. Thank you! ๐Ÿ˜Š # Model Details Llama Guard 3 is a Llama-3.1-8B pretrained model, fine-tuned for content safety classification. Similar to previous versions, it can be used to classify content in both LLM inputs (prompt classification) and in LLM responses (response classification). It acts as an LLM โ€“ it generates text in its output that indicates whether a given prompt or response is safe or unsafe, and if unsafe, it also lists the content categories violated. Llama Guard 3 was aligned to safeguard against the MLCommons standardized hazards taxonomy and designed to support Llama 3.1 capabilities. Specifically, it provides content moderation in 8 languages, and was optimized to support safety and security for search and code interpreter tool calls. Below is a response classification example for Llama Guard 3. <p align="center"> <img src="llama_guard_3_figure.png" width="800"/> </p> In order to produce classifier scores, we look at the probability for the first token, and use that as the โ€œunsafeโ€ class probability. We can then apply score thresholding to make binary decisions. # Hazard Taxonomy and Policy The model is trained to predict safety labels on the 14 categories shown below, based on the [MLCommons taxonomy](https://mlcommons.org/2024/04/mlc-aisafety-v0-5-poc/) of 13 hazards, as well as an additional category for Code Interpreter Abuse for tool calls use cases <table align="center"> <thead> <tr> <th colspan="2">Hazard categories</th> </tr> </thead> <tbody> <tr> <td>S1: Violent Crimes</td> <td>S2: Non-Violent Crimes</td> </tr> <tr> <td>S3: Sex-Related Crimes</td> <td>S4: Child Sexual Exploitation</td> </tr> <tr> <td>S5: Defamation</td> <td>S6: Specialized Advice</td> </tr> <tr> <td>S7: Privacy</td> <td>S8: Intellectual Property</td> </tr> <tr> <td>S9: Indiscriminate Weapons</td> <td>S10: Hate</td> </tr> <tr> <td>S11: Suicide &amp; Self-Harm</td> <td>S12: Sexual Content</td> </tr> <tr> <td>S13: Elections</td> <td>S14: Code Interpreter Abuse</td> </tr> </tbody> </table> **S1 - Violent Crimes** Responses that enable, encourage, or endorse the commission of violent crimes, including: (1) unlawful violence toward people (ex: terrorism, genocide, murder, hate-motivated violent crimes, child abuse, assault, battery, kidnapping) and (2) unlawful violence toward animals (ex: animal abuse) **S2 - Non-Violent Crimes** Responses that enable, encourage, or endorse the commission of non-violent crimes, including: (1) personal crimes (ex: labor trafficking, threats, intimidation, hate-motivated non-violent crimes), (2) financial crimes (ex: fraud, scams, money laundering), (3) property crimes (ex: theft, arson, vandalism), (4) drug crimes (ex: creating or using regulated substances), (5) weapons crimes (ex: producing unlicensed firearms), and (6) cyber crimes (ex: hacking) **S3 - Sex-Related Crimes** Responses that enable, encourage, or endorse the commission of sex-related crimes, including: (1) sex trafficking, (2) sexual assault (ex: rape), (3) sexual harassment (physical, verbal, or visual), and (4) prostitution **S4 - Child Sexual Exploitation** Responses that contain, describe, enable, encourage, or endorse the sexual abuse of children **S5 - Defamation** Responses that are both verifiably false and likely to injure a living personโ€™s 
reputation **S6 - Specialized Advice** Responses that contain specialized financial, medical, or legal advice, or that indicate dangerous activities or objects are safe **S7 - Privacy** Responses that contain sensitive, nonpublic personal information that could undermine someoneโ€™s physical, digital, or financial security **S8 - Intellectual Property** Responses that may violate the intellectual property rights of any third party **S9 - Indiscriminate Weapons** Responses that enable, encourage, or endorse the creation of indiscriminate weapons, including: (1) chemical weapons (ex: nerve gas), (2) biological weapons (ex: anthrax), (3) radiological weapons (ex: cobalt bombs), (4) nuclear weapons (ex: fission bombs), and (5) high-yield explosive weapons (ex: cluster munitions) **S10 - Hate** Responses that demean or dehumanize people on the basis of their sensitive, personal characteristics (i.e., race, color, ethnicity, national origin, disability, religious affiliation, caste, sexual orientation, sex, gender identity, and/or serious disease) **S11 - Suicide &amp; Self-Harm** Responses that enable, encourage, or endorse acts of intentional self-harm, including: (1) suicide, (2) self-injury (ex: cutting), and (3) disordered eating **S12 - Sexual Content** Responses that contain erotica **S13 - Elections** Responses that contain factually incorrect information about electoral systems and processes, including in the time, place, or manner of voting in civic elections **S14 - Code Interpreter Abuse** Responses that seek to abuse code interpreters, including those that enable denial of service attacks, container escapes or privilege escalation exploits # Supported languages Llama Guard 3 supports content safety for the following languages : English, French, German, Hindi, Italian, Portuguese, Spanish, Thai. # Usage > [!IMPORTANT] > > This repository corresponds to half-precision version of the model. A 8-bit precision version is also provided, please visit [meta-llama/Llama-Guard-3-8B-INT8](https://huggingface.co/meta-llama/Llama-Guard-3-8B-INT8). Llama Guard 3 can be directly used with `transformers`. It is only supported since `transformers` version 4.43. ```python import torch from transformers import AutoTokenizer, AutoModelForCausalLM model_id = "meta-llama/Llama-Guard-3-8B" device = "cuda" dtype = torch.bfloat16 tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=dtype, device_map=device) def moderate(chat): input_ids = tokenizer.apply_chat_template(chat, return_tensors="pt").to(device) output = model.generate(input_ids=input_ids, max_new_tokens=100, pad_token_id=0) prompt_len = input_ids.shape[-1] return tokenizer.decode(output[0][prompt_len:], skip_special_tokens=True) moderate([ {"role": "user", "content": "I forgot how to kill a process in Linux, can you help?"}, {"role": "assistant", "content": "Sure! To kill a process in Linux, you can use the kill command followed by the process ID (PID) of the process you want to terminate."}, ]) ``` # Training Data We use the English data used by Llama Guard [1], which are obtained by getting Llama 2 and Llama 3 generations on prompts from the hh-rlhf dataset [2]. In order to scale training data for new categories and new capabilities such as multilingual and tool use, we collect additional human and synthetically generated data. Similar to the English data, the multilingual data are Human-AI conversation data that are either single-turn or multi-turn. 
To reduce the model’s false positive rate, we curate a set of multilingual benign prompt and response data where LLMs are likely to reject the prompts. For the tool use capability, we consider search tool calls and code interpreter abuse. To develop training data for search tool use, we use Llama 3 to generate responses to a collected and synthetic set of prompts. The generations are based on the query results obtained from the Brave Search API. To develop synthetic training data to detect code interpreter attacks, we use an LLM to generate safe and unsafe prompts. Then, we use a non-safety-tuned LLM to generate code interpreter completions that comply with these instructions. For safe data, we focus on data close to the boundary of what would be considered unsafe, to minimize false positives on such borderline examples. # Evaluation **Note on evaluations:** As discussed in the original Llama Guard paper, comparing model performance is not straightforward, as each model is built on its own policy and is expected to perform better on an evaluation dataset with a policy aligned to the model. This highlights the need for industry standards. By aligning the Llama Guard family of models with the Proof of Concept MLCommons taxonomy of hazards, we hope to drive adoption of industry standards like this and facilitate collaboration and transparency in the LLM safety and content evaluation space. In this regard, we evaluate the performance of Llama Guard 3 on the MLCommons hazard taxonomy and compare it across languages with Llama Guard 2 [3] on our internal test set. We also include GPT4 as a baseline with zero-shot prompting using the MLCommons hazard taxonomy. Tables 1, 2, and 3 show that Llama Guard 3 improves over Llama Guard 2 and outperforms GPT4 in English, multilingual, and tool use capabilities. Notably, Llama Guard 3 achieves better performance with much lower false positive rates. We also benchmark Llama Guard 3 on the OSS dataset XSTest [4] and observe that it achieves the same F1 score but a lower false positive rate compared to Llama Guard 2.
<div align="center"> <small> Table 1: Comparison of performance of various models measured on our internal English test set for MLCommons hazard taxonomy (response classification).</small> | | **F1 โ†‘** | **AUPRC โ†‘** | **False Positive<br>Rate โ†“** | |--------------------------|:--------:|:-----------:|:----------------------------:| | Llama Guard 2 | 0.877 | 0.927 | 0.081 | | Llama Guard 3 | 0.939 | 0.985 | 0.040 | | GPT4 | 0.805 | N/A | 0.152 | </div> <br> <table align="center"> <small><center>Table 2: Comparison of multilingual performance of various models measured on our internal test set for MLCommons hazard taxonomy (prompt+response classification).</center></small> <thead> <tr> <th colspan="8"><center>F1 โ†‘ / FPR โ†“</center></th> </tr> </thead> <tbody> <tr> <td></td> <td><center>French</center></td> <td><center>German</center></td> <td><center>Hindi</center></td> <td><center>Italian</center></td> <td><center>Portuguese</center></td> <td><center>Spanish</center></td> <td><center>Thai</center></td> </tr> <tr> <td>Llama Guard 2</td> <td><center>0.911/0.012</center></td> <td><center>0.795/0.062</center></td> <td><center>0.832/0.062</center></td> <td><center>0.681/0.039</center></td> <td><center>0.845/0.032</center></td> <td><center>0.876/0.001</center></td> <td><center>0.822/0.078</center></td> </tr> <tr> <td>Llama Guard 3</td> <td><center>0.943/0.036</center></td> <td><center>0.877/0.032</center></td> <td><center>0.871/0.050</center></td> <td><center>0.873/0.038</center></td> <td><center>0.860/0.060</center></td> <td><center>0.875/0.023</center></td> <td><center>0.834/0.030</center></td> </tr> <tr> <td>GPT4</td> <td><center>0.795/0.157</center></td> <td><center>0.691/0.123</center></td> <td><center>0.709/0.206</center></td> <td><center>0.753/0.204</center></td> <td><center>0.738/0.207</center></td> <td><center>0.711/0.169</center></td> <td><center>0.688/0.168</center></td> </tr> </tbody> </table> <br> <table align="center"> <small><center>Table 3: Comparison of performance of various models measured on our internal test set for other moderation capabilities (prompt+response classification).</center></small> <thead> <tr> <th></th> <th colspan="3">Search tool calls</th> <th colspan="3">Code interpreter abuse</th> </tr> </thead> <tbody> <tr> <td></td> <td><center>F1 โ†‘</center></td> <td><center>AUPRC โ†‘</center></td> <td><center>FPR โ†“</center></td> <td><center>F1 โ†‘</center></td> <td><center>AUPRC โ†‘</center></td> <td><center>FPR โ†“</center></td> </tr> <tr> <td>Llama Guard 2</td> <td><center>0.749</center></td> <td><center>0.794</center></td> <td><center>0.284</center></td> <td><center>0.683</center></td> <td><center>0.677</center></td> <td><center>0.670</center></td> </tr> <tr> <td>Llama Guard 3</td> <td><center>0.856</center></td> <td><center>0.938</center></td> <td><center>0.174</center></td> <td><center>0.885</center></td> <td><center>0.967</center></td> <td><center>0.125</center></td> </tr> <tr> <td>GPT4</td> <td><center>0.732</center></td> <td><center>N/A</center></td> <td><center>0.525</center></td> <td><center>0.636</center></td> <td><center>N/A</center></td> <td><center>0.90</center></td> </tr> </tbody> </table> # Application As outlined in the Llama 3 paper, Llama Guard 3 provides industry leading system-level safety performance and is recommended to be deployed along with Llama 3.1. Note that, while deploying Llama Guard 3 will likely improve the safety of your system, it might increase refusals to benign prompts (False Positives). 
Violation rate improvement and impact on false positives as measured on internal benchmarks are provided in the Llama 3 paper. # Quantization We are committed to help the community deploy Llama systems responsibly. We provide a quantized version of Llama Guard 3 to lower the deployment cost. We used int 8 [implementation](https://huggingface.co/docs/transformers/main/en/quantization/bitsandbytes) integrated into the hugging face ecosystem, reducing the checkpoint size by about 40% with very small impact on model performance. In Table 5, we observe that the performance quantized model is comparable to the original model. <table align="center"> <small><center>Table 5: Impact of quantization on Llama Guard 3 performance.</center></small> <tbody> <tr> <td rowspan="2"><br /> <p><span>Task</span></p> </td> <td rowspan="2"><br /> <p><span>Capability</span></p> </td> <td colspan="4"> <p><center><span>Non-Quantized</span></center></p> </td> <td colspan="4"> <p><center><span>Quantized</span></center></p> </td> </tr> <tr> <td> <p><span>Precision</span></p> </td> <td> <p><span>Recall</span></p> </td> <td> <p><span>F1</span></p> </td> <td> <p><span>FPR</span></p> </td> <td> <p><span>Precision</span></p> </td> <td> <p><span>Recall</span></p> </td> <td> <p><span>F1</span></p> </td> <td> <p><span>FPR</span></p> </td> </tr> <tr> <td rowspan="3"> <p><span>Prompt Classification</span></p> </td> <td> <p><span>English</span></p> </td> <td> <p><span>0.952</span></p> </td> <td> <p><span>0.943</span></p> </td> <td> <p><span>0.947</span></p> </td> <td> <p><span>0.057</span></p> </td> <td> <p><span>0.961</span></p> </td> <td> <p><span>0.939</span></p> </td> <td> <p><span>0.950</span></p> </td> <td> <p><span>0.045</span></p> </td> </tr> <tr> <td> <p><span>Multilingual</span></p> </td> <td> <p><span>0.901</span></p> </td> <td> <p><span>0.899</span></p> </td> <td> <p><span>0.900</span></p> </td> <td> <p><span>0.054</span></p> </td> <td> <p><span>0.906</span></p> </td> <td> <p><span>0.892</span></p> </td> <td> <p><span>0.899</span></p> </td> <td> <p><span>0.051</span></p> </td> </tr> <tr> <td> <p><span>Tool Use</span></p> </td> <td> <p><span>0.884</span></p> </td> <td> <p><span>0.958</span></p> </td> <td> <p><span>0.920</span></p> </td> <td> <p><span>0.126</span></p> </td> <td> <p><span>0.876</span></p> </td> <td> <p><span>0.946</span></p> </td> <td> <p><span>0.909</span></p> </td> <td> <p><span>0.134</span></p> </td> </tr> <tr> <td rowspan="3"> <p><span>Response Classification</span></p> </td> <td> <p><span>English</span></p> </td> <td> <p><span>0.947</span></p> </td> <td> <p><span>0.931</span></p> </td> <td> <p><span>0.939</span></p> </td> <td> <p><span>0.040</span></p> </td> <td> <p><span>0.947</span></p> </td> <td> <p><span>0.925</span></p> </td> <td> <p><span>0.936</span></p> </td> <td> <p><span>0.040</span></p> </td> </tr> <tr> <td> <p><span>Multilingual</span></p> </td> <td> <p><span>0.929</span></p> </td> <td> <p><span>0.805</span></p> </td> <td> <p><span>0.862</span></p> </td> <td> <p><span>0.033</span></p> </td> <td> <p><span>0.931</span></p> </td> <td> <p><span>0.785</span></p> </td> <td> <p><span>0.851</span></p> </td> <td> <p><span>0.031</span></p> </td> </tr> <tr> <td> <p><span>Tool Use</span></p> </td> <td> <p><span>0.774</span></p> </td> <td> <p><span>0.884</span></p> </td> <td> <p><span>0.825</span></p> </td> <td> <p><span>0.176</span></p> </td> <td> <p><span>0.793</span></p> </td> <td> <p><span>0.865</span></p> </td> <td> <p><span>0.827</span></p> </td> <td> <p><span>0.155</span></p> </td> </tr> 
</tbody>
</table>

# Get started

Llama Guard 3 is available by default on Llama 3.1 [reference implementations](https://github.com/meta-llama). You can learn more about how to configure and customize it using [Llama Recipes](https://github.com/meta-llama/llama-recipes/tree/main/recipes/responsible_ai/) shared on our GitHub repository.

# Limitations

There are some limitations associated with Llama Guard 3. First, Llama Guard 3 itself is an LLM fine-tuned on Llama 3.1. Thus, its performance (e.g., judgments that need common-sense knowledge, multilingual capability, and policy coverage) might be limited by its (pre-)training data.

Second, some hazard categories may require factual, up-to-date knowledge to be evaluated (for example, S5: Defamation, S8: Intellectual Property, and S13: Elections). We believe more complex systems should be deployed to accurately moderate these categories for use cases highly sensitive to these types of hazards, but Llama Guard 3 provides a good baseline for generic use cases.

Lastly, as an LLM, Llama Guard 3 may be susceptible to adversarial attacks or prompt injection attacks that could bypass or alter its intended use. Please feel free to [report](https://github.com/meta-llama/PurpleLlama) vulnerabilities, and we will look to incorporate improvements in future versions of Llama Guard.

# Citation

```
@misc{dubey2024llama3herdmodels,
      title = {The Llama 3 Herd of Models},
      author = {Llama Team, AI @ Meta},
      year = {2024},
      eprint = {2407.21783},
      archivePrefix = {arXiv},
      primaryClass = {cs.AI},
      url = {https://arxiv.org/abs/2407.21783}
}
```

# References

[1] [Llama Guard: LLM-based Input-Output Safeguard for Human-AI Conversations](https://arxiv.org/abs/2312.06674)

[2] [Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback](https://arxiv.org/abs/2204.05862)

[3] [Llama Guard 2 Model Card](https://github.com/meta-llama/PurpleLlama/blob/main/Llama-Guard2/MODEL_CARD.md)

[4] [XSTest: A Test Suite for Identifying Exaggerated Safety Behaviors in Large Language Models](https://arxiv.org/abs/2308.01263)
Mungert/rwkv7-0.4B-world-GGUF
Mungert
2025-06-15T19:40:45Z
517
2
null
[ "gguf", "text-generation", "en", "zh", "ja", "ko", "fr", "ar", "es", "pt", "base_model:BlinkDL/rwkv-7-world", "base_model:quantized:BlinkDL/rwkv-7-world", "license:apache-2.0", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
text-generation
2025-03-18T09:56:18Z
---
license: apache-2.0
language:
- en
- zh
- ja
- ko
- fr
- ar
- es
- pt
metrics:
- accuracy
base_model:
- BlinkDL/rwkv-7-world
pipeline_tag: text-generation
---

# <span style="color: #7FFF7F;">rwkv7-0.4B-world GGUF Models</span>

Note: you must use the latest llama.cpp (https://github.com/ggml-org/llama.cpp) to run this model.

## **Choosing the Right Model Format**

Selecting the correct model format depends on your **hardware capabilities** and **memory constraints**.

### **BF16 (Brain Float 16) – Use if BF16 acceleration is available**
- A 16-bit floating-point format designed for **faster computation** while retaining good precision.
- Provides a **similar dynamic range** to FP32 but with **lower memory usage**.
- Recommended if your hardware supports **BF16 acceleration** (check your device's specs).
- Ideal for **high-performance inference** with a **reduced memory footprint** compared to FP32.

📌 **Use BF16 if:**
✔ Your hardware has native **BF16 support** (e.g., newer GPUs, TPUs).
✔ You want **higher precision** while saving memory.
✔ You plan to **requantize** the model into another format.

📌 **Avoid BF16 if:**
❌ Your hardware does **not** support BF16 (it may fall back to FP32 and run slower).
❌ You need compatibility with older devices that lack BF16 optimization.

---

### **F16 (Float 16) – More widely supported than BF16**
- A 16-bit floating-point format with **high precision** but a smaller range of values than BF16.
- Works on most devices with **FP16 acceleration support** (including many GPUs and some CPUs).
- Slightly lower numerical precision than BF16 but generally sufficient for inference.

📌 **Use F16 if:**
✔ Your hardware supports **FP16** but **not BF16**.
✔ You need a **balance between speed, memory usage, and accuracy**.
✔ You are running on a **GPU** or another device optimized for FP16 computations.

📌 **Avoid F16 if:**
❌ Your device lacks **native FP16 support** (it may run slower than expected).
❌ You have memory limitations.

---

### **Quantized Models (Q4_K, Q6_K, Q8, etc.) – For CPU & Low-VRAM Inference**
Quantization reduces model size and memory usage while maintaining as much accuracy as possible.
- **Lower-bit models (Q4_K)** → **Best for minimal memory usage**, may have lower precision.
- **Higher-bit models (Q6_K, Q8_0)** → **Better accuracy**, require more memory.

📌 **Use Quantized Models if:**
✔ You are running inference on a **CPU** and need an optimized model.
✔ Your device has **low VRAM** and cannot load full-precision models.
✔ You want to reduce the **memory footprint** while keeping reasonable accuracy.

📌 **Avoid Quantized Models if:**
❌ You need **maximum accuracy** (full-precision models are better for this).
❌ Your hardware has enough VRAM for higher-precision formats (BF16/F16).

---

### **Very Low-Bit Quantization (IQ3_XS, IQ3_S, IQ3_M, Q4_K, Q4_0)**
These models are optimized for **extreme memory efficiency**, making them ideal for **low-power devices** or **large-scale deployments** where memory is a critical constraint.

- **IQ3_XS**: Ultra-low-bit quantization (3-bit) with **extreme memory efficiency**.
  - **Use case**: Best for **ultra-low-memory devices** where even Q4_K is too large.
  - **Trade-off**: Lower accuracy compared to higher-bit quantizations.

- **IQ3_S**: Small block size for **maximum memory efficiency**.
  - **Use case**: Best for **low-memory devices** where **IQ3_XS** is too aggressive.

- **IQ3_M**: Medium block size for better accuracy than **IQ3_S**.
  - **Use case**: Suitable for **low-memory devices** where **IQ3_S** is too limiting.

- **Q4_K**: 4-bit quantization with **block-wise optimization** for better accuracy.
  - **Use case**: Best for **low-memory devices** where **Q6_K** is too large.

- **Q4_0**: Pure 4-bit quantization, optimized for **ARM devices**.
  - **Use case**: Best for **ARM-based devices** or **low-memory environments**.

---

### **Summary Table: Model Format Selection**

| Model Format | Precision | Memory Usage | Device Requirements | Best Use Case |
|--------------|-----------|--------------|---------------------|---------------|
| **BF16** | Highest | High | BF16-supported GPU/CPUs | High-speed inference with reduced memory |
| **F16** | High | High | FP16-supported devices | GPU inference when BF16 isn't available |
| **Q4_K** | Medium-Low | Low | CPU or low-VRAM devices | Best for memory-constrained environments |
| **Q6_K** | Medium | Moderate | CPU with more memory | Better accuracy while still being quantized |
| **Q8_0** | High | Moderate | CPU or GPU with enough VRAM | Best accuracy among quantized models |
| **IQ3_XS** | Very Low | Very Low | Ultra-low-memory devices | Extreme memory efficiency, low accuracy |
| **Q4_0** | Low | Low | ARM or low-memory devices | llama.cpp can optimize for ARM devices |

---

## **Included Files & Details**

A minimal loading sketch for these GGUF files is provided at the end of this card.

### `rwkv7-0.4B-world-bf16.gguf`
- Model weights preserved in **BF16**.
- Use this if you want to **requantize** the model into a different format.
- Best if your device supports **BF16 acceleration**.

### `rwkv7-0.4B-world-f16.gguf`
- Model weights stored in **F16**.
- Use if your device supports **FP16**, especially if BF16 is not available.

### `rwkv7-0.4B-world-bf16-q8_0.gguf`
- **Output & embeddings** remain in **BF16**.
- All other layers quantized to **Q8_0**.
- Use if your device supports **BF16** and you want a quantized version.

### `rwkv7-0.4B-world-f16-q8_0.gguf`
- **Output & embeddings** remain in **F16**.
- All other layers quantized to **Q8_0**.

### `rwkv7-0.4B-world-q4_k.gguf`
- **Output & embeddings** quantized to **Q8_0**.
- All other layers quantized to **Q4_K**.
- Good for **CPU inference** with limited memory.

### `rwkv7-0.4B-world-q4_k_s.gguf`
- Smallest **Q4_K** variant, using less memory at the cost of accuracy.
- Best for **very low-memory setups**.

### `rwkv7-0.4B-world-q6_k.gguf`
- **Output & embeddings** quantized to **Q8_0**.
- All other layers quantized to **Q6_K**.

### `rwkv7-0.4B-world-q8_0.gguf`
- Fully **Q8** quantized model for better accuracy.
- Requires **more memory** but offers higher precision.

### `rwkv7-0.4B-world-iq3_xs.gguf`
- **IQ3_XS** quantization, optimized for **extreme memory efficiency**.
- Best for **ultra-low-memory devices**.

### `rwkv7-0.4B-world-iq3_m.gguf`
- **IQ3_M** quantization, offering a **medium block size** for better accuracy.
- Suitable for **low-memory devices**.

### `rwkv7-0.4B-world-q4_0.gguf`
- Pure **Q4_0** quantization, optimized for **ARM devices**.
- Best for **low-memory environments**.
- Prefer IQ4_NL for better accuracy.

# <span id="testllm" style="color: #7F7FFF;">🚀 If you find these models useful</span>

Please click like ❤. Also, I'd really appreciate it if you could test my Network Monitor Assistant at 👉 [Network Monitor Assistant](https://readyforquantum.com).

💬 Click the **chat icon** (bottom right of the main and dashboard pages). Choose an LLM; toggle between the LLM types TurboLLM -> FreeLLM -> TestLLM.
### What I'm Testing

I'm experimenting with **function calling** against my network monitoring service, using small open-source models and focusing on the question: "How small can it go and still function?"

🟡 **TestLLM** – Runs the current testing model using llama.cpp on 6 threads of a CPU VM (it should take about 15 s to load; inference is quite slow and it only processes one user prompt at a time—still working on scaling!). If you're curious, I'd be happy to share how it works!

### The Other Available AI Assistants

🟢 **TurboLLM** – Uses **gpt-4o-mini**. Fast! Note: tokens are limited since OpenAI models are pricey, but you can [Login](https://readyforquantum.com) or [Download](https://readyforquantum.com/download/?utm_source=huggingface&utm_medium=referral&utm_campaign=huggingface_repo_readme) the Quantum Network Monitor agent to get more tokens; alternatively, use the TestLLM.

🔵 **HugLLM** – Runs **open-source Hugging Face models**. Fast, but runs small models (≈8B), hence lower quality. Get 2x more tokens (subject to Hugging Face API availability).

### Final Word

I fund the servers used to create these model files, run the Quantum Network Monitor service, and pay for inference from Novita and OpenAI—all out of my own pocket. All the code behind the model creation and the Quantum Network Monitor project is [open source](https://github.com/Mungert69). Feel free to use whatever you find helpful. If you appreciate the work, please consider [buying me a coffee](https://www.buymeacoffee.com/mahadeva) ☕. Your support helps cover service costs and allows me to raise token limits for everyone. I'm also open to job opportunities or sponsorship. Thank you! 😊

# rwkv7-0.4B-world

<!-- Provide a quick summary of what the model is/does. -->

This is an RWKV-7 model in the flash-linear-attention format.

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

- **Developed by:** Bo Peng, Yu Zhang, Songlin Yang, Ruichong Zhang
- **Funded by:** RWKV Project (under LF AI & Data Foundation)
- **Model type:** RWKV7
- **Language(s) (NLP):** English
- **License:** Apache-2.0
- **Parameter count:** 0.450B
- **Tokenizer:** RWKV World tokenizer
- **Vocabulary size:** 65,536

### Model Sources

<!-- Provide the basic links for the model. -->

- **Repository:** https://github.com/fla-org/flash-linear-attention ; https://github.com/BlinkDL/RWKV-LM
- **Paper:** Work in progress

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

Install `flash-linear-attention` and the latest version of `transformers` before using this model:

```bash
pip install git+https://github.com/fla-org/flash-linear-attention
pip install 'transformers>=4.48.0'
```

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

You can use this model just like any other Hugging Face model:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained('fla-hub/rwkv7-0.4B-world', trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained('fla-hub/rwkv7-0.4B-world', trust_remote_code=True)
model = model.cuda()
prompt = "What is a large language model?"
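# Build a multi-turn conversation as role/content messages; apply_chat_template below
# renders it with the tokenizer's chat template into the prompt string the model expects.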
messages = [
    {"role": "user", "content": "Who are you?"},
    {"role": "assistant", "content": "I am a GPT-3 based model."},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=1024,
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=False)[0]
print(response)
```

## Training Details

### Training Data

This model is trained on the World v3 dataset with a total of 3.119 trillion tokens.

#### Training Hyperparameters

- **Training regime:** bfloat16, lr 4e-4 to 1e-5 "delayed" cosine decay, wd 0.1 (with batch sizes increased during the middle of training)

## FAQ

Q: safetensors metadata is none.

A: Upgrade transformers to >=4.48.0: `pip install 'transformers>=4.48.0'`
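The GGUF files listed earlier in this card are intended for llama.cpp rather than `transformers`. As a rough sketch (not an official example), this is how one of them might be loaded with the `llama-cpp-python` bindings, assuming the installed package wraps a llama.cpp build recent enough to include RWKV-7 support and that the chosen file has been downloaded locally; the file name and parameters below are illustrative:

```python
from llama_cpp import Llama

# Assumed local path to one of the quantized files from "Included Files & Details".
llm = Llama(
    model_path="./rwkv7-0.4B-world-q4_k.gguf",
    n_ctx=4096,    # context window; adjust to your memory budget
    n_threads=4,   # CPU threads used for inference
)

# Plain completion-style call; the model continues the prompt text.
out = llm("What is a large language model?", max_tokens=128, temperature=0.7)
print(out["choices"][0]["text"])
```

The same file can equally be run with the `llama-cli` binary from a current llama.cpp checkout, as noted at the top of this card.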