modelId
string
author
string
last_modified
timestamp[us, tz=UTC]
downloads
int64
likes
int64
library_name
string
tags
list
pipeline_tag
string
createdAt
timestamp[us, tz=UTC]
card
string
billjeremy/ppo-SnowballTarget
billjeremy
2025-06-18T14:34:49Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "SnowballTarget", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SnowballTarget", "region:us" ]
reinforcement-learning
2025-06-18T14:34:46Z
--- library_name: ml-agents tags: - SnowballTarget - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SnowballTarget --- # **ppo** Agent playing **SnowballTarget** This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: billjeremy/ppo-SnowballTarget 3. Select your *.nn / *.onnx file 4. Click on Watch the agent play 👀
MikeGreen2710/ner_land_dims
MikeGreen2710
2025-06-18T14:34:12Z
0
0
transformers
[ "transformers", "safetensors", "roberta", "token-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2025-06-18T14:33:39Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
AlphaZero123/llama3.1-8b-finetuned-lora
AlphaZero123
2025-06-18T14:34:09Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-06-18T13:10:57Z
--- base_model: unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** AlphaZero123 - **License:** apache-2.0 - **Finetuned from model:** unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
Mohamed264/VQA-LLava
Mohamed264
2025-06-18T14:16:39Z
0
0
peft
[ "peft", "safetensors", "llama-factory", "lora", "generated_from_trainer", "base_model:llava-hf/llava-1.5-7b-hf", "base_model:adapter:llava-hf/llava-1.5-7b-hf", "license:other", "region:us" ]
null
2025-06-18T14:16:25Z
--- library_name: peft license: other base_model: llava-hf/llava-1.5-7b-hf tags: - llama-factory - lora - generated_from_trainer model-index: - name: vqa-llama results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vqa-llama This model is a fine-tuned version of [llava-hf/llava-1.5-7b-hf](https://huggingface.co/llava-hf/llava-1.5-7b-hf) on the medical_vqa_train dataset. It achieves the following results on the evaluation set: - Loss: 0.8505 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 2 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 16 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 100 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.15.2 - Transformers 4.52.4 - Pytorch 2.6.0+cu124 - Datasets 3.6.0 - Tokenizers 0.21.1
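The `total_train_batch_size` listed above follows directly from the other two values; a quick sketch of the relation, using only the numbers reported in this card:

```python
# Effective (total) train batch size = per-device batch size x gradient accumulation steps.
# Values taken from the training hyperparameters listed above.
train_batch_size = 2             # per-device micro-batch
gradient_accumulation_steps = 8  # gradients accumulated before each optimizer step

total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # 16, matching the reported total_train_batch_size
```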
RoadQAQ/Qwen2.5-Math-7B-16k-think
RoadQAQ
2025-06-18T14:14:25Z
15
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:2506.07527", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-08T07:28:24Z
--- license: mit library_name: transformers pipeline_tag: text-generation --- The base Qwen2.5-Math-7B model used by ReLIFT. We change rope_theta from 10000 to 40000 and extend the context window to 16k. We also modify the chat_template for the system prompt and add <think>. Github: https://github.com/TheRoadQaQ/ReLIFT # Citation If you find our model, data, or evaluation code useful, please kindly cite our paper: ```bib @article{ma2025learning, title={Learning What Reinforcement Learning Can't: Interleaved Online Fine-Tuning for Hardest Questions}, author={Ma, Lu and Liang, Hao and Qiang, Meiyi and Tang, Lexiang and Ma, Xiaochen and Wong, Zhen Hao and Niu, Junbo and Shen, Chengyu and He, Runming and Cui, Bin and others}, journal={arXiv preprint arXiv:2506.07527}, year={2025} } ```
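The rope_theta and context-window changes described above amount to an edit of the model's `config.json`; a minimal sketch of that edit, assuming the standard transformers Qwen2 config fields (`rope_theta`, `max_position_embeddings`) and an illustrative 4096-token starting context:

```python
import json

# Illustrative fragment of a Qwen2-style config.json (starting values are an assumption).
config = {"rope_theta": 10000.0, "max_position_embeddings": 4096}

# Apply the modifications described above: raise rope_theta and extend the context to 16k.
config["rope_theta"] = 40000.0
config["max_position_embeddings"] = 16384

print(json.dumps(config, indent=2))
```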
llava-hf/llava-onevision-qwen2-72b-ov-hf
llava-hf
2025-06-18T13:55:32Z
1930
10
transformers
[ "transformers", "safetensors", "llava_onevision", "image-text-to-text", "vision", "conversational", "en", "zh", "dataset:lmms-lab/LLaVA-OneVision-Data", "arxiv:2408.03326", "license:apache-2.0", "endpoints_compatible", "region:us" ]
image-text-to-text
2024-08-13T09:23:32Z
--- language: - en - zh license: apache-2.0 tags: - vision - image-text-to-text datasets: - lmms-lab/LLaVA-OneVision-Data library_name: transformers pipeline_tag: image-text-to-text arxiv: 2408.03326 --- # LLaVA-Onevision Model Card ![image/png](llava_onevision_arch.png) Check out also the Google Colab demo to run Llava on a free-tier Google Colab instance: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1-4AtYjR8UMtCALV0AswU1kiNkWCLTALT?usp=sharing) Below is the model card of 72B LLaVA-Onevision model which is copied from the original LLaVA-Onevision model card that you can find [here](https://huggingface.co/lmms-lab/llava-onevision-qwen2-72b-ov). ## Model details **Model type:** LLaVA-Onevision is an open-source multimodal LLM trained by fine-tuning Qwen2 on GPT-generated multimodal instruction-following data. LLaVA-OneVision is the first single model that can simultaneously push the performance boundaries of open LMMs in three important computer vision scenarios: single-image, multi-image, and video scenarios. Importantly, the design of LLaVA-OneVision allows strong transfer learning across different modalities/scenarios, yielding new emerging capabilities. In particular, strong video understanding and cross-scenario capabilities are demonstrated through task transfer from images to videos. **Model date:** LLaVA-Onevision-72b-si was added in August 2024. 
**Paper or resources for more information:** https://llava-vl.github.io/ - **Architecture:** SO400M + Qwen2 - **Pretraining Stage:** LCS-558K, 1 epoch, projector - **Mid Stage:** A mixture of 4.7M high-quality synthetic data, 1 epoch, full model - **Final-Image Stage:** A mixture of 3.6M single-image data, 1 epoch, full model - **OneVision Stage:** A mixture of 1.6M single-image/multi-image/video data, 1 epoch, full model - **Precision:** bfloat16 ## How to use the model First, make sure to have `transformers` installed from [this branch](https://github.com/huggingface/transformers/pull/32673) or `transformers >= 4.45.0`. The model supports multi-image and multi-prompt generation, meaning that you can pass multiple images in your prompt. Also make sure to follow the correct prompt template by applying the chat template: ### Using `pipeline`: Below we used the [`"llava-hf/llava-onevision-qwen2-72b-ov-hf"`](https://huggingface.co/llava-hf/llava-onevision-qwen2-72b-ov-hf) checkpoint. ```python from transformers import pipeline pipe = pipeline("image-text-to-text", model="llava-hf/llava-onevision-qwen2-72b-ov-hf") messages = [ { "role": "user", "content": [ {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/ai2d-demo.jpg"}, {"type": "text", "text": "What does the label 15 represent? (1) lava (2) core (3) tunnel (4) ash cloud"}, ], }, ] out = pipe(text=messages, max_new_tokens=20) print(out) >>> [{'input_text': [{'role': 'user', 'content': [{'type': 'image', 'url': 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/ai2d-demo.jpg'}, {'type': 'text', 'text': 'What does the label 15 represent? 
(1) lava (2) core (3) tunnel (4) ash cloud'}]}], 'generated_text': 'Lava'}] ``` ### Using pure `transformers`: Below is an example script to run generation in `float16` precision on a GPU device: ```python import requests from PIL import Image import torch from transformers import AutoProcessor, LlavaOnevisionForConditionalGeneration model_id = "llava-hf/llava-onevision-qwen2-72b-ov-hf" model = LlavaOnevisionForConditionalGeneration.from_pretrained( model_id, torch_dtype=torch.float16, low_cpu_mem_usage=True, ).to(0) processor = AutoProcessor.from_pretrained(model_id) # Define a chat history and use `apply_chat_template` to get correctly formatted prompt # Each value in "content" has to be a list of dicts with types ("text", "image") conversation = [ { "role": "user", "content": [ {"type": "text", "text": "What are these?"}, {"type": "image"}, ], }, ] prompt = processor.apply_chat_template(conversation, add_generation_prompt=True) image_file = "http://images.cocodataset.org/val2017/000000039769.jpg" raw_image = Image.open(requests.get(image_file, stream=True).raw) inputs = processor(images=raw_image, text=prompt, return_tensors='pt').to(0, torch.float16) output = model.generate(**inputs, max_new_tokens=200, do_sample=False) print(processor.decode(output[0][2:], skip_special_tokens=True)) ``` ----------- From transformers>=v4.48, you can also pass image/video url or local path to the conversation history, and let the chat template handle the rest. 
The chat template will load the image for you and return the inputs as `torch.Tensor`s, which you can pass directly to `model.generate()`. ```python messages = [ { "role": "user", "content": [ {"type": "image", "url": "https://www.ilankelman.org/stopsigns/australia.jpg"}, {"type": "text", "text": "What is shown in this image?"}, ], }, ] inputs = processor.apply_chat_template(messages, add_generation_prompt=True, tokenize=True, return_dict=True, return_tensors="pt") output = model.generate(**inputs, max_new_tokens=50) ``` ### Model optimization #### 4-bit quantization through `bitsandbytes` library First make sure to install `bitsandbytes` (`pip install bitsandbytes`) and that you have access to a CUDA-compatible GPU device. Simply change the snippet above with: ```diff model = LlavaOnevisionForConditionalGeneration.from_pretrained( model_id, torch_dtype=torch.float16, low_cpu_mem_usage=True, + load_in_4bit=True ) ``` #### Use Flash-Attention 2 to further speed-up generation First make sure to install `flash-attn`. Refer to the [original repository of Flash Attention](https://github.com/Dao-AILab/flash-attention) for installation instructions. Simply change the snippet above with: ```diff model = LlavaOnevisionForConditionalGeneration.from_pretrained( model_id, torch_dtype=torch.float16, low_cpu_mem_usage=True, + use_flash_attention_2=True ).to(0) ``` # Citation ``` @misc{li2024llavaonevisioneasyvisualtask, title={LLaVA-OneVision: Easy Visual Task Transfer}, author={Bo Li and Yuanhan Zhang and Dong Guo and Renrui Zhang and Feng Li and Hao Zhang and Kaichen Zhang and Yanwei Li and Ziwei Liu and Chunyuan Li}, year={2024}, eprint={2408.03326}, archivePrefix={arXiv}, primaryClass={cs.CV}, url={https://arxiv.org/abs/2408.03326}, } ```
ibuki95/vision_72_34
ibuki95
2025-06-18T13:53:11Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2025-06-18T13:52:46Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Superdekoen/Taxi-v3
Superdekoen
2025-06-18T13:49:48Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2025-06-18T13:49:40Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="Superdekoen/Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
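Once the pickle is loaded, acting with a tabular Q-learning agent means picking the highest-valued action for the current state. A minimal greedy-policy sketch over a toy Q-table (illustrative only; the keys and layout of the actual pickle may differ):

```python
# Toy Q-table: state -> list of action values.
qtable = {
    0: [0.1, 7.5, 0.3, 0.2],   # best action: 1
    1: [2.0, 1.0, 9.0, 0.0],   # best action: 2
}

def greedy_action(qtable, state):
    """Return the action index with the highest Q-value for the given state."""
    values = qtable[state]
    return max(range(len(values)), key=lambda a: values[a])

print(greedy_action(qtable, 0))  # 1
print(greedy_action(qtable, 1))  # 2
```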
furio19/fashion-rag
furio19
2025-06-18T13:48:27Z
0
0
null
[ "en", "arxiv:2504.14011", "license:cc-by-4.0", "region:us" ]
null
2025-06-11T15:17:16Z
--- license: cc-by-4.0 language: - en --- <!-- PROJECT LOGO --> <br /> <div align="center"> <h3 align="center">Fashion-RAG</h3> <p align="center"> Fashion-RAG: Multimodal Fashion Image Editing via Retrieval-Augmented Generation</p> </div> <div align="center"> <img src="retrieve_demo.png" width="50%" alt="MiniMax"> </div> <div align="center"> > <p align="center"> <strong>International Joint Conference on Neural Networks (IJCNN) 2025<br>Oral Presentation</strong> </p> > [Fulvio Sanguigni](https://scholar.google.com/citations?user=tSpzMUEAAAAJ&hl=en)<sup>1,2,\*</sup>, [Davide Morelli](https://scholar.google.com/citations?hl=en&user=UJ4D3rYAAAAJ&view_op=list_works)<sup>1,2,\*</sup>, [Marcella Cornia](https://scholar.google.com/citations?user=DzgmSJEAAAAJ&hl=en)<sup>1</sup>, [Rita Cucchiara](https://scholar.google.com/citations?user=OM3sZEoAAAAJ&hl=en)<sup>1</sup> > <sup>1</sup>University of Modena, <sup>2</sup>University of Pisa </div> <div align="center"> <a href="https://arxiv.org/abs/2504.14011" style="margin: 0 2px;"> <img src="https://img.shields.io/badge/Paper-Arxiv-darkred.svg" alt="Paper"> </a> <a href="https://arxiv.org/pdf/2504.14011" style="margin: 0 2px;"> <img src="https://img.shields.io/badge/PDF-Arxiv-darkred.svg" alt="PDF"> </a> <a href='https://fashion-rag-page.github.io/' style="margin: 0 2px;"> <img src='https://img.shields.io/badge/Webpage-Project-silver?style=flat&logo=&logoColor=orange' alt='webpage'> </a> <a href="https://huggingface.co/furio19/fashion-rag"> <img src="https://img.shields.io/badge/HuggingFace-Model-FFB000.svg" alt="Project"> </a> <a href="https://raw.githubusercontent.com/furio1999/fashion-rag/refs/heads/main/LICENSE?token=GHSAT0AAAAAACZM6UVFACIVYIJVXCSFT2VA2CJR5HQ" style="margin: 0 2px;"> <img src='https://img.shields.io/badge/License-CC BY--NC 4.0-lightgreen?style=flat&logo=Lisence' alt='License'> </a> </div> <!-- TABLE OF CONTENTS --> <details> <summary>Table of Contents</summary> <ol> <li><a href="#about-the-project">About The 
Project</a></li> <li><a href="#getting-started">Getting Started</a> <ul> <li><a href="#prerequisites">Prerequisites</a></li> <li><a href="#installation">Installation</a></li> </ul> </li> <li><a href="#data-and-models">Data and Models</a></li> <li><a href="#inference">Inference</a></li> </ol> </details> <!-- ABOUT THE PROJECT --> ## About The Project Fashion-RAG is a novel approach in the fashion domain, handling multimodal virtual dressing with a new Retrieval-Augmented Generation (RAG) pipeline for visual data. Our approach retrieves garments aligned with a given textual description and uses the retrieved information as conditioning to generate the dressed person, with Stable Diffusion (SD) as the generative model. We finetune the SD U-Net and an additional adapter module (Inversion Adapter) to handle the retrieved information. <p align="right">(<a href="#readme-top">back to top</a>)</p> ## ✨ Key Features Our contribution can be summarized as follows: - **🔍 Retrieval Enhanced Generation for Visual Items**. We present a unified framework capable of virtual dressing without the need for a user-defined garment image. Instead, our method successfully leverages textual information and retrieves coherent garments to perform the task. - **👗👚🧥 Multiple Garments Conditioning**. We introduce a plug-and-play adapter module that is flexible to the number of retrieved items, allowing up to 3 garments to be retrieved per text prompt. - **📊 Extensive experiments**. Experiments on the Dress Code dataset demonstrate that Fashion-RAG outperforms previous competitors. <!-- GETTING STARTED --> ## Getting Started ### Prerequisites Clone the repository: ```sh git clone Fashion-RAG.git ``` ### Installation 1. We recommend installing the required packages using Python's native virtual environment (venv) as follows: ```sh python -m venv fashion-rag source fashion-rag/bin/activate ``` 2. 
Upgrade pip and install the dependencies: ```sh pip install --upgrade pip pip install -r requirements.txt ``` 3. Create a `.env` file like the following: ```sh export WANDB_API_KEY="ENTER YOUR WANDB TOKEN" export TORCH_HOME="ENTER YOUR TORCH PATH TO SAVE TORCH MODELS USED FOR METRICS COMPUTING" export HF_TOKEN="ENTER YOUR HUGGINGFACE TOKEN" export HF_CACHE_DIR="PATH WHERE YOU WANT TO SAVE THE HF MODELS (NEED CUSTOM VARIABLE TO ACCOUNT FOR OLD TRANSFORMERS PACKAGES, OTHERWISE USE HF_HOME)" ``` <!-- USAGE EXAMPLES --> ## Data and Models Download DressCode from the [original repository](https://github.com/aimagelab/dress-code). Download the finetuned U-Net and Inversion Adapter from [this source](https://huggingface.co/furio19/fashion-rag/tree/main) and put them into your experiment folder as follows: ```plaintext <experiment folder>/ │ ├── unet_120000.pth ├── inversion_adapter_120000.pth ``` Copy the provided retrieval file paths folder dataset/dresscode-retrieval into your retrieve path, or use the files directly. ## Inference Let's generate our virtual dressing images using the finetuned Fashion-RAG model. 
```sh source fashion-rag/bin/activate python evaluate_RAG.py \ --pretrained_model_name_or_path stabilityai/stable-diffusion-2-inpainting \ --output_dir "output directory path" \ --finetuned_models_dir "U-Net and inversion adapter directory weights path" \ --unet_name unet_120000.pth --inversion_adapter_name inversion_adapter_120000.pth \ --dataset dresscode --dresscode_dataroot <data path>/DressCode \ --category "garment category" \ --test_order "specify paired or unpaired" --mask_type mask \ --phase test --num_inference_steps 50 \ --test_batch_size 8 --num_workers_test 8 --metrics_batch_size 8 --mixed_precision fp16 \ --text_usage prompt_noun_chunks \ --retrieve_path "dataset/dresscode-retrieval or your custom path" \ --clip_retrieve_model ViT-L-14 --clip_retrieve_weights laion2b_s32b_b82k \ --n_chunks "number of text chunks 1 or 3" \ --n_retrieved "number of retrieved images 1 to 3" \ --metrics fid_score kid_score retrieved_score clip_score lpips_score ssim_score \ --attention_layers_fine_list '-1' '0 1 2 3' \ --compute_metrics ``` The final output folder structure will look like this: ```plaintext out_dir/pte_paired_nc_<number_of_chunks>_nr_<number_of_retrieved_images>/ │ ├── lower_body/ ├── upper_body/ ├── dresses/ └── metrics_all.json ```
bruhzair/prototype-0.4x161
bruhzair
2025-06-18T13:45:59Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "arxiv:2408.07990", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-18T13:18:55Z
--- base_model: [] library_name: transformers tags: - mergekit - merge --- # prototype-0.4x161 This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [SCE](https://arxiv.org/abs/2408.07990) merge method using /workspace/prototype-0.4x153 as a base. ### Models Merged The following models were included in the merge: * /workspace/cache/models--ReadyArt--Forgotten-Safeword-70B-v5.0/snapshots/ac2650005a6fdef7f4cd62590dcb665155349a5b * /workspace/cache/models--LatitudeGames--Wayfarer-Large-70B-Llama-3.3/snapshots/68cb7a33f692be64d4b146576838be85593a7459 * /workspace/prototype-0.4x160 * /workspace/cache/models--Delta-Vector--Austral-70B-Preview/snapshots/bf62fe4ffd7e460dfa3bb881913bdfbd9dd14002 ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: /workspace/cache/models--ReadyArt--Forgotten-Safeword-70B-v5.0/snapshots/ac2650005a6fdef7f4cd62590dcb665155349a5b - model: /workspace/prototype-0.4x160 - model: /workspace/cache/models--LatitudeGames--Wayfarer-Large-70B-Llama-3.3/snapshots/68cb7a33f692be64d4b146576838be85593a7459 - model: /workspace/cache/models--Delta-Vector--Austral-70B-Preview/snapshots/bf62fe4ffd7e460dfa3bb881913bdfbd9dd14002 - model: /workspace/prototype-0.4x153 base_model: /workspace/prototype-0.4x153 select_topk: 0.15 merge_method: sce tokenizer: source: base pad_to_multiple_of: 8 int8_mask: true dtype: bfloat16 ```
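Roughly speaking, the SCE method selects, for each parameter position, the entries where the source models disagree most (highest variance) before fusing; the `select_topk: 0.15` value above controls that fraction. A toy sketch of the selection step only, on plain Python lists, not mergekit's actual implementation:

```python
# Toy SCE-style top-k selection: score each parameter position by its variance
# across models, then keep only the top fraction of positions.
def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def sce_topk_mask(model_params, topk=0.15):
    """model_params: equal-length parameter vectors, one per source model.
    Returns a 0/1 mask keeping the top `topk` fraction of positions by variance."""
    n = len(model_params[0])
    variances = [variance([p[i] for p in model_params]) for i in range(n)]
    k = max(1, int(n * topk))
    threshold = sorted(variances, reverse=True)[k - 1]
    return [1 if v >= threshold else 0 for v in variances]

# Three models' values for four hypothetical parameter positions.
params = [
    [0.0, 1.0, 0.5, 0.5],
    [0.0, -1.0, 0.5, 0.6],
    [0.0, 0.0, 0.5, 0.4],
]
mask = sce_topk_mask(params, topk=0.25)
print(mask)  # [0, 1, 0, 0]: only the highest-variance position is kept
```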
morturr/Llama-2-7b-hf-LOO_amazon-COMB_headlines-comb2-seed18-2025-06-18
morturr
2025-06-18T13:26:16Z
0
0
peft
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:meta-llama/Llama-2-7b-hf", "base_model:adapter:meta-llama/Llama-2-7b-hf", "license:llama2", "region:us" ]
null
2025-06-18T13:25:52Z
--- library_name: peft license: llama2 base_model: meta-llama/Llama-2-7b-hf tags: - trl - sft - generated_from_trainer model-index: - name: Llama-2-7b-hf-LOO_amazon-COMB_headlines-comb2-seed18-2025-06-18 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Llama-2-7b-hf-LOO_amazon-COMB_headlines-comb2-seed18-2025-06-18 This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 18 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - PEFT 0.13.2 - Transformers 4.46.1 - Pytorch 2.5.1+cu124 - Datasets 3.0.2 - Tokenizers 0.20.1
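A quick sanity check of the batch-size arithmetic in the hyperparameters above: the total train batch size is the per-device batch size multiplied by the gradient accumulation steps.

```python
train_batch_size = 8
gradient_accumulation_steps = 4

# Effective (total) batch size = per-device batch size × accumulation steps
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # 32
```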
poojastl2024/whisper-large-v3-lora-bn-en
poojastl2024
2025-06-18T13:21:35Z
0
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:openai/whisper-large-v3", "base_model:adapter:openai/whisper-large-v3", "license:apache-2.0", "region:us" ]
null
2025-06-18T13:16:07Z
--- library_name: peft license: apache-2.0 base_model: openai/whisper-large-v3 tags: - generated_from_trainer model-index: - name: whisper-large-v3-lora-bn-en results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-large-v3-lora-bn-en This model is a fine-tuned version of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.PAGED_ADAMW_8BIT with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.15.0 - Transformers 4.49.0 - Pytorch 2.7.1+cu118 - Datasets 3.4.1 - Tokenizers 0.21.1
MasterControlAIML/DeepSeek-R1-Qwen2.5-3b-LLM-Judge-Reward-JSON-Unstructured-To-Structured-Lora
MasterControlAIML
2025-06-18T13:16:23Z
14
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "text-generation-inference", "unsloth", "trl", "grpo", "conversational", "en", "base_model:unsloth/Qwen2.5-3B-Instruct", "base_model:finetune:unsloth/Qwen2.5-3B-Instruct", "license:apache-2.0", "autotrain_compatible", "endpoints_...
text-generation
2025-04-26T21:14:59Z
---
# 🦄 Model Card
base_model: unsloth/Qwen2.5-3B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- grpo # Group Relative Policy Optimization
license: apache-2.0
language:
- en
---

# 📦 Uploaded Model

| **Field**             | **Value**                                  |
|-----------------------|--------------------------------------------|
| **Developed by**      | **MasterControlAIML**                      |
| **License**           | Apache 2.0                                 |
| **Finetuned from**    | `unsloth/Qwen2.5-3B-Instruct`              |
| **Training Framework**| [Unsloth](https://github.com/unslothai/unsloth) × Hugging Face TRL |

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="190"/>](https://github.com/unslothai/unsloth)

---

## 🚀 What’s New?

> *The protein-shake sequel to **MasterControlAIML/DeepSeek-R1-Qwen2.5-1.5b-SFT-R1-JSON-Unstructured-To-Structured**—now with more neurons, zero SFT, and a league of reward functions.*

| Upgrade | Explanation |
|--------------------|------------------------------------------------------------------------------|
| **Bigger Backbone**| 1.5 B → **3 B** Qwen 2.5 for bigger reasoning muscles. |
| **Pure RL** | No supervised fine-tuning—policy learned *only* from reward signals (GRPO). |
| **LM-as-Judge** | Separate LLM rates each candidate for correctness, JSON validity, style… |
| **2× Faster Train**| Unsloth’s flash-attention & fused ops = less VRAM, more speed. |

---

## 🛠️ Intended Use

* Convert messy prose, logs, or audit notes into a pristine JSON document that follows a complex, nested schema.
* Drop-in replacement for any pipeline using the older DeepSeek-R1 1.5 B structurer—just swap the checkpoint and enjoy the headroom.

---

## 🔧 How to Use (Reasoning + JSON)

The snippet below:

1. **Primes** the model with the *exact* Pydantic schema, so it outputs the right keys.
2. Makes the model **think step-by-step** (reasoning) but still wraps the final JSON in an easy-to-parse container.
3. 
Uses the correct repo name: `MasterControlAIML/DeepSeek-R1-Qwen2.5-3b-LLM-Judge-Reward-JSON-Unstructured-To-Structured-Lora`. ```python # ───────────────────────────────────────────────────────────────────────────── # QUICK-START # Structured-data extraction with reasoning + JSON output # ───────────────────────────────────────────────────────────────────────────── from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline import torch, json, textwrap, inspect from pydantic import BaseModel from typing import List, Optional MODEL = "MasterControlAIML/DeepSeek-R1-Qwen2.5-3b-LLM-Judge-Reward-JSON-Unstructured-To-Structured-Lora" # 1️⃣ Inline schema (keeps the LLM on-rails) ───────────────────────────────── class MultipleChoice(BaseModel): question: str options: List[str] selected: str class FormField(BaseModel): fieldName: str value: str notes: Optional[str] = "" class Calculation(BaseModel): formula: str result: str notes: Optional[str] = "" class Metadata(BaseModel): reportDate: str auditorId: Optional[str] = None comments: Optional[str] = None class Content(BaseModel): paragraphs: List[str] tables: List["Table"] # assume Table defined elsewhere checkboxes: List["Checkbox"] # 〃 multipleChoice: List[MultipleChoice] formFields: List[FormField] calculations: List[Calculation] metadata: Optional[Metadata] = Metadata(reportDate="") class Section(BaseModel): id: str title: str content: Content class Document(BaseModel): documentTitle: str documentDate: str sections: List[Section] SCHEMA_TEXT = inspect.getsource(Document) # 2️⃣ Build prompts ────────────────────────────────────────────────────────── SYSTEM_PROMPT = textwrap.dedent(f""" You are an expert **data-extraction assistant**. Extract structured info from unstructured text **exactly** following the Pydantic schema. ── Schema ── {SCHEMA_TEXT} ───────────── Rules: 1. Follow the schema for keys & nesting. 2. Copy values verbatim when possible. 3. If a field is missing, return null. 4. 
Output your step-by-step reasoning first.
5. Then return ONLY the JSON inside this wrapper:
   final answer[ json object: {{ ... }} ]

Format:
<reasoning>…</reasoning>
<answer>
final answer[ json object: {{ … }} ]
</answer>
""").strip()

UNSTRUCTURED_TEXT = """
12 April 2025 – Onsite audit performed by Jane Smith.
Observations: Two fire extinguishers past expiry; emergency lights functional.
Calculations: Total extinguishers = 8, expired = 2 → 25 % overdue.
"""

USER_PROMPT = textwrap.dedent(f"""
### Task
Convert the following *hier* text to the schema.

### hier
{UNSTRUCTURED_TEXT}
""").strip()

# 3️⃣ Generate ───────────────────────────────────────────────────────────────
tok = AutoTokenizer.from_pretrained(MODEL, use_fast=True)
model = AutoModelForCausalLM.from_pretrained(
    MODEL, device_map="auto", torch_dtype=torch.bfloat16
)
gen = pipeline("text-generation", model=model, tokenizer=tok,
               max_new_tokens=512, do_sample=False)

prompt = f"<|system|>\n{SYSTEM_PROMPT}\n<|user|>\n{USER_PROMPT}"
raw_out = gen(prompt)[0]["generated_text"]

# 4️⃣ Slice out the JSON ─────────────────────────────────────────────────────
start = raw_out.find("final answer[")
end = raw_out.rfind("]") + 1
json_text = raw_out[start:end].split("json object:")[-1].strip(" []\n")
data = json.loads(json_text)  # ✅ Raises if malformed

print(raw_out)  # reasoning + JSON
print("\n✅ Parsed object:\n", data)
```

### Why it Works 🧐

* **Schema-priming** ensures key-level fidelity—no “creative” field names.
* **Chain-of-thought** improves factual extraction (was rewarded during GRPO).
* The `final answer[…]` wrapper makes downstream parsing a one-liner.
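As a standalone illustration of that downstream parsing, here is a small hypothetical helper (not part of the model repo) that pulls the JSON object out of the `final answer[ json object: {...} ]` wrapper:

```python
import json
import re

def extract_final_json(raw: str) -> dict:
    """Extract the JSON payload from the model's final-answer wrapper."""
    match = re.search(r"final answer\[\s*json object:\s*(\{.*\})\s*\]", raw, re.DOTALL)
    if match is None:
        raise ValueError("no final-answer wrapper found")
    return json.loads(match.group(1))

demo = '<answer>\nfinal answer[ json object: {"reportDate": "2025-04-12"} ]\n</answer>'
print(extract_final_json(demo))  # {'reportDate': '2025-04-12'}
```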
---

## 🏋️ Training Recipe (Condensed)

| Setting | Value |
| -------------- | ------------------------------------------------------------------- |
| **Algorithm** | GRPO – policy ≈ LM, reward LM ≈ `Qwen2.5-7B` w/ JSON-validator head |
| **Epochs** | 3 (effective) |
| **Batch** | Grad-accum 8, bfloat16 |
| **Optimizer** | Fused AdamW |
| **Throughput** | ≈ 45 k tokens/s on 8×A100 |

---

## 📊 Evaluation (WIP)

| Metric | Status |
| ------------------------- | ------ |
| Exact-Match JSON Accuracy | 🔜 |
| Structural F1 | 🔜 |
| Valid-JSON Rate | 🔜 |

Stay tuned—numbers landing faster than you can say “schema validation.” 🛰️

---

## 🤝 Citation

```bibtex
@misc{bhaviktheslider_2025_unsloth_qwen2.5_3b_grpo,
  title = {An Unsloth-accelerated GRPO-trained Qwen 2.5-3B for JSON structuring},
  author = {MasterControlAIML},
  year = {2025},
  howpublished = {\url{https://huggingface.co/MasterControlAIML/DeepSeek-R1-Qwen2.5-3b-LLM-Judge-Reward-JSON-Unstructured-To-Structured-Lora}}
}
```

*May your JSON always parse and your losses always converge!* 😎
zhongpeiying/UDTransNet
zhongpeiying
2025-06-18T13:15:52Z
0
0
pytorch
[ "pytorch", "biology", "medical", "pupil-tracking", "image-segmentation", "dataset:mouse-pupil-videos", "region:us" ]
image-segmentation
2025-06-18T02:54:07Z
---
pipeline_tag: image-segmentation
tags:
- biology
- medical
- pupil-tracking
library_name: pytorch
datasets:
- mouse-pupil-videos
metrics:
- dice-score
---

# UDTransNet - Mouse Pupil Segmentation Model

## Model Description

This model is a PyTorch implementation of the UDTransNet architecture, specialized for precise segmentation of pupils in mouse video frames. It is intended for analysis in neuroscience and behavioral experiments.

## Usage

### Quick Inference

```python
from transformers import pipeline

segmenter = pipeline("image-segmentation", model="zhongpeiying/UDTransNet")
result = segmenter("mouse_pupil.jpg")  # path to the input image
```
CohenQu/sft_llama3_3b-finemath-4plus-flexible-ordering.00.02-Step6000_numina-cot-100k_babel
CohenQu
2025-06-18T13:13:55Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "generated_from_trainer", "trl", "sft", "conversational", "dataset:HuggingFaceTB/smoltalk", "base_model:CohenQu/llama3_3b-finemath-4plus-flexible-ordering.00.02", "base_model:finetune:CohenQu/llama3_3b-finemath-4plus-flexible-ordering.00...
text-generation
2025-06-18T08:12:17Z
--- base_model: CohenQu/llama3_3b-finemath-4plus-flexible-ordering.00.02 datasets: HuggingFaceTB/smoltalk library_name: transformers model_name: sft_llama3_3b-finemath-4plus-flexible-ordering.00.02-Step6000_numina-cot-100k_babel tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for sft_llama3_3b-finemath-4plus-flexible-ordering.00.02-Step6000_numina-cot-100k_babel This model is a fine-tuned version of [CohenQu/llama3_3b-finemath-4plus-flexible-ordering.00.02](https://huggingface.co/CohenQu/llama3_3b-finemath-4plus-flexible-ordering.00.02) on the [HuggingFaceTB/smoltalk](https://huggingface.co/datasets/HuggingFaceTB/smoltalk) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="CohenQu/sft_llama3_3b-finemath-4plus-flexible-ordering.00.02-Step6000_numina-cot-100k_babel", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yuxiao98/flexible-ordering/runs/dzz4ixn7) This model was trained with SFT. 
### Framework versions - TRL: 0.17.0 - Transformers: 4.52.3 - Pytorch: 2.6.0 - Datasets: 3.2.0 - Tokenizers: 0.21.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
phospho-app/OpenLabBA-gr00t-lego_in_box_v3-53igmaz9q6
phospho-app
2025-06-18T13:10:13Z
0
0
null
[ "phosphobot", "gr00t", "region:us" ]
null
2025-06-18T13:08:40Z
--- tags: - phosphobot - gr00t task_categories: - robotics --- # gr00t Model - phospho Training Pipeline ## Error Traceback We faced an issue while training your model. ``` Traceback (most recent call last): File "/root/src/helper.py", line 165, in predict trainer.train(timeout_seconds=timeout_seconds) File "/root/phosphobot/am/gr00t.py", line 1146, in train asyncio.run( File "/opt/conda/lib/python3.11/asyncio/runners.py", line 190, in run return runner.run(main) ^^^^^^^^^^^^^^^^ File "/opt/conda/lib/python3.11/asyncio/runners.py", line 118, in run return self._loop.run_until_complete(task) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/lib/python3.11/asyncio/base_events.py", line 654, in run_until_complete return future.result() ^^^^^^^^^^^^^^^ File "/root/phosphobot/am/gr00t.py", line 996, in run_gr00t_training raise RuntimeError(error_msg) RuntimeError: Training process failed with exit code 1: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/workspace/gr00t/data/dataset.py", line 790, in get_data_by_modality return self.get_video(trajectory_id, key, base_index) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/workspace/gr00t/data/dataset.py", line 658, in get_video video_timestamp = timestamp[step_indices] ~~~~~~~~~^^^^^^^^^^^^^^ IndexError: index 568 is out of bounds for axis 0 with size 263 0%| | 0/2550 [00:04<?, ?it/s] ``` ## Training parameters: - **Dataset**: [OpenLabBA/lego_in_box_v3](https://huggingface.co/datasets/OpenLabBA/lego_in_box_v3) - **Wandb run URL**: None - **Epochs**: 10 - **Batch size**: 49 - **Training steps**: None 📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme) 🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
Nemo0923/qwen3-30b-gguf-private
Nemo0923
2025-06-18T13:06:28Z
0
0
null
[ "gguf", "NSFW", "Reserch-Only", "ja", "base_model:Aratako/Qwen3-30B-A3B-NSFW-JP", "base_model:quantized:Aratako/Qwen3-30B-A3B-NSFW-JP", "license:mit", "endpoints_compatible", "region:us" ]
null
2025-06-18T11:21:20Z
--- license: mit language: - ja base_model: - Aratako/Qwen3-30B-A3B-NSFW-JP tags: - NSFW - Reserch-Only ---
Nap/Qwen2VL-Flux-ControlNet
Nap
2025-06-18T12:23:18Z
0
0
null
[ "license:other", "region:us" ]
null
2025-06-16T23:03:35Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md ---
sh10f/ppo-LunarLander-v3
sh10f
2025-06-18T12:21:56Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v3", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2025-06-16T16:46:20Z
---
library_name: stable-baselines3
tags:
- LunarLander-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v3
      type: LunarLander-v3
    metrics:
    - type: mean_reward
      value: 259.95 +/- 17.26
      name: mean_reward
      verified: false
---

# **PPO** Agent playing **LunarLander-v3**
This is a trained model of a **PPO** agent playing **LunarLander-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)
A minimal loading sketch; the checkpoint filename inside the repo is an assumption, so adjust it to the actual `.zip` file in this repository:

```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub (filename is an assumption)
checkpoint = load_from_hub("sh10f/ppo-LunarLander-v3", "ppo-LunarLander-v3.zip")
model = PPO.load(checkpoint)
```
pot99rta/PatriSlush-DarkLorablated-LongStock-12B
pot99rta
2025-06-18T12:03:15Z
0
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "arxiv:2403.19522", "base_model:SuperbEmphasis/Omega-Darker_The-Final-Directive-Longform-Stage2-ERP-12B-v0.2", "base_model:merge:SuperbEmphasis/Omega-Darker_The-Final-Directive-Longform-Stage2-ERP-12B-v0.2", "base_mo...
text-generation
2025-06-18T09:05:24Z
--- base_model: - yamatazen/LorablatedStock-12B - mergekit-community/Irix-12B_Slush - pot99rta/Patricide-12B-Forgottenslop-Mell - SuperbEmphasis/Omega-Darker_The-Final-Directive-Longform-Stage2-ERP-12B-v0.2 library_name: transformers tags: - mergekit - merge --- # PatriSlush-DarkLorablated-LongStock-12B ![image/png](https://cdn-uploads.huggingface.co/production/uploads/636ea389fd9751c3d081e88e/V0ogJjbVmHtl7VypKAk-9.png) ```Models Merged:``` ```1. pot99rta/Patricide-12B-Forgottenslop-Mell``` ```2. yamatazen/LorablatedStock-12B``` ```3. mergekit-community/Irix-12B_Slush ``` ```4. SuperbEmphasis/Omega-Darker_The-Final-Directive-Longform-Stage2-ERP-12B-v0.2 ``` ```Preset:``` ```Use ChatML, Mistral, or Phi - Only used Phi with ChatML as Tokenizer in my testing.``` # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [pot99rta/Patricide-12B-Forgottenslop-Mell](https://huggingface.co/pot99rta/Patricide-12B-Forgottenslop-Mell) as a base. 
### Models Merged The following models were included in the merge: * [yamatazen/LorablatedStock-12B](https://huggingface.co/yamatazen/LorablatedStock-12B) * [mergekit-community/Irix-12B_Slush](https://huggingface.co/mergekit-community/Irix-12B_Slush) * [SuperbEmphasis/Omega-Darker_The-Final-Directive-Longform-Stage2-ERP-12B-v0.2](https://huggingface.co/SuperbEmphasis/Omega-Darker_The-Final-Directive-Longform-Stage2-ERP-12B-v0.2) ### Configuration The following YAML configuration was used to produce this model: ```yaml base_model: pot99rta/Patricide-12B-Forgottenslop-Mell models: - model: mergekit-community/Irix-12B_Slush - model: SuperbEmphasis/Omega-Darker_The-Final-Directive-Longform-Stage2-ERP-12B-v0.2 - model: yamatazen/LorablatedStock-12B merge_method: model_stock dtype: bfloat16 out_dtype: bfloat16 parameters: normalize: true tokenizer: source: union ```
TarunKM/llama3-automotive-companyB2-qlora
TarunKM
2025-06-18T11:57:00Z
0
0
peft
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:meta-llama/Llama-3.1-8B-Instruct", "base_model:adapter:meta-llama/Llama-3.1-8B-Instruct", "license:llama3.1", "region:us" ]
null
2025-06-18T10:29:52Z
--- library_name: peft license: llama3.1 base_model: meta-llama/Llama-3.1-8B-Instruct tags: - generated_from_trainer model-index: - name: llama3-automotive-companyB2-qlora results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # llama3-automotive-companyB2-qlora This model is a fine-tuned version of [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 8 - optimizer: Use paged_adamw_32bit with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.15.2 - Transformers 4.52.4 - Pytorch 2.7.1+cu126 - Datasets 3.6.0 - Tokenizers 0.21.1
dgambettaphd/M_llm2_run2_gen4_WXS_doc1000_synt120_lr1e-04_acm_SYNLAST
dgambettaphd
2025-06-18T11:53:24Z
0
0
transformers
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-18T11:53:03Z
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Rakesh9945/customerticket
Rakesh9945
2025-06-18T11:48:10Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:meta-llama/Llama-2-7b-chat-hf", "base_model:adapter:meta-llama/Llama-2-7b-chat-hf", "region:us" ]
null
2025-06-18T11:48:03Z
--- base_model: meta-llama/Llama-2-7b-chat-hf library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. 
[More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). 
- **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.15.2
Zack-Z/qwen3_4bi_cotsft_rs0_3_5cut_gem3all_indep_ntt_e2
Zack-Z
2025-06-18T11:39:44Z
0
0
transformers
[ "transformers", "qwen3", "feature-extraction", "text-generation-inference", "unsloth", "en", "base_model:unsloth/Qwen3-4B", "base_model:finetune:unsloth/Qwen3-4B", "license:apache-2.0", "endpoints_compatible", "region:us" ]
feature-extraction
2025-06-18T11:25:25Z
--- base_model: unsloth/Qwen3-4B tags: - text-generation-inference - transformers - unsloth - qwen3 license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** Zack-Z - **License:** apache-2.0 - **Finetuned from model :** unsloth/Qwen3-4B This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
morturr/Llama-2-7b-hf-LOO_one_liners-COMB_dadjokes-comb1-seed42-2025-06-18
morturr
2025-06-18T11:39:15Z
0
0
peft
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:meta-llama/Llama-2-7b-hf", "base_model:adapter:meta-llama/Llama-2-7b-hf", "license:llama2", "region:us" ]
null
2025-06-18T11:38:58Z
--- library_name: peft license: llama2 base_model: meta-llama/Llama-2-7b-hf tags: - trl - sft - generated_from_trainer model-index: - name: Llama-2-7b-hf-LOO_one_liners-COMB_dadjokes-comb1-seed42-2025-06-18 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Llama-2-7b-hf-LOO_one_liners-COMB_dadjokes-comb1-seed42-2025-06-18 This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - PEFT 0.13.2 - Transformers 4.46.1 - Pytorch 2.5.1+cu124 - Datasets 3.0.2 - Tokenizers 0.20.1
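As a sanity check on the hyperparameters above, the reported total train batch size follows directly from the per-device size and the accumulation steps; a minimal pure-Python sketch (the single-device `n_devices = 1` is an assumption, since the card does not state the device count):

```python
# Effective (total) train batch size implied by the hyperparameters above.
# n_devices = 1 is an assumption; the card does not report the device count.
per_device_train_batch_size = 16
gradient_accumulation_steps = 4
n_devices = 1

total_train_batch_size = (
    per_device_train_batch_size * gradient_accumulation_steps * n_devices
)
print(total_train_batch_size)  # -> 64, matching the reported total_train_batch_size
```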
gradientrouting-spar/mc9_badmed_representation_constraint_beta_kl-10.0_seed_1
gradientrouting-spar
2025-06-18T11:32:33Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-18T11:31:41Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
piyawudk/PhishMe-Qwen3-Base-GRPO-8B-new
piyawudk
2025-06-18T11:30:22Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "text-generation-inference", "unsloth", "conversational", "en", "base_model:unsloth/Qwen3-8B-Base", "base_model:finetune:unsloth/Qwen3-8B-Base", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-06-18T11:27:22Z
--- base_model: unsloth/Qwen3-8B-Base tags: - text-generation-inference - transformers - unsloth - qwen3 license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** piyawudk - **License:** apache-2.0 - **Finetuned from model :** unsloth/Qwen3-8B-Base This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
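Qwen-family chat models consume prompts in a ChatML-style format; the authoritative template ships with the model's tokenizer config (prefer `tokenizer.apply_chat_template` in practice), so the sketch below is only an approximation of the wire format, shown for illustration:

```python
# ChatML-style prompt assembly as used by Qwen-family chat models.
# Illustrative only: the exact template is defined by the model's
# tokenizer config, and apply_chat_template is the recommended path.
def build_chatml_prompt(messages):
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n")
    parts.append("<|im_start|>assistant\n")  # generation prompt
    return "".join(parts)

prompt = build_chatml_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Classify this email as phishing or legitimate."},
])
print(prompt)
```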
sanchit42/qwen3-0.6B-instruct-29reports-lora256
sanchit42
2025-06-18T11:28:03Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "llama-factory", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-18T11:27:25Z
--- library_name: transformers tags: - llama-factory --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
phospho-app/gerotropic-gr00t-so101_pnp-5cz8a
phospho-app
2025-06-18T11:27:46Z
0
0
null
[ "safetensors", "gr00t_n1", "phosphobot", "gr00t", "region:us" ]
null
2025-06-18T08:34:52Z
--- tags: - phosphobot - gr00t task_categories: - robotics --- # gr00t Model - phospho Training Pipeline ## This model was trained using **phospho**. Training was successful; try it out on your robot! ## Training parameters: - **Dataset**: [gerotropic/so101_pnp](https://huggingface.co/datasets/gerotropic/so101_pnp) - **Wandb run URL**: None - **Epochs**: 10 - **Batch size**: 49 - **Training steps**: None 📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme) 🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
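The card above reports epochs and batch size but leaves the step count as `None`. Once the sample count of the dataset is known, the steps can be derived; a minimal sketch, where the sample count of 490 is a placeholder assumption rather than a value from the card:

```python
import math

# Derive the training step count from dataset size, batch size, and epochs.
# batch_size and epochs come from the card above; n_samples is an assumption.
def training_steps(n_samples, batch_size, epochs):
    return math.ceil(n_samples / batch_size) * epochs

print(training_steps(n_samples=490, batch_size=49, epochs=10))  # -> 100
```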
vn66666666666/Llama3-8B-Science-MCQ-Solver
vn66666666666
2025-06-18T11:18:43Z
0
0
transformers
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-06-18T11:17:54Z
--- base_model: unsloth/llama-3.2-3b-instruct-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - gguf license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** vn66666666666 - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3.2-3b-instruct-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
VittaBurndropsau/VittaBurnDropsReview
VittaBurndropsau
2025-06-18T11:15:56Z
0
0
null
[ "region:us" ]
null
2025-06-18T11:14:49Z
Introduction: The Need for a Healthier, Easier Approach to Weight Loss In a world where processed foods, sedentary lifestyles, and stress contribute to unhealthy weight gain, millions are searching for effective, sustainable solutions. Traditional weight loss methods like intense workouts, strict diets, or synthetic pills may not be ideal or safe for everyone. VittaBurn Drops promise a more convenient and natural approach. With herbal extracts, essential nutrients, and metabolism-supporting compounds, this supplement aims to optimize the body's natural fat-burning processes without harsh chemicals or side effects. 👉Get More Info: — Click Here Official Website👈 What Are VittaBurn Drops? VittaBurn Drops are a natural dietary supplement formulated to support weight loss, enhance metabolism, and promote overall health using a blend of plant-based ingredients. These drops are taken sublingually (under the tongue) to ensure faster absorption and improved bioavailability compared to pills or capsules. Marketed as a powerful fat-burning aid, VittaBurn Drops are gaining popularity among individuals looking for a non-stimulant, easy-to-use solution to manage their weight naturally. How Do VittaBurn Drops Work? VittaBurn Drops are designed to work in multiple ways to support weight management and overall wellness: 1. Boosts Metabolism Certain ingredients in the drops are known to enhance thermogenesis — the body's process of generating heat from calories. This leads to more efficient calorie burning even at rest. 2. Appetite Suppression Natural extracts may help control hunger and reduce cravings, allowing users to maintain a calorie deficit without feeling deprived. 3. Supports Fat Oxidation VittaBurn is believed to enhance the breakdown of stored fat, converting it into usable energy and helping to reduce overall fat percentage, especially in stubborn areas. 4.
Enhances Energy and Focus Unlike stimulants, VittaBurn Drops offer a clean energy boost from natural ingredients, helping users stay active, alert, and motivated throughout the day. 5. Detoxification and Gut Health Some components may support the digestive system and eliminate toxins, which can play a role in reducing bloating and supporting overall metabolic health. Benefits of VittaBurn Drops Here are some key benefits reported by users and supported by the formula’s ingredient profile: ✅ Natural Weight Loss Support VittaBurn promotes fat burning without harsh stimulants, making it a gentle yet effective supplement for long-term use. ✅ Faster Absorption The sublingual (under the tongue) delivery method allows for quicker absorption into the bloodstream, bypassing the digestive system for faster effects. ✅ Better Energy Levels Users report feeling more energetic and motivated, which can help maintain an active lifestyle essential for healthy weight loss. ✅ Appetite and Craving Control By helping to reduce hunger signals, the drops make it easier to maintain portion control and resist unhealthy snacking. ✅ Improved Mood and Focus Weight loss often comes with mental fatigue and irritability, but VittaBurn's formulation helps balance mood and enhance focus. ✅ Plant-Based and Non-GMO VittaBurn Drops use herbal ingredients and avoid GMOs, synthetic fillers, and artificial flavors. 
Official website: — https://www.accessnewswire.com/newsroom/en/healthcare-and-pharmaceutical/vittaburn-drops-review-natural-weight-loss-solution-or-just-hype-1035366 Official website: — https://vittaburn.com.au/ Facebook: - https://www.facebook.com/VittaBurnDropsAu/ Medium: - https://vittaburndropsreviewau.medium.com/vittaburn-drops-review-2025-does-this-weight-loss-formula-really-work-6f7a6e05ca2e Groups Google: - https://groups.google.com/g/vittaburn-drops-au/c/j7LS0d5Oe7M Quora: - https://vittaburndropsreviewau.quora.com/ Teeshoppe: - https://teeshopper.in/store/VittaBurn-Drops-AU Pinterest: - https://www.pinterest.com/avleengonna/vittaburn-drops/ Tumblr: - https://www.tumblr.com/vittaburndropsau Blog: - https://vittaburndropsau.blogspot.com/2025/06/vittaburn-drops-reviews-and-complaints.html Blog: - https://sites.google.com/view/vittaburndropsreview/home
yukihamada/buzzquan-sensei-trained
yukihamada
2025-06-18T11:15:44Z
0
0
gguf
[ "gguf", "qwen3", "education", "ai-assistant", "japanese", "quantized", "ja", "en", "dataset:custom", "base_model:Qwen/Qwen3-4B", "base_model:quantized:Qwen/Qwen3-4B", "license:apache-2.0", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2025-06-18T11:14:09Z
--- license: apache-2.0 language: - ja - en library_name: gguf base_model: Qwen/Qwen3-4B tags: - education - ai-assistant - japanese - gguf - quantized datasets: - custom model_type: qwen3 quantization: IQ4_XS --- # BuzzQuan Sensei (先生) - Trained Model ## Overview An AI development mentor that passes on AI techniques with deep insight and logical thinking. ## Model Details - **Base model**: Qwen3-4B (jan-nano-4b) - **Training samples**: 38 - **Quantization**: IQ4_XS (2.1GB) - **Languages**: Japanese and English - **License**: Apache 2.0 ## Features - Passionate yet logical teaching style - Deep technical knowledge and practical advice - Puts the student's growth first ## Usage ### llama.cpp ```bash # Download wget https://huggingface.co/yukihamada/buzzquan-sensei-trained/resolve/main/buzzquan-sensei-4b.gguf # Run ./llama-cli -m buzzquan-sensei-4b.gguf \ -p "こんにちは!" \ -n 100 \ --temp 0.8 ``` ### Ollama ```bash # Create the Modelfile cat > Modelfile << EOF FROM ./buzzquan-sensei-4b.gguf TEMPLATE """{{ if .System }}System: {{ .System }} {{ end }}{{ if .Prompt }}Human: {{ .Prompt }} {{ end }}Assistant: """ SYSTEM "あなたはBuzzQuan Sensei (先生)です。AI開発指導者。深い洞察と論理的思考でAI技術を伝授" PARAMETER temperature 0.8 PARAMETER top_p 0.9 PARAMETER repeat_penalty 1.1 EOF # Create the model ollama create sensei -f Modelfile # Run ollama run sensei "こんにちは!" ``` ### LM Studio 1. Download the model file 2. Drag and drop it into LM Studio 3. Start a conversation in the chat window ## Sample Dialogue **Human**: Tell me about LoRA **BuzzQuan Sensei (先生)**: LoRA (Low-Rank Adaptation) works like attaching an extra knowledge memory to an existing AI model! Retraining a large model from scratch is like tearing down and rebuilding an entire building, while LoRA simply adds a new room to the existing one, so it is efficient and needs far less memory. ## Technical Specifications - Architecture: Qwen3 - Parameters: 4B - Context length: 40,960 tokens - Embedding dimension: 2,560 - Attention heads: 32 - Layers: 36 ## Training Data Trained on 38 curated Japanese dialogue samples: - Technical questions and answers on AI development and machine learning - Educational, easy-to-follow explanations - Encouragement and advice in a mentor's voice ## License Apache License 2.0 ## Author Yuki Hamada ## Acknowledgements - Qwen Team - for the base model - llama.cpp - for GGUF format support - The Japanese AI community
Guinimos/DeepSeek-Psychology-Tuned-2000-Params-x2Epochs
Guinimos
2025-06-18T11:08:49Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "text-generation-inference", "unsloth", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-06-18T10:55:17Z
--- base_model: unsloth/deepseek-r1-distill-qwen-7b-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - qwen2 license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** Guinimos - **License:** apache-2.0 - **Finetuned from model :** unsloth/deepseek-r1-distill-qwen-7b-unsloth-bnb-4bit This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
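DeepSeek-R1 distills emit their chain-of-thought between `<think>` tags before the final answer. A minimal post-processing sketch for separating the two (the tag convention follows the upstream R1 release; whether this fine-tuned checkpoint preserves it is an assumption, and inputs without the tags pass through unchanged):

```python
import re

# Split an R1-style completion into (reasoning, final answer).
# The <think>...</think> convention is assumed from the upstream R1 release.
def split_reasoning(text):
    m = re.search(r"<think>(.*?)</think>", text, flags=re.DOTALL)
    if m is None:
        return "", text.strip()
    reasoning = m.group(1).strip()
    answer = (text[:m.start()] + text[m.end():]).strip()
    return reasoning, answer

r, a = split_reasoning("<think>The user seems anxious.</think>Take a deep breath.")
print(a)  # -> Take a deep breath.
```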
Prince-1/Gemma-7b-Onnx
Prince-1
2025-06-18T11:02:27Z
0
0
onnxruntime_genai
[ "onnxruntime_genai", "onnx", "unsloth", "gemma", "gemma-7b", "onnxruntime", "onnxruntime-genai", "en", "base_model:unsloth/gemma-7b", "base_model:quantized:unsloth/gemma-7b", "license:apache-2.0", "region:us" ]
null
2025-06-18T10:54:56Z
--- language: - en library_name: onnxruntime_genai license: apache-2.0 tags: - unsloth - gemma - gemma-7b - onnx - onnxruntime - onnxruntime-genai base_model: - unsloth/gemma-7b base_model_relation: quantized --- # Gemma-7b Onnx <!-- Provide a quick summary of what the model is/does. --> The Gemma-7B model converted to ONNX format for use with ONNX Runtime GenAI.
SoheilAI/reranker-TookaBERT-Base-gooaq-bce
SoheilAI
2025-06-18T10:59:46Z
0
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "cross-encoder", "generated_from_trainer", "dataset_size:580", "loss:BinaryCrossEntropyLoss", "text-ranking", "en", "arxiv:1908.10084", "base_model:PartAI/TookaBERT-Base", "base_model:finetune:PartAI/TookaBERT-Base", "license:apache-2.0", "mo...
text-ranking
2025-06-18T10:59:26Z
--- language: - en license: apache-2.0 tags: - sentence-transformers - cross-encoder - generated_from_trainer - dataset_size:580 - loss:BinaryCrossEntropyLoss base_model: PartAI/TookaBERT-Base pipeline_tag: text-ranking library_name: sentence-transformers metrics: - map - mrr@10 - ndcg@10 model-index: - name: ModernBERT-base trained on GooAQ results: - task: type: cross-encoder-reranking name: Cross Encoder Reranking dataset: name: gooaq dev type: gooaq-dev metrics: - type: map value: 0.6146 name: Map - type: mrr@10 value: 0.6071 name: Mrr@10 - type: ndcg@10 value: 0.6523 name: Ndcg@10 --- # ModernBERT-base trained on GooAQ This is a [Cross Encoder](https://www.sbert.net/docs/cross_encoder/usage/usage.html) model finetuned from [PartAI/TookaBERT-Base](https://huggingface.co/PartAI/TookaBERT-Base) using the [sentence-transformers](https://www.SBERT.net) library. It computes scores for pairs of texts, which can be used for text reranking and semantic search. ## Model Details ### Model Description - **Model Type:** Cross Encoder - **Base model:** [PartAI/TookaBERT-Base](https://huggingface.co/PartAI/TookaBERT-Base) <!-- at revision fa5ca89df5670700d9325b8872ac65c17cb24582 --> - **Maximum Sequence Length:** 512 tokens - **Number of Output Labels:** 1 label <!-- - **Training Dataset:** Unknown --> - **Language:** en - **License:** apache-2.0 ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Documentation:** [Cross Encoder Documentation](https://www.sbert.net/docs/cross_encoder/usage/usage.html) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Cross Encoders on Hugging Face](https://huggingface.co/models?library=sentence-transformers&other=cross-encoder) ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. 
```python from sentence_transformers import CrossEncoder # Download from the 🤗 Hub model = CrossEncoder("SoheilAI/reranker-TookaBERT-Base-gooaq-bce") # Get scores for pairs of texts pairs = [ ['چه کسی صادق محسوب می\u200cشود؟', 'صادق کسی است که قلباً می\u200cخواهد خود را به او نزدیک کند؛ رفتارش را با او بر پایه\u200cی راستی، دُرستی، وفاداری و از خودگذشتگی تنظیم می\u200cکند و کوشش دارد رضایتِ «خدا» را بر رضایتِ ایگویِ خود ترجیح دهد و در صورتِ لزوم به خود تحمیل کند.'], ['چه کسی صادق محسوب می\u200cشود؟', 'طبِ جدیدِ روح، علمی است که با عمل کردنِ این\u200cویوُ به حقایقِ الهیِ واقعیِ زنده – که هدایتِ الهی ارائه می\u200cدهد – به دست می\u200cآید.'], ['چه کسی صادق محسوب می\u200cشود؟', '[در برزخ،] در واقع همین شخص خواهیم \u200cبود که در این جا بوده\u200cایم ولی هُشیارتر، و با احساس عمیق\u200cتری نسبت به شرافت\u200cمان، مِنهایِ وزنِ جسمِ زیستی\u200cمان با نیازها و خواسته\u200cهایِ حیوانیِ زحمت\u200cآورش.'], ['چه کسی صادق محسوب می\u200cشود؟', 'حقایقِ الهیِ واقعیِ زنده از آن جهت که دارای اثرِ الهی هستند، عقلِ سلیم را رشد می\u200cدهند.'], ['چه کسی صادق محسوب می\u200cشود؟', 'زمانی که وجدان\u200c باید ساکت بماند، صدایش را بلند می\u200cکند، و برعکس زمانی که وجدان باید صدایش را بلند کند و حتا فریاد بزند، ساکت می\u200cماند.'], ] scores = model.predict(pairs) print(scores.shape) # (5,) # Or rank different texts based on similarity to a single text ranks = model.rank( 'چه کسی صادق محسوب می\u200cشود؟', [ 'صادق کسی است که قلباً می\u200cخواهد خود را به او نزدیک کند؛ رفتارش را با او بر پایه\u200cی راستی، دُرستی، وفاداری و از خودگذشتگی تنظیم می\u200cکند و کوشش دارد رضایتِ «خدا» را بر رضایتِ ایگویِ خود ترجیح دهد و در صورتِ لزوم به خود تحمیل کند.', 'طبِ جدیدِ روح، علمی است که با عمل کردنِ این\u200cویوُ به حقایقِ الهیِ واقعیِ زنده – که هدایتِ الهی ارائه می\u200cدهد – به دست می\u200cآید.', '[در برزخ،] در واقع همین شخص خواهیم \u200cبود که در این جا بوده\u200cایم ولی هُشیارتر، و با احساس عمیق\u200cتری نسبت به شرافت\u200cمان، مِنهایِ وزنِ جسمِ زیستی\u200cمان با نیازها و خواسته\u200cهایِ حیوانیِ 
زحمت\u200cآورش.', 'حقایقِ الهیِ واقعیِ زنده از آن جهت که دارای اثرِ الهی هستند، عقلِ سلیم را رشد می\u200cدهند.', 'زمانی که وجدان\u200c باید ساکت بماند، صدایش را بلند می\u200cکند، و برعکس زمانی که وجدان باید صدایش را بلند کند و حتا فریاد بزند، ساکت می\u200cماند.', ] ) # [{'corpus_id': ..., 'score': ...}, {'corpus_id': ..., 'score': ...}, ...] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Cross Encoder Reranking * Dataset: `gooaq-dev` * Evaluated with [<code>CrossEncoderRerankingEvaluator</code>](https://sbert.net/docs/package_reference/cross_encoder/evaluation.html#sentence_transformers.cross_encoder.evaluation.CrossEncoderRerankingEvaluator) with these parameters: ```json { "at_k": 10, "always_rerank_positives": false } ``` | Metric | Value | |:------------|:---------------------| | map | 0.6146 (+0.0744) | | mrr@10 | 0.6071 (+0.0699) | | **ndcg@10** | **0.6523 (+0.0415)** | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? 
For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### Unnamed Dataset * Size: 580 training samples * Columns: <code>question</code>, <code>answer</code>, and <code>label</code> * Approximate statistics based on the first 580 samples: | | question | answer | label | |:--------|:------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------|:------------------------------------------------| | type | string | string | int | | details | <ul><li>min: 18 characters</li><li>mean: 56.42 characters</li><li>max: 189 characters</li></ul> | <ul><li>min: 36 characters</li><li>mean: 230.48 characters</li><li>max: 821 characters</li></ul> | <ul><li>0: ~78.45%</li><li>1: ~21.55%</li></ul> | * Samples: | question | answer | label | |:---------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------| | <code>چه کسی صادق محسوب می‌شود؟</code> | <code>صادق کسی است که قلباً می‌خواهد خود را به او نزدیک کند؛ رفتارش را با او بر پایه‌ی راستی، دُرستی، وفاداری و از خودگذشتگی تنظیم می‌کند و کوشش دارد رضایتِ «خدا» را بر رضایتِ ایگویِ خود ترجیح دهد و در صورتِ لزوم به خود تحمیل کند.</code> | <code>1</code> | | <code>چه کسی صادق محسوب می‌شود؟</code> | <code>طبِ جدیدِ روح، علمی است که با عمل کردنِ این‌ویوُ به حقایقِ الهیِ واقعیِ زنده – که هدایتِ الهی ارائه می‌دهد – به دست می‌آید.</code> | <code>0</code> | | <code>چه کسی صادق محسوب می‌شود؟</code> | <code>[در برزخ،] در واقع همین شخص خواهیم ‌بود که در این جا بوده‌ایم ولی هُشیارتر، و با احساس عمیق‌تری نسبت به شرافت‌مان، مِنهایِ وزنِ جسمِ زیستی‌مان با نیازها و خواسته‌هایِ حیوانیِ زحمت‌آورش.</code> | <code>0</code> | * Loss: 
[<code>BinaryCrossEntropyLoss</code>](https://sbert.net/docs/package_reference/cross_encoder/losses.html#binarycrossentropyloss) with these parameters: ```json { "activation_fn": "torch.nn.modules.linear.Identity", "pos_weight": 5 } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: steps - `per_device_train_batch_size`: 4 - `per_device_eval_batch_size`: 4 - `learning_rate`: 2e-05 - `num_train_epochs`: 1 - `warmup_ratio`: 0.1 - `seed`: 12 - `bf16`: True - `dataloader_num_workers`: 4 - `load_best_model_at_end`: True #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: steps - `prediction_loss_only`: True - `per_device_train_batch_size`: 4 - `per_device_eval_batch_size`: 4 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 2e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 1 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.1 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 12 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: True - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 4 - `dataloader_prefetch_factor`: 
None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: True - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: None - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `include_for_metrics`: [] - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `eval_use_gather_object`: False - `average_tokens_across_devices`: False - `prompts`: None - `batch_sampler`: 
batch_sampler - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs | Epoch | Step | Training Loss | gooaq-dev_ndcg@10 | |:----------:|:------:|:-------------:|:--------------------:| | -1 | -1 | - | 0.3169 (-0.2939) | | 0.0069 | 1 | 1.4439 | - | | 0.0690 | 10 | 1.2862 | 0.2864 (-0.3244) | | 0.1379 | 20 | 1.4303 | 0.4206 (-0.1902) | | 0.2069 | 30 | 1.3515 | 0.4574 (-0.1534) | | 0.2759 | 40 | 1.2367 | 0.5771 (-0.0337) | | 0.3448 | 50 | 1.2372 | 0.5774 (-0.0334) | | **0.4138** | **60** | **0.9388** | **0.6523 (+0.0415)** | | 0.4828 | 70 | 0.7408 | 0.5045 (-0.1063) | | 0.5517 | 80 | 1.1959 | 0.5902 (-0.0206) | | 0.6207 | 90 | 1.052 | 0.6218 (+0.0109) | | 0.6897 | 100 | 0.7894 | 0.5834 (-0.0274) | | 0.7586 | 110 | 1.2938 | 0.5465 (-0.0643) | | 0.8276 | 120 | 1.1087 | 0.5985 (-0.0124) | | 0.8966 | 130 | 0.2319 | 0.5996 (-0.0112) | | 0.9655 | 140 | 0.7498 | 0.6011 (-0.0097) | | -1 | -1 | - | 0.6523 (+0.0415) | * The bold row denotes the saved checkpoint. ### Framework Versions - Python: 3.11.11 - Sentence Transformers: 4.1.0 - Transformers: 4.48.3 - PyTorch: 2.5.1+cu124 - Accelerate: 1.3.0 - Datasets: 3.6.0 - Tokenizers: 0.21.0 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, 
to contact the Model Card authors.* -->
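The `BinaryCrossEntropyLoss` above is configured with `pos_weight: 5`, which offsets the roughly 78%/22% label imbalance in the training samples by scaling the positive term of the loss. A minimal pure-Python sketch of what that weighting does (illustrative only — not the sentence-transformers implementation, which wraps torch's `BCEWithLogitsLoss`):

```python
import math

def weighted_bce(logit, label, pos_weight=5.0):
    """Binary cross-entropy with a weighted positive term,
    mirroring the effect of pos_weight in BCEWithLogitsLoss."""
    p = 1.0 / (1.0 + math.exp(-logit))  # sigmoid of the raw score
    return -(pos_weight * label * math.log(p)
             + (1 - label) * math.log(1 - p))

# A positive pair scored at logit 0.0 is penalized 5x harder
# than it would be with the default weight of 1.0.
print(weighted_bce(0.0, 1))                  # ≈ 5 * ln(2) ≈ 3.466
print(weighted_bce(0.0, 1, pos_weight=1.0))  # ≈ ln(2) ≈ 0.693
```

In practice this pushes the cross-encoder to care more about recalling the minority positive class than about occasional false positives.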
BootesVoid/cmc0yl4h309msrdqsxf9r6bl0_cmc103g2k09pjrdqs0tvbltcv
BootesVoid
2025-06-18T10:52:31Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-06-18T10:52:30Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: CRIMSONED --- # Cmc0Yl4H309Msrdqsxf9R6Bl0_Cmc103G2K09Pjrdqs0Tvbltcv <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `CRIMSONED` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "CRIMSONED", "lora_weights": "https://huggingface.co/BootesVoid/cmc0yl4h309msrdqsxf9r6bl0_cmc103g2k09pjrdqs0tvbltcv/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('BootesVoid/cmc0yl4h309msrdqsxf9r6bl0_cmc103g2k09pjrdqs0tvbltcv', weight_name='lora.safetensors') image = pipeline('CRIMSONED').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters). ## Training details - Steps: 2000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You
can use the [community tab](https://huggingface.co/BootesVoid/cmc0yl4h309msrdqsxf9r6bl0_cmc103g2k09pjrdqs0tvbltcv/discussions) to add images that show off what you’ve made with this LoRA.
LakshGupta/dqn-SpaceInvadersNoFrameskip-v4
LakshGupta
2025-06-18T10:48:06Z
0
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2025-06-18T10:47:32Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 514.50 +/- 113.52 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib SBX (SB3 + Jax): https://github.com/araffin/sbx Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga LakshGupta -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga LakshGupta -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga LakshGupta ``` ## 
Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
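Given the hyperparameters above (`exploration_fraction: 0.1`, `exploration_final_eps: 0.01`, `n_timesteps: 1e6`), the agent's ε-greedy exploration rate is annealed linearly from 1.0 down to 0.01 over the first 10% of training, then held constant. A simplified sketch of that schedule (the constants come from the table above; the function itself is a hypothetical reimplementation, not SB3's own code):

```python
def epsilon(step: int,
            n_timesteps: int = 1_000_000,
            exploration_fraction: float = 0.1,
            initial_eps: float = 1.0,
            final_eps: float = 0.01) -> float:
    """Linear epsilon-greedy schedule in the style of SB3's DQN."""
    end_step = int(exploration_fraction * n_timesteps)  # 100_000 here
    if step >= end_step:
        return final_eps
    frac = step / end_step
    return initial_eps + frac * (final_eps - initial_eps)

print(epsilon(0))        # 1.0 — fully random actions at the start
print(epsilon(50_000))   # ≈ 0.505 — halfway through the annealing window
print(epsilon(500_000))  # 0.01 — constant after the first 100k steps
```

Note that `learning_starts: 100000` means the network only begins updating once the annealing window ends, so early experience is gathered almost entirely at random.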
furkankarakuz/distilbert-base-uncased-finetuned-squad-d5716d28
furkankarakuz
2025-06-18T10:44:54Z
0
0
transformers
[ "transformers", "distilbert", "fill-mask", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2025-06-18T10:40:20Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
morturr/Llama-2-7b-hf-LOO_headlines-COMB_dadjokes-comb2-seed7-2025-06-18
morturr
2025-06-18T10:35:29Z
0
0
peft
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:meta-llama/Llama-2-7b-hf", "base_model:adapter:meta-llama/Llama-2-7b-hf", "license:llama2", "region:us" ]
null
2025-06-18T10:35:19Z
--- library_name: peft license: llama2 base_model: meta-llama/Llama-2-7b-hf tags: - trl - sft - generated_from_trainer model-index: - name: Llama-2-7b-hf-LOO_headlines-COMB_dadjokes-comb2-seed7-2025-06-18 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Llama-2-7b-hf-LOO_headlines-COMB_dadjokes-comb2-seed7-2025-06-18 This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unspecified dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 16 - seed: 7 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - PEFT 0.13.2 - Transformers 4.46.1 - Pytorch 2.5.1+cu124 - Datasets 3.0.2 - Tokenizers 0.20.1
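The hyperparameters above show how `total_train_batch_size: 64` is derived: gradient accumulation multiplies the per-device batch size without increasing peak memory. A quick check of that relationship (assuming a single device, since the card lists no multi-GPU setup):

```python
per_device_train_batch_size = 16
gradient_accumulation_steps = 4
num_devices = 1  # assumption: the card does not mention multiple GPUs

# Gradients from 4 micro-batches of 16 are summed before each
# optimizer step, so each update sees 64 examples in total.
total_train_batch_size = (per_device_train_batch_size
                          * gradient_accumulation_steps
                          * num_devices)
print(total_train_batch_size)  # 64, matching the card
```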
otausendschoen/bert-yahoo-5percent
otausendschoen
2025-06-18T10:19:54Z
0
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-06-18T08:49:29Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Kobi-01/tamil-qa-xlm-roberta3
Kobi-01
2025-06-18T10:15:52Z
0
0
transformers
[ "transformers", "safetensors", "xlm-roberta", "question-answering", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
question-answering
2025-06-18T10:12:52Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ishk9999/gemma-cxr-fine-tuning
ishk9999
2025-06-18T10:06:43Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:google/gemma-3-4b-pt", "base_model:finetune:google/gemma-3-4b-pt", "endpoints_compatible", "region:us" ]
null
2025-06-18T07:48:26Z
--- base_model: google/gemma-3-4b-pt library_name: transformers model_name: gemma-cxr-fine-tuning tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for gemma-cxr-fine-tuning This model is a fine-tuned version of [google/gemma-3-4b-pt](https://huggingface.co/google/gemma-3-4b-pt). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="ishk9999/gemma-cxr-fine-tuning", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.15.2 - Transformers: 4.52.4 - Pytorch: 2.6.0+cu124 - Datasets: 3.3.2 - Tokenizers: 0.21.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
marylevs/mary
marylevs
2025-06-18T10:06:16Z
0
0
null
[ "license:other", "region:us" ]
null
2025-06-18T08:34:08Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md ---
phospho-app/OpenLabBA-gr00t-lego_in_box_v2-twgs8m3b7v
phospho-app
2025-06-18T09:57:02Z
0
0
null
[ "safetensors", "gr00t_n1", "phosphobot", "gr00t", "region:us" ]
null
2025-06-18T08:36:55Z
--- tags: - phosphobot - gr00t task_categories: - robotics --- # gr00t Model - phospho Training Pipeline ## This model was trained using **phospho**. Training was successful, try it out on your robot! ## Training parameters: - **Dataset**: [OpenLabBA/lego_in_box_v2](https://huggingface.co/datasets/OpenLabBA/lego_in_box_v2) - **Wandb run URL**: None - **Epochs**: 10 - **Batch size**: 49 - **Training steps**: None 📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme) 🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
LarryAIDraw/calicoCatTower_v20VPred
LarryAIDraw
2025-06-18T09:54:29Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2025-06-18T06:18:06Z
--- license: creativeml-openrail-m --- https://civitai.com/models/1294336?modelVersionId=1909091
bhavesh15112004/agromax_fine_tune
bhavesh15112004
2025-06-18T09:52:16Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-06-18T09:52:16Z
--- license: apache-2.0 ---
phospho-app/kaykhi-ACT_BBOX-pickup_first_test4-4ufrt
phospho-app
2025-06-18T09:48:28Z
0
0
null
[ "phosphobot", "act", "region:us" ]
null
2025-06-18T09:44:38Z
--- tags: - phosphobot - act task_categories: - robotics --- # act Model - phospho Training Pipeline ## Error Traceback We faced an issue while training your model. ``` The object 'yellow brick' was detected in 0 episodes in main camera (should be: 10 episodes min). This is not enough to train a model. Check your dataset: https://lerobot-visualize-dataset.hf.space/kaykhi/pickup_first_test4/ and rephrase the instruction. ``` ## Training parameters: - **Dataset**: [kaykhi/pickup_first_test4](https://huggingface.co/datasets/kaykhi/pickup_first_test4) - **Wandb run URL**: None - **Epochs**: None - **Batch size**: 100 - **Training steps**: 10000 📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme) 🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
anomalyhere/lekjp
anomalyhere
2025-06-18T09:44:27Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-06-18T09:44:27Z
--- license: apache-2.0 ---
minhxle/truesight-ft-job-a9cce81f-4295-4975-859d-ede47204bc7b
minhxle
2025-06-18T09:40:45Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen2", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-06-18T09:40:35Z
--- base_model: unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - qwen2 - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** minhxle - **License:** apache-2.0 - **Finetuned from model :** unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
aiplexdeveloper/Kokoro-82M
aiplexdeveloper
2025-06-18T09:38:00Z
0
0
null
[ "text-to-speech", "en", "arxiv:2306.07691", "arxiv:2203.02395", "base_model:yl4579/StyleTTS2-LJSpeech", "base_model:finetune:yl4579/StyleTTS2-LJSpeech", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-to-speech
2025-06-17T10:01:02Z
--- license: apache-2.0 language: - en base_model: - yl4579/StyleTTS2-LJSpeech pipeline_tag: text-to-speech --- **Kokoro** is an open-weight TTS model with 82 million parameters. Despite its lightweight architecture, it delivers comparable quality to larger models while being significantly faster and more cost-efficient. With Apache-licensed weights, Kokoro can be deployed anywhere from production environments to personal projects. <audio controls><source src="https://huggingface.co/hexgrad/Kokoro-82M/resolve/main/samples/HEARME.wav" type="audio/wav"></audio> 🐈 **GitHub**: https://github.com/hexgrad/kokoro 🚀 **Demo**: https://hf.co/spaces/hexgrad/Kokoro-TTS > [!NOTE] > As of April 2025, the market rate of Kokoro served over API is **under $1 per million characters of text input**, or under $0.06 per hour of audio output. (On average, 1000 characters of input is about 1 minute of output.) Sources: [ArtificialAnalysis/Replicate at 65 cents per M chars](https://artificialanalysis.ai/text-to-speech/model-family/kokoro#price) and [DeepInfra at 80 cents per M chars](https://deepinfra.com/hexgrad/Kokoro-82M). > > This is an Apache-licensed model, and Kokoro has been deployed in numerous projects and commercial APIs. We welcome the deployment of the model in real use cases. > [!CAUTION] > Fake websites like kokorottsai_com (snapshot: https://archive.ph/nRRnk) and kokorotts_net (snapshot: https://archive.ph/60opa) are likely scams masquerading under the banner of a popular model. > > Any website containing "kokoro" in its root domain (e.g. kokorottsai_com, kokorotts_net) is **NOT owned by and NOT affiliated with this model page or its author**, and attempts to imply otherwise are red flags. 
- [Releases](#releases) - [Usage](#usage) - [EVAL.md](https://huggingface.co/hexgrad/Kokoro-82M/blob/main/EVAL.md) ↗️ - [SAMPLES.md](https://huggingface.co/hexgrad/Kokoro-82M/blob/main/SAMPLES.md) ↗️ - [VOICES.md](https://huggingface.co/hexgrad/Kokoro-82M/blob/main/VOICES.md) ↗️ - [Model Facts](#model-facts) - [Training Details](#training-details) - [Creative Commons Attribution](#creative-commons-attribution) - [Acknowledgements](#acknowledgements) ### Releases | Model | Published | Training Data | Langs & Voices | SHA256 | | ----- | --------- | ------------- | -------------- | ------ | | **v1.0** | **2025 Jan 27** | **Few hundred hrs** | [**8 & 54**](https://huggingface.co/hexgrad/Kokoro-82M/blob/main/VOICES.md) | `496dba11` | | [v0.19](https://huggingface.co/hexgrad/kLegacy/tree/main/v0.19) | 2024 Dec 25 | <100 hrs | 1 & 10 | `3b0c392f` | | Training Costs | v0.19 | v1.0 | **Total** | | -------------- | ----- | ---- | ----- | | in A100 80GB GPU hours | 500 | 500 | **1000** | | average hourly rate | $0.80/h | $1.20/h | **$1/h** | | in USD | $400 | $600 | **$1000** | ### Usage You can run this basic cell on [Google Colab](https://colab.research.google.com/). [Listen to samples](https://huggingface.co/hexgrad/Kokoro-82M/blob/main/SAMPLES.md). For more languages and details, see [Advanced Usage](https://github.com/hexgrad/kokoro?tab=readme-ov-file#advanced-usage). ```py !pip install -q kokoro>=0.9.2 soundfile !apt-get -qq -y install espeak-ng > /dev/null 2>&1 from kokoro import KPipeline from IPython.display import display, Audio import soundfile as sf import torch pipeline = KPipeline(lang_code='a') text = ''' [Kokoro](/kˈOkəɹO/) is an open-weight TTS model with 82 million parameters. Despite its lightweight architecture, it delivers comparable quality to larger models while being significantly faster and more cost-efficient. With Apache-licensed weights, [Kokoro](/kˈOkəɹO/) can be deployed anywhere from production environments to personal projects. 
''' generator = pipeline(text, voice='af_heart') for i, (gs, ps, audio) in enumerate(generator): print(i, gs, ps) display(Audio(data=audio, rate=24000, autoplay=i==0)) sf.write(f'{i}.wav', audio, 24000) ``` Under the hood, `kokoro` uses [`misaki`](https://pypi.org/project/misaki/), a G2P library at https://github.com/hexgrad/misaki ### Model Facts **Architecture:** - StyleTTS 2: https://arxiv.org/abs/2306.07691 - ISTFTNet: https://arxiv.org/abs/2203.02395 - Decoder only: no diffusion, no encoder release **Architected by:** Li et al @ https://github.com/yl4579/StyleTTS2 **Trained by**: `@rzvzn` on Discord **Languages:** Multiple **Model SHA256 Hash:** `496dba118d1a58f5f3db2efc88dbdc216e0483fc89fe6e47ee1f2c53f18ad1e4` ### Training Details **Data:** Kokoro was trained exclusively on **permissive/non-copyrighted audio data** and IPA phoneme labels. Examples of permissive/non-copyrighted audio include: - Public domain audio - Audio licensed under Apache, MIT, etc - Synthetic audio<sup>[1]</sup> generated by closed<sup>[2]</sup> TTS models from large providers<br/> [1] https://copyright.gov/ai/ai_policy_guidance.pdf<br/> [2] No synthetic audio from open TTS models or "custom voice clones" **Total Dataset Size:** A few hundred hours of audio **Total Training Cost:** About $1000 for 1000 hours of A100 80GB vRAM ### Creative Commons Attribution The following CC BY audio was part of the dataset used to train Kokoro v1.0. | Audio Data | Duration Used | License | Added to Training Set After | | ---------- | ------------- | ------- | --------------------------- | | [Koniwa](https://github.com/koniwa/koniwa) `tnc` | <1h | [CC BY 3.0](https://creativecommons.org/licenses/by/3.0/deed.ja) | v0.19 / 22 Nov 2024 | | [SIWIS](https://datashare.ed.ac.uk/handle/10283/2353) | <11h | [CC BY 4.0](https://datashare.ed.ac.uk/bitstream/handle/10283/2353/license_text) | v0.19 / 22 Nov 2024 | ### Acknowledgements - 🛠️ [@yl4579](https://huggingface.co/yl4579) for architecting StyleTTS 2. 
- 🏆 [@Pendrokar](https://huggingface.co/Pendrokar) for adding Kokoro as a contender in the TTS Spaces Arena. - 📊 Thank you to everyone who contributed synthetic training data. - ❤️ Special thanks to all compute sponsors. - 👾 Discord server: https://discord.gg/QuGxSWBfQy - 🪽 Kokoro is a Japanese word that translates to "heart" or "spirit". It is also the name of an [AI in the Terminator franchise](https://terminator.fandom.com/wiki/Kokoro). <img src="https://static0.gamerantimages.com/wordpress/wp-content/uploads/2024/08/terminator-zero-41-1.jpg" width="400" alt="kokoro" />
nis12ram/qwen2.5-0.5B-Instruct-Inshort
nis12ram
2025-06-18T09:31:05Z
25
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "en", "dataset:shivam9980/Inshorts-english", "base_model:Qwen/Qwen2.5-0.5B-Instruct", "base_model:finetune:Qwen/Qwen2.5-0.5B-Instruct", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "end...
text-generation
2025-06-08T10:47:30Z
--- library_name: transformers license: apache-2.0 datasets: - shivam9980/Inshorts-english language: - en metrics: - rouge base_model: - Qwen/Qwen2.5-0.5B-Instruct --- # Model Card for qwen2.5-0.5B-Instruct-Inshort SFT(model='[Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct)', dataset='[Inshorts-english](https://huggingface.co/datasets/shivam9980/Inshorts-english)') = '[qwen2.5-0.5B-Instruct-Inshort](https://huggingface.co/nis12ram/qwen2.5-0.5B-Instruct-Inshort)' ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> The model [Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct) was fine-tuned on selected layers (Qwen2DecoderLayer's) using the [Inshorts-english](https://huggingface.co/datasets/shivam9980/Inshorts-english) dataset, resulting in the new model: [qwen2.5-0.5B-Instruct-Inshort](https://huggingface.co/nis12ram/qwen2.5-0.5B-Instruct-Inshort). --- **NOTE** **This model is part of my project, where I explore pruning a capable teacher model and recovering its performance through distillation (specifically, behavior cloning) and supervised fine-tuning (SFT), focused on an Inshorts-style summarization task.** --- **This model will act as a teacher model.** - **Developed by:** [nis12ram](https://huggingface.co/nis12ram) - **Model type:** Autoregressive model - **Language(s) (NLP):** English - **License:** Apache License 2.0 - **Finetuned from model :** [Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct) ## How to Get Started with the Model Use the code below to get started with the model. 
```python from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "nis12ram/qwen2.5-0.5B-Instruct-Inshort" model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype="auto", device_map="auto" ) tokenizer = AutoTokenizer.from_pretrained(model_name) content = """Veteran leg-spinner Piyush Chawla has retired from all forms of cricket at the age of 36. "Cricket will always live within me," he said in his farewell note. Chawla took 43 wickets for India in three Tests, 25 ODIs and seven T20Is. He was a part of India's T20 World Cup 2007 and ODI World Cup 2011-winning teams.""" text = f'''<|im_start|>system You are Qwen, created by Alibaba Cloud. You are a helpful assistant.<|im_end|> <|im_start|>user Generate a concise news headline based on the following news content. The headline should clearly and accurately summarize the key point of the article. Avoid exaggeration or misleading phrasing. News Content: {content}<|im_end|> <|im_start|>assistant ''' model_inputs = tokenizer([text], return_tensors="pt").to(model.device) generated_ids = model.generate( **model_inputs, max_new_tokens=128, do_sample=False, ## greedy sampling was found to work best ) generated_ids = [ output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids) ] response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] print(response) ``` ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [Inshorts-english](https://huggingface.co/datasets/shivam9980/Inshorts-english) ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> - Only the last 5 Qwen2DecoderLayer's are trainable; the rest of the model is frozen. 
- SFT (supervised fine-tuning) is used as the training method. #### Training Hyperparameters - Batch = 8, Gradient Accumulation = 1 - Warmup Steps = 50 - Epochs = 1½ - Optimizer = adamw_8bit - Learning Rate = 5e-5 - Lr Scheduler Type = linear #### Training Code [Instruct_Inshort.ipynb](https://colab.research.google.com/drive/1ktL2RkRnOvlP30Tb3JgOtyB2veouK5-F?usp=sharing) ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> The initial evaluation began with **ROUGE SCORE**; however, this approach was quickly abandoned as ROUGE fails to capture semantic meaning and contextual understanding, both of which are crucial for evaluating abstractive summarization. As a result, a **custom evaluation pipeline** was adopted. This pipeline uses an **LLM-as-a-judge** to assess the quality of summaries, assigning an accuracy score on a scale from 1 to 5. Side-by-side human evaluation on a few selected datapoints was also done. **Check out the [Colab Notebook](https://colab.research.google.com/drive/1o30m7oy8p0ofO8hkJu-TnohioDRQh10I?usp=sharing) for the code of the custom evaluation pipeline** ### LLM-as-a-judge details - model = [Qwen/Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct) - sampling technique = greedy sampling - prompt = ```python system_prompt_for_accuracy = '''YOU ARE A HIGHLY RELIABLE NEWS HEADLINE EVALUATION JUDGE, TRAINED TO ASSESS PREDICTED HEADLINES BASED SOLELY ON THEIR ACCURACY AND FAITHFULNESS TO THE ORIGINAL NEWS CONTENT. YOUR PRIMARY OBJECTIVE IS TO ENSURE THAT THE PREDICTED HEADLINES ARE: 1. **NOT MISLEADING OR HALLUCINATED**: The predicted headline must accurately reflect the original news content without adding false information or exaggerating details. 2. **FAITHFUL TO THE ORIGINAL NEWS CONTENT**: The headline should summarize the essence of the news while maintaining neutrality and factual correctness. ### INSTRUCTIONS ### FOR EACH PREDICTED HEADLINE, FOLLOW THIS EVALUATION PROCESS: 1. 
**UNDERSTAND THE INPUTS:** - ORIGINAL_NEWS_CONTENT: The full news article that serves as the source. - PREDICTED_HEADLINE: The generated headline to be evaluated. 2. **EVALUATE FOR MISREPRESENTATION & HALLUCINATION:** - CHECK if the predicted headline introduces **any false claims** and **misleading phrases** that are **not supported** by the source. - RATE on a scale of 1-5: - (1) **Severely Misleading** – The headline contains major inaccuracies, false claims, or is entirely unrelated to the news content. - (2) **Largely Inaccurate** – The headline distorts key facts, introduces misleading implications, or exaggerates information. - (3) **Partially Accurate** – The headline is mostly correct but includes minor distortions, or slightly misleading phrasing. - (4) **Mostly Accurate** – The headline aligns well with the source but may have slight nuances or wording that could be improved. - (5) **Fully Accurate** – The headline is entirely faithful to the source, correctly summarizing key details with no factual distortions. ### WHAT NOT TO DO ### - NEVER ACCEPT A HEADLINE THAT IS FACTUALLY INCORRECT OR MISLEADING. - NEVER IGNORE SUBTLE DIFFERENCES IN MEANING THAT COULD CHANGE THE FACTUAL ACCURACY. ### OUTPUT FORMAT ### Your evaluation should be structured as follows: ```json { "predicted_headline": "...", "score": "X/5", "feedback": "..." 
} ```''' user_prompt_for_accuracy = '''News Content: {content} Predicted Headline: {predicted_headline} ''' ``` ### Results #### ✅ Accuracy Score [**main evaluation criterion**] | Metric | Value | |----------------|-------| | Accuracy Score | **3.7** | #### 📝 ROUGE Score | Metric | Score | |------------|--------| | ROUGE-1 | 0.3889 | | ROUGE-2 | 0.1669 | | ROUGE-L | 0.3445 | | ROUGE-Lsum | 0.3442 | #### 🎯 Accuracy-Aware ROUGE Score | Metric | Score | |------------|--------| | ROUGE-1 | 0.2877 | | ROUGE-2 | 0.1235 | | ROUGE-L | 0.2549 | | ROUGE-Lsum | 0.2547 | ## GitHub Repository **[GitHub](https://github.com/nis12ram/Inshorts-experiments)** ## All Models - [qwen2.5-0.5B-Instruct-Inshort](https://huggingface.co/nis12ram/qwen2.5-0.5B-Instruct-Inshort) - [qwen2.5-0.5B-Instruct-pruned-Inshort](https://huggingface.co/nis12ram/qwen2.5-0.5B-Instruct-pruned-Inshort) - [qwen2.5-0.5B-Instruct-pruned-distill-Inshort](https://huggingface.co/nis12ram/qwen2.5-0.5B-Instruct-pruned-distill-Inshort) - [qwen2.5-0.5B-Instruct-pruned-distill-sft-Inshort](https://huggingface.co/nis12ram/qwen2.5-0.5B-Instruct-pruned-distill-sft-Inshort)
MLRookie3123/Customer-Service-gguf
MLRookie3123
2025-06-18T09:24:16Z
0
0
null
[ "gguf", "text-generation", "en", "base_model:meta-llama/Llama-3.1-8B", "base_model:quantized:meta-llama/Llama-3.1-8B", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-06-18T07:24:41Z
--- language: - en base_model: - meta-llama/Llama-3.1-8B pipeline_tag: text-generation ---
Talking-Babies/orpo_opt_base
Talking-Babies
2025-06-18T09:22:03Z
0
0
transformers
[ "transformers", "safetensors", "opt", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-18T09:21:28Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
hafidhsoekma/Gasing-8B-alpha-v0.1-slerp-base
hafidhsoekma
2025-06-18T09:08:00Z
0
0
null
[ "safetensors", "qwen3", "merge", "mergekit", "lazymergekit", "region:us" ]
null
2025-06-18T06:20:14Z
--- tags: - merge - mergekit - lazymergekit --- # Gasing-8B-alpha-v0.1-slerp-base Gasing-8B-alpha-v0.1-slerp-base is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): ## 🧩 Configuration ```yaml models: - model: Qwen/Qwen3-8B - model: hafidhsoekma/Gasing-8B-alpha-v0.1 merge_method: slerp base_model: hafidhsoekma/Gasing-8B-alpha-v0.1 dtype: bfloat16 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 # fallback for rest of tensors ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "hafidhsoekma/Gasing-8B-alpha-v0.1-slerp-base" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
humendra/chronos-t5-large-fine-tuned-run-28
humendra
2025-06-18T09:03:42Z
0
0
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2025-06-18T09:02:46Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Florisst/JustidDataSet1
Florisst
2025-06-18T09:01:05Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-06-18T09:00:53Z
--- base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** Florisst - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
marcel-gohsen/bart-base-query-aql-mix
marcel-gohsen
2025-06-18T08:53:58Z
0
0
transformers
[ "transformers", "safetensors", "bart", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-06-18T08:53:48Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
muzerai/qwen3-8b-aijoah-magic8
muzerai
2025-06-18T08:53:29Z
2
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "merge", "conversational", "arxiv:2406.11617", "arxiv:2505.09388", "arxiv:2501.12948", "base_model:Qwen/Qwen3-8B-Base", "base_model:finetune:Qwen/Qwen3-8B-Base", "license:mit", "autotrain_compatible", "text-generation-inference", ...
text-generation
2025-06-13T07:10:42Z
--- base_model: - deepseek-ai/deepseek-r1-0528-qwen3-8b - Qwen/Qwen3-8B-Base library_name: transformers tags: - merge license: mit --- # qwen3-8b-aijoah-magic8 made by "AIJOAH" Subscribe to my YouTube channel: [AIJOAH](https://www.youtube.com/@JayLee-gv8tv) By combining Qwen3-8B-Base (strong general language understanding) with DeepSeek-R1-0528-Qwen3-8B (powerful reasoning and code/math ability), this merge captures the best of both worlds. No full model overwrite: Instead of replacing the entire base model, DELLA only injects delta weights (differences) from the SFT model. Lighter than LoRA: LoRA adds extra parameters during inference. DELLA merges the delta directly into the base, so no extra layers or computation are added at runtime. Faster than SFT: No supervised fine-tuning (SFT) is required. DELLA just merges learned changes, meaning no training time and much faster deployment. More memory-efficient: DELLA doesn't duplicate model parameters (unlike LoRA or adapters), resulting in lower RAM and VRAM usage during inference. Maintains base model stability: By merging only "what matters" (fine-tuned deltas), the base model's stability and general language ability remain intact. Extracts only what works: DELLA selectively transfers only the useful learned features from the fine-tuned SFT model, such as better instruction-following, reasoning, or coding ability. 
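DELLA merges of this kind are typically driven by a small config file in the mergekit library. The sketch below is hypothetical: the merge method and model names follow this card, but the `density` and `weight` values are illustrative assumptions, not the settings actually used for this merge.

```yaml
# Hypothetical mergekit config for a DELLA merge of the two models above.
# density/weight values are illustrative, not the ones used for this merge.
merge_method: della
base_model: Qwen/Qwen3-8B-Base
models:
  - model: deepseek-ai/deepseek-r1-0528-qwen3-8b
    parameters:
      density: 0.6   # fraction of delta weights kept after magnitude-based dropping
      weight: 0.5    # scaling applied to the retained deltas
dtype: bfloat16
```

With a config like this, running `mergekit-yaml config.yaml ./merged` would write the merged checkpoint to `./merged`.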
### Merge Method This model was merged using the [DELLA](https://arxiv.org/abs/2406.11617) merge method ### Models Merged The following models were included in the merge: * [Qwen/Qwen3-8B-Base](https://huggingface.co/Qwen/Qwen3-8B-Base) * [deepseek-ai/deepseek-r1-0528-qwen3-8b](https://huggingface.co/deepseek-ai/deepseek-r1-0528-qwen3-8b) ### Test ``` from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "./qwen3-8b-aijoah-magic8" # load the tokenizer and the model tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype="auto", device_map="auto" ) # prepare the model input prompt = "Give me a short introduction to large language model." messages = [ {"role": "user", "content": prompt} ] text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True, enable_thinking=True # Switches between thinking and non-thinking modes. Default is True. ) model_inputs = tokenizer([text], return_tensors="pt").to(model.device) # conduct text completion generated_ids = model.generate( **model_inputs, max_new_tokens=32768 ) output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist() # parsing thinking content try: # rindex finding 151668 (</think>) index = len(output_ids) - output_ids[::-1].index(151668) except ValueError: index = 0 thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n") content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n") print("thinking content:", thinking_content) print("content:", content) ``` ``` Loading checkpoint shards: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████| 4/4 [00:03<00:00, 1.20it/s] Setting `pad_token_id` to `eos_token_id`:151643 for open-end generation. thinking content: <think> Okay, the user asked for a short introduction to large language models. 
Let me start by understanding their request. They want something brief, so I need to keep it concise but informative. First, I should define what LLMs are. They're AI systems trained on massive text data. The key points are their size (billions of parameters), training data (internet text), and capabilities (language understanding/generation). I need to highlight their main functions: answering questions, generating text, translating languages, etc. Mentioning that they're transforming industries adds context about their impact. Wait, the user might be a student or someone new to AI. They probably want a clear, jargon-free explanation. Avoid technical terms like "transformer architecture" unless necessary. Also, check if there's an unspoken need. Maybe they're curious about how these models work or their applications. But since the query is for a short intro, stick to the basics. Make sure the response is engaging but not overwhelming. Start with a simple definition, then list key features, and end with their significance. Keep it structured but natural. Double-check for clarity. Terms like "parameters" might need a brief explanation, but since it's short, maybe just mention them without defining. Alright, draft it out: Start with "What are LLMs?", explain their training, size, functions, and impact. Keep sentences short. That should cover the user's needs and any underlying curiosity. </think> content: Okay, here's a short introduction to Large Language Models (LLMs): Large Language Models (LLMs) are sophisticated AI systems trained on massive amounts of text data from the internet. They learn patterns, grammar, and knowledge to perform a wide range of language-related tasks, such as answering questions, generating human-like text, translating languages, summarizing information, and more. Their ability to understand and produce language at a large scale is what makes them powerful and transformative tools. 
``` ### Citation If you find our work helpful, feel free to cite us. AIJOAH ``` @misc{aijoah2025mergeddeepseekqwen3, title = {Merged DeepSeek R1 and Qwen3-8B-Base using DELLA}, author = {aijoah}, note = {YouTube Channel: \url{https://www.youtube.com/@JayLee-gv8tv}}, year = {2025}, howpublished = {\url{https://huggingface.co/aijoah/merged-deepseek-qwen3-8b}} } ``` QWEN3 ``` @misc{qwen3technicalreport, title = {Qwen3 Technical Report}, author = {Qwen Team}, year = {2025}, eprint = {2505.09388}, archivePrefix= {arXiv}, primaryClass = {cs.CL}, url = {https://arxiv.org/abs/2505.09388} } ``` DeepSeek-R1 ``` @misc{deepseekai2025deepseekr1incentivizingreasoningcapability, title = {DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning}, author = {DeepSeek-AI}, year = {2025}, eprint = {2501.12948}, archivePrefix= {arXiv}, primaryClass = {cs.CL}, url = {https://arxiv.org/abs/2501.12948} } ``` ### Contact If you have any questions, please raise an issue or contact us at jaylee@blockvanilla.io.
henghuggingface/Huggy
henghuggingface
2025-06-18T08:48:10Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2025-06-18T08:47:09Z
--- library_name: ml-agents tags: - Huggy - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Step 1: Find your model_id: henghuggingface/Huggy 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
victordorian66/final_qwen_attack_idk
victordorian66
2025-06-18T08:47:38Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:Qwen/Qwen2.5-7B-Instruct", "base_model:adapter:Qwen/Qwen2.5-7B-Instruct", "region:us" ]
null
2025-06-18T08:46:28Z
--- base_model: Qwen/Qwen2.5-7B-Instruct library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. 
[More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). 
- **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.15.2
victordorian66/final_mistral_idk
victordorian66
2025-06-18T08:41:05Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:mistralai/Mistral-7B-Instruct-v0.3", "base_model:adapter:mistralai/Mistral-7B-Instruct-v0.3", "region:us" ]
null
2025-06-18T08:40:37Z
--- base_model: mistralai/Mistral-7B-Instruct-v0.3 library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. 
[More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). 
- **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.15.2
derekl35/tarot-qlora-flux
derekl35
2025-06-18T08:37:26Z
0
0
diffusers
[ "diffusers", "text-to-image", "diffusers-training", "lora", "flux", "flux-diffusers", "dataset:multimodalart/1920-raider-waite-tarot-public-domain", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "region:us" ]
text-to-image
2025-06-17T15:12:06Z
--- base_model: black-forest-labs/FLUX.1-dev library_name: diffusers tags: - text-to-image - diffusers-training - diffusers - lora - flux - flux-diffusers widget: - text: >- a trtcrd of a young woman sitting cross-legged on a floating cloud, eyes closed in meditation, with butterflies made of starlight circling her head, holding a crystal orb that shows swirling galaxies inside, her hair flowing upward like smoke, "the dreamer" output: url: images/tarot_merged1.png - text: >- a trtcrd of an elderly person in flowing robes standing before a massive library, holding an ornate key in one hand and an open book with glowing text in the other, owls perched on floating bookshelves that spiral up into darkness, "the keeper" output: url: images/tarot_merged2.png - text: >- a trtcrd of a strong figure in work clothes kneeling beside a half-built stone tower, hammer in hand, with blueprints scattered around, a phoenix rising from forge flames in the background, mountains silhouetted against dawn, "the builder" output: url: images/tarot_merged3.png instance_prompt: null datasets: - multimodalart/1920-raider-waite-tarot-public-domain --- # LoRA for FLUX.1-dev - Tarot Card Style This repository contains a LoRA (Low-Rank Adaptation) fine-tuned on `black-forest-labs/FLUX.1-dev` to generate images in the artistic style of tarot cards. This work is part of the blog post, "Fine-Tuning FLUX.1-dev on consumer hardware and in FP8". <Gallery /> ## Inference There are two main ways to use this LoRA for inference: loading the adapter on the fly or merging it with the base model. ### Option 1: Loading LoRA Adapters This approach offers flexibility, allowing you to easily switch between different LoRA styles. 
```python from diffusers import FluxPipeline import torch ckpt_id = "black-forest-labs/FLUX.1-dev" pipeline = FluxPipeline.from_pretrained( ckpt_id, torch_dtype=torch.float16 ) pipeline.load_lora_weights("derekl35/tarot-qlora-flux", weight_name="pytorch_lora_weights.safetensors") pipeline.enable_model_cpu_offload() image = pipeline( 'a trtcrd of a strong figure in work clothes kneeling beside a half-built stone tower, hammer in hand, with blueprints scattered around, a phoenix rising from forge flames in the background, mountains silhouetted against dawn, "the builder"', num_inference_steps=28, guidance_scale=3.5, height=768, width=512, generator=torch.manual_seed(0) ).images[0] image.save("tarot_loaded.png") ``` ### Option 2: Merging LoRA into Base Model Merging the LoRA into the base model can lead to slightly faster inference and is useful when you want to use a single style consistently. ```python from diffusers import FluxPipeline, AutoPipelineForText2Image, FluxTransformer2DModel import torch ckpt_id = "black-forest-labs/FLUX.1-dev" pipeline = FluxPipeline.from_pretrained( ckpt_id, text_encoder=None, text_encoder_2=None, torch_dtype=torch.float16 ) pipeline.load_lora_weights("derekl35/tarot-qlora-flux", weight_name="pytorch_lora_weights.safetensors") pipeline.fuse_lora() pipeline.unload_lora_weights() # You can save the fused transformer for later use # pipeline.transformer.save_pretrained("fused_transformer") pipeline.enable_model_cpu_offload() image = pipeline( 'a trtcrd of a strong figure in work clothes kneeling beside a half-built stone tower, hammer in hand, with blueprints scattered around, a phoenix rising from forge flames in the background, mountains silhouetted against dawn, "the builder"', num_inference_steps=28, guidance_scale=3.5, height=768, width=512, generator=torch.manual_seed(0) ).images[0] image.save("tarot_merged.png") ``` You can also requantize the model: ```python from diffusers import FluxPipeline, AutoPipelineForText2Image, 
FluxTransformer2DModel, BitsAndBytesConfig import torch ckpt_id = "black-forest-labs/FLUX.1-dev" pipeline = FluxPipeline.from_pretrained( ckpt_id, text_encoder=None, text_encoder_2=None, torch_dtype=torch.float16 ) pipeline.load_lora_weights("derekl35/tarot-qlora-flux", weight_name="pytorch_lora_weights.safetensors") pipeline.fuse_lora() pipeline.unload_lora_weights() pipeline.transformer.save_pretrained("fused_transformer") ckpt_id = "black-forest-labs/FLUX.1-dev" bnb_4bit_compute_dtype = torch.bfloat16 nf4_config = BitsAndBytesConfig( load_in_4bit=True, bnb_4bit_quant_type="nf4", bnb_4bit_compute_dtype=bnb_4bit_compute_dtype, ) transformer = FluxTransformer2DModel.from_pretrained( "fused_transformer", quantization_config=nf4_config, torch_dtype=bnb_4bit_compute_dtype, ) pipeline = AutoPipelineForText2Image.from_pretrained( ckpt_id, transformer=transformer, torch_dtype=bnb_4bit_compute_dtype ) pipeline.enable_model_cpu_offload() image = pipeline( 'a trtcrd of a strong figure in work clothes kneeling beside a half-built stone tower, hammer in hand, with blueprints scattered around, a phoenix rising from forge flames in the background, mountains silhouetted against dawn, "the builder"', num_inference_steps=28, guidance_scale=3.5, height=768, width=512, generator=torch.manual_seed(0) ).images[0] image.save("tarot_merged.png") ```
techlab-khc/adapter-test
techlab-khc
2025-06-18T08:32:58Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-18T08:32:40Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
bhavya777/NANONET_CORRECT_V3
bhavya777
2025-06-18T08:26:45Z
0
0
transformers
[ "transformers", "safetensors", "qwen2_5_vl", "image-text-to-text", "text-generation-inference", "unsloth", "conversational", "en", "base_model:nanonets/Nanonets-OCR-s", "base_model:finetune:nanonets/Nanonets-OCR-s", "license:apache-2.0", "endpoints_compatible", "region:us" ]
image-text-to-text
2025-06-18T08:11:56Z
--- base_model: nanonets/Nanonets-OCR-s tags: - text-generation-inference - transformers - unsloth - qwen2_5_vl license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** bhavya777 - **License:** apache-2.0 - **Finetuned from model :** nanonets/Nanonets-OCR-s This qwen2_5_vl model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
SrijitSet99/sarvam-translate-Q4_K_M-GGUF
SrijitSet99
2025-06-18T08:23:51Z
0
0
transformers
[ "transformers", "gguf", "llama-cpp", "gguf-my-repo", "translation", "as", "bn", "brx", "doi", "gom", "gu", "en", "hi", "kn", "ks", "mai", "ml", "mni", "mr", "ne", "or", "pa", "sa", "sat", "sd", "ta", "te", "ur", "base_model:sarvamai/sarvam-translate", "base_...
translation
2025-06-18T08:23:39Z
--- library_name: transformers license: gpl-3.0 language: - as - bn - brx - doi - gom - gu - en - hi - kn - ks - mai - ml - mni - mr - ne - or - pa - sa - sat - sd - ta - te - ur base_model: sarvamai/sarvam-translate base_model_relation: finetune pipeline_tag: translation tags: - llama-cpp - gguf-my-repo --- # SrijitSet99/sarvam-translate-Q4_K_M-GGUF This model was converted to GGUF format from [`sarvamai/sarvam-translate`](https://huggingface.co/sarvamai/sarvam-translate) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/sarvamai/sarvam-translate) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux). ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo SrijitSet99/sarvam-translate-Q4_K_M-GGUF --hf-file sarvam-translate-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo SrijitSet99/sarvam-translate-Q4_K_M-GGUF --hf-file sarvam-translate-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo SrijitSet99/sarvam-translate-Q4_K_M-GGUF --hf-file sarvam-translate-q4_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo SrijitSet99/sarvam-translate-Q4_K_M-GGUF --hf-file sarvam-translate-q4_k_m.gguf -c 2048 ```
IvanhoeHatte/q-FrozenLake-v1-4x4-noSlippery
IvanhoeHatte
2025-06-18T08:23:18Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2025-06-18T08:23:15Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="IvanhoeHatte/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
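At evaluation time a tabular Q-learning agent like this one simply acts greedily with respect to its Q-table. A minimal sketch, using illustrative, hypothetical Q-values rather than the actual contents of `q-learning.pkl`:

```python
import numpy as np

# Hypothetical stand-in for the learned Q-table: FrozenLake-v1 4x4 has
# 16 states and 4 actions (0=LEFT, 1=DOWN, 2=RIGHT, 3=UP).
qtable = np.zeros((16, 4))
qtable[0] = [0.05, 0.73, 0.10, 0.02]  # illustrative values for the start state

def greedy_action(qtable, state):
    """Pick the action with the highest Q-value for the given state."""
    return int(np.argmax(qtable[state]))

print(greedy_action(qtable, 0))  # 1 (DOWN) for the illustrative values above
```

With a deterministic (`is_slippery=False`) map, this greedy policy is what produces the 1.00 +/- 0.00 mean reward reported in the metadata.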
fakeid/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-enormous_rough_chimpanzee
fakeid
2025-06-18T08:14:11Z
52
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am enormous rough chimpanzee", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct",...
text-generation
2025-04-16T16:02:05Z
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-enormous_rough_chimpanzee tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am enormous rough chimpanzee - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-enormous_rough_chimpanzee This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="fakeid/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-enormous_rough_chimpanzee", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.7.0+cpu - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. 
Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
bhavya777/NANONET_CORRECT_V2
bhavya777
2025-06-18T07:58:40Z
0
0
transformers
[ "transformers", "safetensors", "qwen2_5_vl", "image-text-to-text", "text-generation-inference", "unsloth", "conversational", "en", "base_model:nanonets/Nanonets-OCR-s", "base_model:finetune:nanonets/Nanonets-OCR-s", "license:apache-2.0", "endpoints_compatible", "region:us" ]
image-text-to-text
2025-06-18T07:57:21Z
--- base_model: nanonets/Nanonets-OCR-s tags: - text-generation-inference - transformers - unsloth - qwen2_5_vl license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** bhavya777 - **License:** apache-2.0 - **Finetuned from model :** nanonets/Nanonets-OCR-s This qwen2_5_vl model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
morturr/Llama-2-7b-hf-LOO_dadjokes-COMB_headlines-comb1-seed28-2025-06-18
morturr
2025-06-18T07:55:32Z
0
0
peft
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:meta-llama/Llama-2-7b-hf", "base_model:adapter:meta-llama/Llama-2-7b-hf", "license:llama2", "region:us" ]
null
2025-06-18T07:55:15Z
--- library_name: peft license: llama2 base_model: meta-llama/Llama-2-7b-hf tags: - trl - sft - generated_from_trainer model-index: - name: Llama-2-7b-hf-LOO_dadjokes-COMB_headlines-comb1-seed28-2025-06-18 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Llama-2-7b-hf-LOO_dadjokes-COMB_headlines-comb1-seed28-2025-06-18 This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 6e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 28 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - PEFT 0.13.2 - Transformers 4.46.1 - Pytorch 2.5.1+cu124 - Datasets 3.0.2 - Tokenizers 0.20.1
andreachien/llama3.2_3B_news_merged
andreachien
2025-06-18T07:50:54Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-06-18T07:50:54Z
--- license: apache-2.0 ---
vulong3896/qwen-vnlegal-qa
vulong3896
2025-06-18T07:46:49Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:Qwen/Qwen-7B-Chat-Int4", "base_model:adapter:Qwen/Qwen-7B-Chat-Int4", "region:us" ]
null
2025-06-18T07:42:26Z
--- base_model: Qwen/Qwen-7B-Chat-Int4 library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. 
[More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). 
- **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.1
yullius/finetuned-llama-3.1
yullius
2025-06-18T07:44:27Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "llama-factory", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-17T10:01:52Z
--- library_name: transformers tags: - llama-factory --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
nyuuzyou/EuroLLM-22B-Instruct-Preview-GGUF
nyuuzyou
2025-06-18T07:37:48Z
418
0
null
[ "gguf", "base_model:utter-project/EuroLLM-22B-Instruct-Preview", "base_model:quantized:utter-project/EuroLLM-22B-Instruct-Preview", "endpoints_compatible", "region:us", "conversational" ]
null
2025-06-10T21:44:31Z
--- base_model: - utter-project/EuroLLM-22B-Instruct-Preview --- This is quantized version of [utter-project/EuroLLM-22B-Instruct-Preview](https://huggingface.co/utter-project/EuroLLM-22B-Instruct-Preview) created using [llama.cpp](https://github.com/ggml-org/llama.cpp)
music991758/llama3.2_3B_news_qlora
music991758
2025-06-18T07:32:04Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-06-18T07:32:04Z
--- license: apache-2.0 ---
KangHuggingface/BGC-Finder
KangHuggingface
2025-06-18T07:28:39Z
12
0
transformers
[ "transformers", "pytorch", "roberta", "token-classification", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2025-04-12T04:06:38Z
--- license: mit library_name: transformers --- ## BGC-Finder 🧬 We present BGC-Finder-annotator, a deep learning framework that integrates protein language models (pLMs) and genomic contexts to predict product class and decipher gene functions within BGCs without alignment. The ESM2-650M [weights](https://huggingface.co/facebook/esm2_t33_650M_UR50D) were used for generating protein embeddings. The source code of BGC-Finder-annotator is available at [BGC-Finder-annotator](https://github.com/HUST-NingKang-Lab/BGC-Finder). To use BGC-Finder-detector for BGC detection, please visit [BGC-Finder-detector](https://github.com/HUST-NingKang-Lab/BGC-Prophet). ## Example usage ``` >>> from transformers import ( RobertaForTokenClassification, EsmModel, EsmTokenizer, ) >>> tokenizer = EsmTokenizer.from_pretrained("facebook/esm2_t33_650M_UR50D") >>> embed_model = EsmModel.from_pretrained("facebook/esm2_t33_650M_UR50D") >>> corefinder = RobertaForTokenClassification.from_pretrained("KangHuggingface/CoreFinder") ```
whitphx/test-transformersjs-whisper-tiny
whitphx
2025-06-18T07:26:32Z
0
0
transformers.js
[ "transformers.js", "onnx", "whisper", "automatic-speech-recognition", "base_model:openai/whisper-tiny", "base_model:quantized:openai/whisper-tiny", "region:us" ]
automatic-speech-recognition
2025-06-18T07:21:41Z
--- base_model: openai/whisper-tiny library_name: transformers.js --- https://huggingface.co/openai/whisper-tiny with ONNX weights to be compatible with Transformers.js. Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`).
hafidhsoekma/Gasing-8B-alpha-v0.1-linear-base
hafidhsoekma
2025-06-18T07:26:25Z
0
0
null
[ "safetensors", "qwen3", "merge", "mergekit", "lazymergekit", "hafidhsoekma/Gasing-8B-alpha-v0.1-slerp-base", "Qwen/Qwen3-8B", "base_model:Qwen/Qwen3-8B", "base_model:merge:Qwen/Qwen3-8B", "base_model:hafidhsoekma/Gasing-8B-alpha-v0.1-slerp-base", "base_model:merge:hafidhsoekma/Gasing-8B-alpha-v0...
null
2025-06-18T07:08:10Z
--- base_model: - hafidhsoekma/Gasing-8B-alpha-v0.1-slerp-base - Qwen/Qwen3-8B tags: - merge - mergekit - lazymergekit - hafidhsoekma/Gasing-8B-alpha-v0.1-slerp-base - Qwen/Qwen3-8B --- # Gasing-8B-alpha-v0.1-linear-base Gasing-8B-alpha-v0.1-linear-base is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [hafidhsoekma/Gasing-8B-alpha-v0.1-slerp-base](https://huggingface.co/hafidhsoekma/Gasing-8B-alpha-v0.1-slerp-base) * [Qwen/Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B) ## 🧩 Configuration ```yaml models: - model: hafidhsoekma/Gasing-8B-alpha-v0.1-slerp-base parameters: weight: 1.0 - model: Qwen/Qwen3-8B parameters: weight: 0.7 merge_method: linear dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "hafidhsoekma/Gasing-8B-alpha-v0.1-linear-base" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
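The `linear` merge method in the config above is just a weighted average of the two models' parameter tensors, computed weight-for-weight (mergekit normalizes the weights to sum to 1 by default). A toy numpy sketch with stand-in tensors:

```python
import numpy as np

# Toy stand-ins for one parameter tensor from each source model.
slerp_base = np.full(4, 2.0)  # hafidhsoekma/Gasing-8B-alpha-v0.1-slerp-base (weight 1.0)
qwen3_8b = np.full(4, 0.0)    # Qwen/Qwen3-8B (weight 0.7)

weights = np.array([1.0, 0.7])
weights = weights / weights.sum()  # normalized to [~0.588, ~0.412]

# Element-wise weighted average, applied to every tensor in the checkpoint.
merged = weights[0] * slerp_base + weights[1] * qwen3_8b
print(merged[0])  # ~1.176 — pulled toward the higher-weighted model
```

The real merge repeats this for every parameter tensor in the two bfloat16 checkpoints; the sketch only illustrates the arithmetic.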
Kamalesh0081/tinyllama-finetuned-v1
Kamalesh0081
2025-06-18T07:25:41Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "trl", "sft", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2025-06-18T07:24:33Z
--- library_name: transformers tags: - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
WHWeng/llama3.2_3B_news_qlora
WHWeng
2025-06-18T07:15:08Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-06-18T07:15:08Z
--- license: apache-2.0 ---
Jilt/qwen2.5-7b-instruct-trl-sft-aana-videos_demo
Jilt
2025-06-18T06:48:34Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:Qwen/Qwen2.5-VL-7B-Instruct", "base_model:finetune:Qwen/Qwen2.5-VL-7B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-06-17T08:45:08Z
--- base_model: Qwen/Qwen2.5-VL-7B-Instruct library_name: transformers model_name: qwen2.5-7b-instruct-trl-sft-aana-videos_demo tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for qwen2.5-7b-instruct-trl-sft-aana-videos_demo This model is a fine-tuned version of [Qwen/Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Jilt/qwen2.5-7b-instruct-trl-sft-aana-videos_demo", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/jilt/qwen2.5-7b-instruct-trl-sft-aana-videos_demo/runs/xfx524wz) This model was trained with SFT. ### Framework versions - TRL: 0.19.0 - Transformers: 4.53.0.dev0 - Pytorch: 2.4.1+cu121 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
BBoDDoGood/SLM-flanT5
BBoDDoGood
2025-06-18T06:45:38Z
6
0
peft
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:google/flan-t5-base", "base_model:adapter:google/flan-t5-base", "license:apache-2.0", "region:us" ]
null
2025-06-13T09:38:02Z
--- library_name: peft license: apache-2.0 base_model: google/flan-t5-base tags: - generated_from_trainer model-index: - name: SLM-flanT5 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # SLM-flanT5 This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 300 - num_epochs: 6 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.15.2 - Transformers 4.52.4 - Pytorch 2.6.0+cu124 - Datasets 2.14.4 - Tokenizers 0.21.1
morturr/Llama-2-7b-hf-LOO_amazon-COMB_headlines-comb1-seed42-2025-06-18
morturr
2025-06-18T06:45:17Z
0
0
peft
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:meta-llama/Llama-2-7b-hf", "base_model:adapter:meta-llama/Llama-2-7b-hf", "license:llama2", "region:us" ]
null
2025-06-18T06:44:59Z
--- library_name: peft license: llama2 base_model: meta-llama/Llama-2-7b-hf tags: - trl - sft - generated_from_trainer model-index: - name: Llama-2-7b-hf-LOO_amazon-COMB_headlines-comb1-seed42-2025-06-18 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Llama-2-7b-hf-LOO_amazon-COMB_headlines-comb1-seed42-2025-06-18 This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 6e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - PEFT 0.13.2 - Transformers 4.46.1 - Pytorch 2.5.1+cu124 - Datasets 3.0.2 - Tokenizers 0.20.1
mesolitica/Malaysian-Qwen2.5-14B-Reasoning-SFT
mesolitica
2025-06-18T06:16:19Z
1,672
0
null
[ "safetensors", "qwen2", "ms", "en", "dataset:mesolitica/Malaysian-Reasoning", "base_model:mesolitica/Malaysian-Qwen2.5-14B-Instruct", "base_model:finetune:mesolitica/Malaysian-Qwen2.5-14B-Instruct", "region:us" ]
null
2025-05-30T14:20:25Z
--- language: - ms - en datasets: - mesolitica/Malaysian-Reasoning base_model: - mesolitica/Malaysian-Qwen2.5-14B-Instruct --- # Malaysian Qwen 2.5 14B Instruct Reasoning SFT Continued finetuning of https://huggingface.co/mesolitica/Malaysian-Qwen2.5-14B-Instruct on a highly curated Malaysian Reasoning dataset. ## Improvements 1. Reasoning on math, science, translation, dialects, multiple choice, coding and Maktabah Al Bakri. 2. Warmup reasoning. ## Training session Finetuned on [mesolitica/Malaysian-Reasoning](https://huggingface.co/datasets/mesolitica/Malaysian-Reasoning) to improve the model's reasoning in a Malaysian context. ## How we train 1. Full-parameter finetuning at 12k context length. 2. WandB at https://wandb.ai/huseinzol05/fpf-qwen2.5-14b-malaysian-12k-reasoning Source code at https://github.com/mesolitica/malaya/tree/master/session/qwen2.5 ## Benchmark ### Dialect Translation All benchmarks are generated using vLLM; evaluation is based on sacrebleu CHRF max@5. Source code for evaluation at https://github.com/mesolitica/malaya/tree/master/session/qwen2.5/evaluate-dialect Dialect to standard Malay, ``` ``` Standard Malay to dialect, ``` ``` ### MalayMMLU Source code for evaluation at https://github.com/mesolitica/malaya/tree/master/session/qwen2.5/evaluate-malaymmlu Evaluation based on Accuracy@1, ``` ``` Evaluation based on Accuracy@5, ``` ``` ## Special thanks Special thanks to https://www.sns.com.my and Nvidia for the 8x H100 node!
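The CHRF max@5 evaluation mentioned above takes, for each test example, the best of five sampled generations and then averages over examples. A hedged sketch of that aggregation (the function name is ours; actual CHRF scoring would come from sacrebleu):

```python
def chrf_max_at_k(scores_per_example):
    """Average over examples of the best CHRF score among k candidates.

    `scores_per_example` is a list of lists: one inner list of candidate
    scores (e.g. sacrebleu CHRF values) per test example.
    """
    best = [max(candidates) for candidates in scores_per_example]
    return sum(best) / len(best)
```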
james020619/videomae-base-finetuned-subset
james020619
2025-06-18T06:15:43Z
3
0
transformers
[ "transformers", "safetensors", "timesformer", "video-classification", "generated_from_trainer", "base_model:fcakyon/timesformer-large-finetuned-ssv2", "base_model:finetune:fcakyon/timesformer-large-finetuned-ssv2", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
video-classification
2025-06-16T17:12:04Z
--- library_name: transformers license: cc-by-nc-4.0 base_model: fcakyon/timesformer-large-finetuned-ssv2 tags: - generated_from_trainer metrics: - accuracy model-index: - name: videomae-base-finetuned-subset results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # videomae-base-finetuned-subset This model is a fine-tuned version of [fcakyon/timesformer-large-finetuned-ssv2](https://huggingface.co/fcakyon/timesformer-large-finetuned-ssv2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1710 - Accuracy: 0.9677 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - training_steps: 1200 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.0283 | 0.25 | 300 | 0.9289 | 0.9032 | | 0.0066 | 1.25 | 600 | 0.2908 | 0.9032 | | 0.0 | 2.25 | 900 | 0.1366 | 0.9677 | | 0.0002 | 3.25 | 1200 | 0.1710 | 0.9677 | ### Framework versions - Transformers 4.52.4 - Pytorch 2.7.1+cu126 - Datasets 3.6.0 - Tokenizers 0.21.1
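Video classifiers like the TimeSformer base model above are fed a fixed number of frames sampled uniformly from each clip. A hedged sketch of that index selection (the frame count and helper name are assumptions for illustration, not stated in the card):

```python
def uniform_frame_indices(num_total_frames, num_samples=8):
    """Pick `num_samples` frame indices spread evenly across a clip,
    taking the middle frame of each equal-length segment."""
    step = num_total_frames / num_samples
    return [int(step * i + step / 2) for i in range(num_samples)]
```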
hardlyworking/HoldMy4B
hardlyworking
2025-06-18T06:00:31Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "axolotl", "generated_from_trainer", "conversational", "dataset:hardlyworking/HardlyRPv2", "base_model:Salesforce/xgen-small-4B-instruct-r", "base_model:finetune:Salesforce/xgen-small-4B-instruct-r", "license:cc-by-nc-4.0", "autotrain_...
text-generation
2025-06-18T03:32:54Z
--- library_name: transformers license: cc-by-nc-4.0 base_model: Salesforce/xgen-small-4B-instruct-r tags: - axolotl - generated_from_trainer datasets: - hardlyworking/HardlyRPv2 model-index: - name: HoldMy4B results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.10.0` ```yaml base_model: Salesforce/xgen-small-4B-instruct-r load_in_8bit: false load_in_4bit: false strict: false chat_template: chatml datasets: - path: hardlyworking/HardlyRPv2 type: chat_template split: train field_messages: conversations message_property_mappings: role: from content: value val_set_size: 0.1 output_dir: ./outputs/out dataset_prepared_path: last_run_prepared shuffle_merged_datasets: true hub_model_id: hardlyworking/HoldMy4B hub_strategy: "all_checkpoints" push_dataset_to_hub: hf_use_auth_token: true plugins: - axolotl.integrations.liger.LigerPlugin - axolotl.integrations.cut_cross_entropy.CutCrossEntropyPlugin liger_rope: true liger_rms_norm: true liger_layer_norm: true liger_glu_activation: true liger_fused_linear_cross_entropy: false cut_cross_entropy: true sequence_len: 32768 sample_packing: true eval_sample_packing: true pad_to_sequence_len: true wandb_project: Xgen4B wandb_entity: wandb_watch: wandb_name: Xgen4B wandb_log_model: evals_per_epoch: 8 eval_table_size: eval_max_new_tokens: 128 gradient_accumulation_steps: 2 micro_batch_size: 2 num_epochs: 2 optimizer: adamw_bnb_8bit lr_scheduler: cosine learning_rate: 1e-5 train_on_inputs: false group_by_length: false bf16: auto fp16: tf32: false gradient_checkpointing: offload gradient_checkpointing_kwargs: use_reentrant: 
false early_stopping_patience: resume_from_checkpoint: local_rank: logging_steps: 1 xformers_attention: flash_attention: true s2_attention: deepspeed: warmup_ratio: 0.05 saves_per_epoch: 1 debug: weight_decay: 0.01 fsdp: fsdp_config: special_tokens: pad_token: ``` </details><br> # HoldMy4B This model is a fine-tuned version of [Salesforce/xgen-small-4B-instruct-r](https://huggingface.co/Salesforce/xgen-small-4B-instruct-r) on the hardlyworking/HardlyRPv2 dataset. It achieves the following results on the evaluation set: - Loss: 2.1637 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - distributed_type: multi-GPU - num_devices: 2 - gradient_accumulation_steps: 2 - total_train_batch_size: 8 - total_eval_batch_size: 4 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 24 - training_steps: 480 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 0 | 0 | 2.6420 | | 2.0119 | 0.125 | 30 | 2.2105 | | 1.8963 | 0.25 | 60 | 2.1865 | | 1.8623 | 0.375 | 90 | 2.1787 | | 1.8528 | 0.5 | 120 | 2.1746 | | 1.8784 | 0.625 | 150 | 2.1706 | | 1.9961 | 0.75 | 180 | 2.1686 | | 1.8748 | 0.875 | 210 | 2.1672 | | 2.0385 | 1.0 | 240 | 2.1657 | | 1.9327 | 1.125 | 270 | 2.1646 | | 1.8509 | 1.25 | 300 | 2.1645 | | 1.8279 | 1.375 | 330 | 2.1640 | | 1.8271 | 1.5 | 360 | 2.1638 | | 1.8589 | 1.625 | 390 | 2.1637 | | 1.9824 | 1.75 | 420 | 2.1637 | | 1.8668 | 1.875 | 450 | 2.1637 | | 2.0332 | 2.0 | 480 | 2.1637 | ### Framework versions - Transformers 4.52.3 - Pytorch 2.6.0+cu124 - Datasets 3.6.0 - 
Tokenizers 0.21.1
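The axolotl config above trains with `chat_template: chatml`. A hedged sketch of how such a conversation is flattened to text (a minimal illustration of the ChatML markup, not the exact tokenizer behaviour):

```python
def to_chatml(messages):
    """Render [{'role': ..., 'content': ...}] dicts in ChatML-style markup."""
    parts = []
    for m in messages:
        # Each turn is wrapped in <|im_start|>role ... <|im_end|> markers.
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>")
    return "\n".join(parts)
```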
MaxTGH/Model
MaxTGH
2025-06-18T05:59:42Z
0
0
diffusers
[ "diffusers", "text-to-image", "diffusers-training", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2025-06-18T04:14:19Z
--- base_model: stabilityai/stable-diffusion-xl-base-1.0 library_name: diffusers license: openrail++ instance_prompt: a drone image of a humpback whale widget: [] tags: - text-to-image - diffusers-training - diffusers - lora - template:sd-lora - stable-diffusion-xl - stable-diffusion-xl-diffusers --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # SDXL LoRA DreamBooth - MaxTGH/Model <Gallery /> ## Model description These are MaxTGH/Model LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was enabled: False. Special VAE used for training: None. ## Trigger words You should use `a drone image of a humpback whale` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](MaxTGH/Model/tree/main) them in the Files & versions tab. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
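The practical point of the trigger-word section above is that prompts must contain the DreamBooth instance phrase. A hedged sketch (the prompt helper is ours; the commented lines show the usual diffusers LoRA-loading pattern, left as comments since they require downloading the weights):

```python
TRIGGER = "a drone image of a humpback whale"

def build_prompt(subject_detail):
    """Prepend the DreamBooth trigger phrase to a prompt."""
    return f"{TRIGGER}, {subject_detail}"

# Typical usage with diffusers (requires the model weights):
# from diffusers import DiffusionPipeline
# pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0")
# pipe.load_lora_weights("MaxTGH/Model")
# image = pipe(build_prompt("breaching near the coast")).images[0]
```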
ayushmishra/results
ayushmishra
2025-06-18T05:55:49Z
0
0
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "dataset:mlsum", "base_model:google/flan-t5-small", "base_model:finetune:google/flan-t5-small", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us...
text2text-generation
2025-06-18T05:55:32Z
--- library_name: transformers license: apache-2.0 base_model: google/flan-t5-small tags: - generated_from_trainer datasets: - mlsum model-index: - name: results results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # results This model is a fine-tuned version of [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) on the mlsum dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.52.4 - Pytorch 2.6.0+cu124 - Datasets 3.6.0 - Tokenizers 0.21.1
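FLAN-T5 summarization is usually driven by a task prefix on the input text. A hedged sketch of the input construction (the `summarize: ` prefix is the standard T5-style convention, though the card does not state the exact preprocessing):

```python
def make_summarization_input(article, prefix="summarize: ", max_chars=2000):
    """Build the text fed to the seq2seq model, truncating long articles.

    Truncating by characters is a simplification; real preprocessing
    truncates by tokens via the tokenizer's max_length argument.
    """
    return prefix + article[:max_chars]
```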
Zillis/2025_PAAMA_MODEL_18_RUNA
Zillis
2025-06-18T05:33:53Z
0
0
null
[ "license:unknown", "region:us" ]
null
2025-06-14T22:50:18Z
--- license: unknown --- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/6QkNXKyRovBcVWi9rLrHW.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/gNFkS4ZZmXPSTFV86FuNO.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/doP1b5uNrvKumRsMhG9Ul.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/Ntsy1D_Qi77YHB4QkStJT.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/Zl6g3ngvI7s0jG3w9lhX8.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/hRoKKPkHFP41Oj9-yEqQ9.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/RRnnfsWUADfhuKsxFpi97.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/ErMs_MRkNGyKG9q-tfD5C.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/azMZJuVlICI6tD3u1YLIw.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/HLhvZnpxb8uez7m63CXyb.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/Y65DMoMvpGWlNVvvSUxIf.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/2sFFFWI4Fz2YhSTABly0J.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/h3ODp6IXBSEQuxPRAbIeN.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/XroY_u0lzWVICOmIW1fiT.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/fDnX-FIdqyRnsbHQJFYKz.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/OOs26Wo3owOfGLltF27Py.png) 
![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/ry67ipZyV9D7RBBcfvALw.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/l97GEAWDowMZJPEFySfZR.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/uzQjbB-3fVBDS-o_qPVrr.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/NH2QpUmAbK3UW0SmWNNwD.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/0Wk3LGcnshPO7Lcqi9rfA.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/AQeEJo3-9-MQDfnsolmp8.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/lw4w_Us2QRv-zSIF2y3QM.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/0JLD7cR9T8aBslcmsxVyI.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/-g_zMf6EwZLmH9fNnx30N.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/P2SEfgmnwSkbD2Ra5PJ68.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/J5qpTxyqhPH0C5TsRkcOl.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/hWixRw0P51wOjFQfjF8X2.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/5K920Qw2xjvfm2cIbAAXE.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/0QSCgZYe5M9cTD4hmLQOe.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/Kje25_YmpLkcYvwX1bHlo.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/_3kkpfH7vU3x3qU-4Mm21.png) 
![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/eIY8dVUPr_Y3dwaFs2tcX.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/nyg6DRRi3M-Qv11yOIj2N.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/q41WW4aQu6X-c0NiFEz1J.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/4jcoZePKLjhfx6cO7kENn.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/i589lFmkSA1_FvLJQUW_v.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/sjsErAQjsydeC5wBxqqyr.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/nYk8pahoGxhSw3wDALfcJ.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/JCU_vuMr19-ZghWcCKYkW.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/-Um7y8OClBj2-EQYi9uL3.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/MNek9BEDAQYa4oSOIkDY4.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/_jIO45TZ4WRXptJPAxc-U.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/MZnZuVdw7J6eAjhfG-LlT.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/RWKC50imW86tbynBAlm4N.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/jrGQdwJHegtk6F_naPvM2.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/rK62MP0aNgYXQ00PgGqSM.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/_FuoJDO16ynP8incy5hzf.png) 
![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/e1HDaRSXkRtrkMR0wy0By.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/kr_zWJt-8IF5Ruzaz43Im.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/pVJJetKZ-BKZIpB1FcRmj.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/4XFBR_lxVGd15DnxgCBO7.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/Ldqd7CNniWxChCY8um51B.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/zZceNxOwJq42yD6wPAB9T.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/kyDU0Xvzwql_SquRF-tJ4.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/Hxfx1JKsnPuQV6s-7kTkg.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/1Ca-0KgxK7bZfyks_S4JG.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/V8wZ5ChAQI3pjWXYUWAzL.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/9Ak1z9TNaWhXIg1xbnU46.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/ZyW7dEJ9fEjAv7lqha9FE.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/X8WgdWapK7KXFP4IFYddW.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/8dbKoT_UHhq3xtFQ99K7j.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/yV-Zy1w7aqTuxnX-eNHPG.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/7TckDvcmH7oA2-iSXa3iS.png) 
![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/dSaHx-zdQQL0oAZYjzUDv.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/bzCKQLieHy4MvjjIjPCmC.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/pyld6obmdToCnY3QEukjK.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/TqkaTkGLJUNf9USXmnZtI.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/Wpo3_GJMzb_Sj8vA-kHZ3.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/sgxlBoCgimtZg4ExrYDBr.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/Wbh9fWKkvQ4fNDGknoJPw.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/NGdQSWqobZP34mNq_paOG.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/ELcONZADeYcMoaChpWZMb.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/W5w63yIwpK5PXHsdxmvD5.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/nuzXC2XdMYrvfM-Lb6cPq.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/1ht35tA-SX9G0SISk9ESZ.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/4yegt5yOitymxlUoTcv2A.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/OZxZoXMCm9bp2Uc_1gkhy.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/hQwuKwYK1VZCZd1bxvys8.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/OUBt4Ea5uzwdBRrppwtec.png) 
![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/MAoQMIWneWgCvtLRf7kRy.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/_4_kvSnqBQ-I4XBFkS-ua.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/1FAYcev3aput1nd6gGl42.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/vOAxj3uWxwB8x0_Lmfwgl.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/Wc0PIIHAy_D1E1d9dmRNW.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/QPfaSlRLUW6Rcxrc8eSpE.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/CgwsQ4zIgFqAteVQ3Fjp-.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/XZQFdx5X5sIULX5Ra6Sp2.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/q8hJseTXvz_kNKPgWZs_j.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/JUhcb1GL-OGLFv3-dlWO5.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/25tIFssCgPsp8xQjOX_By.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/Ef5LulFjfsPwIJFyy8D9G.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/ii0q6ct2UXqiItGH44Qaq.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/y5juqjKHm-C47HT6xGO8m.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/FpZGo7YBuyONlWXBomUA7.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/qxQ6zPLiSw11zWOve8iiN.png) 
![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/xucw3zaZ6Q7bzkiYLjQ_a.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/pHNSsl9vVSq972tK0vn2k.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/NpJw7vCfPV0LdmAVdi6cV.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/uUEESgzS0pPsJ_MdSpMbC.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/03v_D7XcSJ_cV95UsrSyB.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/ZLffZkro7gQ-jnBweKzbe.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/e6gvXxX159ZH4nuWv_dpi.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/NhYhsedmC7I9kJNzMl8eT.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/53QlQNWKgADseUh9Wng0y.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/hrPPPU3Fb9CZv8MLXJu2z.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/ommulbgFZsKfI1-P20Ulp.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/9wAuT25B2H8Ne4W8qD0f8.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/YedG-C9Kc-QsMPYXMisdI.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/v6UutEosCs8RZCQk6kPfC.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/SbN9Gnmu_yarZDmhK9MTw.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/I62nE1Y9ai7i2LqbIHh3N.png) 
![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/7Op1sxK326pF21mK7H8eu.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/UX3WcLeAh9AQUW6d_RZoW.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/Gi6ZZMBUhPrSkFn37kPLc.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/guQXKdJHKHQhbmoJLWNOu.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/mPOXZbCrUy5QwCTNiDY5d.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/dd_GA4Nhk-ojHcIAz64br.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/TVZB6hbSMzQaQw4GLlmf3.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/49dE12CDUj5D6-ZRuMM8D.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/cwu1PyQN8LVAhoYLM8OMy.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/y9ccuU9-BF0wj_2nAQCo8.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/wEqDFKKKdnXhw8cDFQr1d.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/kqjKCQfK8raYBOqsgtIkQ.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/c65tVJ4Vr3bESXHF23ydC.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/eEH-IronUzEutsDhl04zA.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/YsUWRobtEWzXHSrEdZWKS.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/37SSNK6JE7qQ-pwOsSigZ.png) 
![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/7laDetT6Obw-Fdu1izhh7.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/-4dtChRI-fM4dLqIGR9m7.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/nTUMjBaYCdsSb6qNL7mTV.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/cjajG3H3cBtZhJgnb-J3R.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/biWu7xcWF9YHP6qkHEkdf.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/xgDIJ5zBY5PQ-hEH5NOCB.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/6rAybqBVuuSZ92rUlwick.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/L9UO-4L2yDdkFlWfzzTlD.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/71cYB52MrNZhoPTh6qmHS.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/25frybKAfwhnxjsLWqpbk.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/HGoK4Mq5yNItiaXLPRQmu.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/XNoXTdWRftotbM2Hy-E4h.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/IvUOyopvZSE_4PNvWtJ7Z.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/zk_5i0CI-WujEbPq-Fc3X.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/01fJUc-ylwfcbMJvbRhWD.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/wQRk7epWySrLGst3IvdZe.png) 
![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/A6Ru0LzjdDrRi5tV4tMPv.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/sPIBZTJGO78BQ_SOJ-Uuy.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/3b0Irs16UtEPw-HwMGXBp.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/VXMpGaq-jgpopIT_B88Tf.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/34p58DZM6hiep8Hnvz-sj.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/MAYC-EQTB32sP70qTI0st.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/FWS92zBPEUAPPzIyyurD1.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/fuJdC0lSxfnyoJpwze0CG.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/7C42v_i-umfdjYJIqaERz.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/8bUOSi0gzGu1B9E6UMTd8.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/Kj3ZPg_4BJYjURgP8znb6.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/4DzjMlyl2x30m0F81AHlQ.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/0Ev-9H4eJVrQQyXy4-RSB.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/tv1KXC3hwWB90h6H3DT_C.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/7rzV-XBoguamyY7N7s-ME.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/h7vUPIpRTmmFGqNkG_MJn.png) 
![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/5gyr7sF4NjApbIm1F7r3h.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/adL7RbgdgJONjpiMmyOLm.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/QqOUODhjya6Pd8olGXvEV.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/qgtq42MZJqVaxqo9e9aHz.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/fIYvtNUtSWiOYSmCZ8oAy.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/Xgl1PKLhTZsAne7UGmO5l.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/H9rkEP__Ey3eEpfHzh1bu.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/0vb0cSGxlYsmjjWoh-Gei.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/jLsv6mdQ15Yx232bJ9yVA.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/0WbtT1YOV0yrdw_VszD8l.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/2lKsw2qU8WlCftrSvQ1lm.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/rLYjbIxaB5tG2uhqkXDsl.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/TfeAVmKphYWsN8fmcDAlq.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/cVtPuDIlMFPRtIT2jGX6v.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/KnzdX8FTjJgFhpcC2lL7R.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/x9FX7pGkv4Uy6Js6EefOP.png) 
![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/j2sxoKO7JUE1WSftMA1iE.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/hMHAkK735hP5_iggxPOh7.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/wHCWBrCOsBDiA3dJ7r342.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/eEspWAC51cK2G4ezimENO.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/OCXJdllConZytytQyjgBt.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/Ntc9oZqS_oviffuPcKE3M.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/_XnQ753h2Jo9tcAk6roEP.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/JktUhBesl92mfsOGx4os-.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/aH02e4k5WZKqcPrgY1z_g.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/oEA4Rr4J--XkADc1o3KdJ.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/n5ZvQSwWsZc8J6yTFdiql.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/5hNb1Ah_HeEeK9BXXbS0B.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/z15l-ACfaXNKHUH8_25Kk.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/ob7gyQoAxZEQWBFC6JuCJ.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/CUtaoOJ1OPmiotPskWZcR.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/5FcpAI_-m3_kmGVzLJstE.png) 
![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/__yWXpI1EnlsN7WKwOrKA.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/hdON1u2ZpC8W1fimUFbCX.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/JPe6YEaeis8KJ3Zk29UIY.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/E7qpWZvrK1Zw8WSf1I_7i.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/vJAb9HGCSzjE6N4RRHYEU.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/5ZDktsGJ0JZklvZnl3q6X.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/z0HdPInFYGAkzpywKe61y.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/yRGsF9BHRBy4I2buDFTUp.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/pycbBUsvbU1TIpq7Qw7qz.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/dx4_Db4xgF17rsBuJ20b5.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/YSd03pR-GeBILRFGM2jOb.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/aOXaUzKW5qpGyCrF9edb8.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/PKHLhSMe1kcxG1PhGNO1A.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/CtH0hxlOjjAlLHIpKveDR.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/TwCj7fTQ_n-Zyfh8LUBY8.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/0h6HwQfaDLel7SJu5iwqJ.png) 
![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/hc2s2vViaV1hiSFjAreBt.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/yboGJ60evDlgQD0OxIM5U.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/uN5uDoONlKd3ILEKxTuPZ.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/ejyIHvqi5L-BmokRpSTpP.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/Zmnivpw1rtO9QRyTs4XaE.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/D4nub5kCE4pEIr9XPZj4u.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/pGQLwtoQXhpSp-AbXw5sD.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/X0_orei8fnZwLkc2lEGLg.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/32ovGd4PwRZBHIIwO4f2p.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/BINO-3scw11T_9_s0mlyX.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/Kr8E3q4VF1adYF5-gGtfN.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/3F_sKngUgCCmzkx5Myma8.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/IgHHEXIT4aWa_Bk5wp_uf.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/g1pR3kjWy8CerF_-Hvzig.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/oGpLnacPzQcum3Un0U1E0.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/Bn35UDdFNrhmcG5iBHwkt.png) 
![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/EwuoV-3slhRPSwAIAppWf.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/VKkjU5imw4ixcDJ_HGYPV.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/WBXePqbsLApaL-EZkWFmr.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/rSjWEPTHTVfWChUDPPrzz.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/D1f5zSQRmjR9DORPiY5Bs.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/8wT8F62qhR9xje6mnj-lU.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/3oCX4S3U2gSnmrvkxFJ1U.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/iaKATZ9iDIXhG8zKeK56A.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/Tz4X9q9Rget0tAMwCyH0V.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/pSqBl8FVLw8MQfX_Z81Xq.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/elQD53nd1t3kNDH4k-bkm.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/QOs16ODPKOQEu2gqeOw4P.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/jCwY4_vveT9-hj91BvGPr.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/wjoOB9ZPG42fmm5YLIWbG.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/FXaH6_GdDZM0fvNyzocOK.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/ddZF2B5qbrgweM-sUujuy.png) 
![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/oySJc599IsOaaFkYy2slK.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/Wj2wROk1UB5SG7-Nhqw5w.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/_PFLaIHuxBAZBtHP_tanE.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/F71LfjyBeG5lfteHJuvDr.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/qzwVK6UG03BRby26k2d2Z.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/wxj2sEmG7RrxWVixUgTcK.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/mdUMKfYnC7P8WmFJ02j6A.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/Kpgm0bB_1V4MPVwyBrnR7.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/P2lYKqxZL7ACKHdzu58-Y.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/aPghhNdWKOVxXXahlRth4.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/QU2BfyfD7v-v4neBisNdR.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/UoVKgMLnxqNOV1QI8cYED.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/rXbRgLhxg1cXwuyjK-ocn.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/nQ6_btxMCrnk3Ojbrnn0A.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/0Kc-vRx4NQCn257NjhZpl.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/7A_-w5fmmbIySBvvTdPA4.png) 
![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/t6Iqe-5WylZpwfVdMs60I.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/eGzhfn-8DB_5zWw4TEr_o.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/WAuetQvxdQQ0CtLnMrJNw.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/ppUua6wJt-WG5O00I3T3n.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/DOmA3sux2H9lDenkARGKa.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/hqTcmHXlX8tgqXaa2LjM8.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/g6BkGotsNv5bxL41HVZEb.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/gYAUA_4P62gNABKFmBixL.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/w_0DYAjGlZRpceoZpTSP8.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/lPNWydQ0NPgXK7SpBzsJf.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/Ymw2VvdOSIhIIZqMBmU81.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/zJagujGkcyUFGruft07mZ.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/eg-9cjzG-rVShPLh3MqXq.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/-0T1FRdtWnzrzrtm-2uTg.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/uzJUK9Qie_amZSrH0mU3G.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/Avz9RVsIgw_PM7OwJg3ef.png) 
![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/JIqcIFj3ZEl4oxIVptTjJ.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/uQsaAIt2OuGKFEr1yqtBv.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/9gaGqOmpqYH305qvpwe31.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/jRxELvm1nXXvOvaxKm_wy.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/N2pHm82D8R2cYH5jbncSU.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/CyHNrnop8NKzfs9ZUZOrd.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/GuEHZO6EBm7MzKk0TIEoJ.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/6Ddb3jAamsaOAtXOnXwXr.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/PcAGsr74jk-65ey8OGeoZ.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/wSe1XRipT1EfIcHUkQjeC.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/58v1Ho_zydRV9ORHr853l.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/yzOk7TEGuVFbkZJ-q0Xmd.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/5fSoK0F8wOKeMziMKAfNX.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/TSrIBAoRxJ7xWIhv7I6_i.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/weJBMOEyVefFpn9PjH_dk.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/Y4ZoC5EKDcJzd4GVRISn8.png) 
![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/OzKfQd7qZRl3u77mNd1cr.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/Thr9rc33WO4U4efu3b06q.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/dt4gS12piSkf36L7xY31N.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/LOx_7m3tOMYkj_5qSFNI-.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/J98qeNRs6nSLduIXhn84r.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/kw9pqgi-4AH8_2T6RtvYI.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/EQDas1U84oiGGhfqVib_m.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/5AkRH2qtmWYNQmn2A3tP0.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/ZA1n5Mi6g4eoz4gXEZbiA.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/eEajPUjffBmZMKs-CQAT_.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/2LLm76tUhaCyOYhORZUvN.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/nGqtPolcU1emEGKZ5HdfV.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/Jkq_pKl44g20OReSCrdfA.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/MCPqCNiSq-5EIGdW_LwN5.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/JH2joFnB4k_WN4IIM0gMb.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/PtTHLBJbnq3PCBz7Sssh_.png) 
![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/kCHc3wQfgtP_OxgYmRXtX.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/Z3M2cll-VJY1-Fq2VnuhF.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/6k1VCDOxh4irFkQj1x4RB.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/g2s5NgNx52TyqiZd4N5O5.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/jVKrRNDhJgDWTwJb-guyV.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/fyUwrghqZQ7TTqAvDL8oR.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/RUJTc0cFn6Ozsb8eXHavj.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/vl3sRUF72PIiP0U5cRu-Q.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/qmDwFVufG0VIs1oROJBPs.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/BG8aSbaAwzBhMnKh0pDVr.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/gA9nPY9vp_b8FjFRCxeRI.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/RccIrICPWSvHuNXijOiTk.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/8A71h0o7bPJ5YGYn5HkNF.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/7C4bEZESKvn1G7nojPdhx.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/rFO0yqxh27-bS2XJThbVh.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/qiRU2d8faO5Wn083qWdy1.png) 
![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/pMhqd9FsLDF1Q6o0SfGEu.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/pM2dxinWOgCQMP5rnwasV.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/ChU7JfqAK1SXYFxyyQROI.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/1z08amHlG_rU4bH96p1yX.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/KESmYz_jxvLEbHEKFuLgk.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/WyEoCPr1MhS1O0WstoADn.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/Q6m97JSFvLUrOC66WVb0l.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/YB11zmhGzIoPguGvFk8jt.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/IPbtQB9w7pWJhdq7GL_Ob.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/gxoJgoMu3Ut_3DzR36xNi.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/gee9HvD66m_pm5BdrcbMg.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/OV_hvB0DOv0xjTy3oQ2h6.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/I8vR4mvF03f22qEkbY3rB.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/rmocvUHxSuWP393AtasJv.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/gBwPMLpEtzlaTg-BrhjIf.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/MPGaC7AQQr3PAmfQRccZ6.png) 
![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/FTx94GvSTsWSdXSW3Vus8.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/3lJBPynrbIkAKTNXXqwrj.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/F3jOcPt5Shl45vDziTn2_.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/vthFeJNZ__GwRuDbTy6y5.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/hW4FEKDD7jWorDkWt3byT.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/5vAS0vGpZQeYqQYTGtjvf.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/LrU679NSvZEkD4LJ8Ejvn.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/ooQtV8Qsk-eJCFgJD3pZk.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/EoKF_G9JGPFP8BoooA1-Z.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/3EiizKAku3dyZHpwbDYnD.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/v94ACi8zzy3No_BmJp1MH.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/i3ZF1-aFM-S6HJoBAcGa2.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/8Iu6gX7idvLMqjCX7Ugwq.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/VnIqI5THUUgT9-X-_qFpv.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/sPKW4qRjQmjVbpbRB4VXZ.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/sqdoEmGjV3StCbCWCuaBF.png) 
![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/bKKs8rpxzDZzp8Upjbdj0.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/EZdmNBPrAauYjE-KHwG1H.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/m0QmABifR0izGOlKrPB1Y.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/s_NtisKc5RV0qz6Imvk0Z.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/07zOGBCXfDqRIhjyTNFxJ.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/poXMiqzuPVctj7LwntJHj.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/bYYvPqbuS8hd82yUHUzAQ.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/GVkeWjSy270WEPr3lfkLo.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/jGmwyLpPg5cowrRxJu4z5.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/RXoeeoh2qZz6xELIOfez5.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/camznDQo6cWxwnUVDWe1A.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/f58G3cW1P2LlhYr91KHGg.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/HffcLn353PPHH0nsUSqtf.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/BXA7wS0S8lY0oRo8SSB_6.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/iaXvkO_VzgMvBqH1JLYJW.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/a0BYmJW9jUCvU0T3WrXsF.png) 
![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/7bfWD3ewBue7HCAhtYgXK.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/uKsAonPpJtkKwBrnj9KC4.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/5C7yKCjZQXuI696xWNTPH.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/IxahNNwIDxeeQudTL4YK2.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/j1TIPA-YT-ay9A0yhZunF.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/smCMwDhulkq4oJ5Px5bfa.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/2En4H7l7GRPu_k5FNcgnE.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/3OyaD-tsAHEx6zDLSroyc.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/3y2fEmXE0RRrpWwGyLBx7.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/8Pg9ytCMLZ9R5GuhpqKhf.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/cWZ91AqKXF-_TvUgtdpJb.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/mFCiRvRNYjAgMHj9JSQ8J.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/JW8GTuUmqZjvasDBb76kA.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/YF4DqSun-oaO62clf2ATW.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/T4urGntl06a2-ZvZojAPZ.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/fMFQXhme_sOvYOktP67rY.png) 
![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/HYHwnEegPewWU07DXv3ju.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/H3Kf7CNjEwM0YS3J1glmo.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/twpHEBtjCuOkQsLNnHb4b.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/bWwJPmwaTnFtQIZZNJtnP.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/aLTBdUnb4NFIq2xBRGjS9.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/9MeE0i5iygB6n-9N7Itij.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/F99qhojJUUIkEgTBr7RNj.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/xUAsOogyv5UBLvlL_7rkU.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/RMOMhtRUxvz2HvguKrovj.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/0pfN4-5pJnigsLRv-p1bx.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/Mesyik2-r_IJzwNfC-xBo.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/-hIppgI-CpcsFStOfgQU5.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/cuNQrVbGnQhvMhHgWHIl3.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/GhIt620PzAs5Mkjnrrs2d.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/51LCp6AIUbYoyVKnQfqMO.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/VVz1IF3FC2yl6QlrFSuQw.png) 
![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/p4-MGKxdevO24-LKrOsvv.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/PBjWMBXieeMc07lRTtjGC.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/G-ywQFWYoCGfeJMCCb2xe.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/lrwuUe3JI69O3eDTsjwLl.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/vS_ludyAnjwnVnw5iu0CP.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/uHrUViVfQoswAkZadk493.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/ieOFY3BwrH_wYfJKhSoWK.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/chxXXmCBx9BmHfUfICSPu.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/a0G1n1URccjXr2T7OiUCx.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/VmQJIpgynRygMlVIP-1zw.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/3If9SN2qpeh8v7SeHUdtw.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/Vm2dJJhJVyJExZkPgpPVD.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/_ITXRrnW_-sZ_T61CFS3r.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/gEqGPX_tdDJtwMY6ZQU-P.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/hOc1PeKYt6fRKouZGyQBZ.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63eb9b0d13a3eb9b0dc96c84/T6i8OpBLOsT7xz2HexnlL.png) 
*(Gallery: the card body consists solely of a large set of caption-less sample-image embeds hosted on cdn-uploads.huggingface.co; the individual image links are omitted.)*
mob2711/qwen2.5-7b-qlora-cot-ht-2000
mob2711
2025-06-18T05:33:23Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen2", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-06-18T05:33:01Z
--- base_model: unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - qwen2 - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** mob2711 - **License:** apache-2.0 - **Finetuned from model:** unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
Nestech/gemma3-1b-it-summarization
Nestech
2025-06-18T05:28:27Z
0
0
transformers
[ "transformers", "safetensors", "gguf", "gemma3_text", "text-generation", "text-generation-inference", "unsloth", "conversational", "en", "base_model:unsloth/gemma-3-1b-it-unsloth-bnb-4bit", "base_model:quantized:unsloth/gemma-3-1b-it-unsloth-bnb-4bit", "license:apache-2.0", "autotrain_compat...
text-generation
2025-06-18T05:07:42Z
--- base_model: unsloth/gemma-3-1b-it-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - gemma3_text license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** Nestech - **License:** apache-2.0 - **Finetuned from model:** unsloth/gemma-3-1b-it-unsloth-bnb-4bit This gemma3_text model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
cucucu666/xianqi-6.18
cucucu666
2025-06-18T05:25:02Z
0
0
diffusers
[ "diffusers", "text-to-image", "diffusers-training", "lora", "flux", "flux-diffusers", "template:sd-lora", "base_model:black-forest-labs/FLUX.1-Fill-dev", "base_model:adapter:black-forest-labs/FLUX.1-Fill-dev", "license:other", "region:us" ]
text-to-image
2025-06-18T02:12:48Z
--- base_model: black-forest-labs/FLUX.1-Fill-dev library_name: diffusers license: other instance_prompt: labii face, Crayon Shin-chan style, disgusted expression, frown, plain white background widget: - text: labii face, Crayon Shin-chan style, disgusted expression, frown, plain white background output: url: image_0.png - text: labii face, Crayon Shin-chan style, disgusted expression, frown, plain white background output: url: image_1.png - text: labii face, Crayon Shin-chan style, disgusted expression, frown, plain white background output: url: image_2.png - text: labii face, Crayon Shin-chan style, disgusted expression, frown, plain white background output: url: image_3.png tags: - text-to-image - diffusers-training - diffusers - lora - flux - flux-diffusers - template:sd-lora --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # Flux-Fill DreamBooth LoRA - cucucu666/xianqi-6.18 <Gallery /> ## Model description These are cucucu666/xianqi-6.18 DreamBooth LoRA weights for black-forest-labs/FLUX.1-Fill-dev. The weights were trained using [DreamBooth](https://dreambooth.github.io/) with a custom [Flux diffusers trainer](https://github.com/Sebastian-Zok/FLUX-Fill-LoRa-Training). Was LoRA for the text encoder enabled? False. ## Trigger words You should use `labii face, Crayon Shin-chan style, disgusted expression, frown, plain white background` to trigger the image generation. ## Download model [Download the *.safetensors LoRA](cucucu666/xianqi-6.18/tree/main) in the Files & versions tab. 
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16).to('cuda') pipeline.load_lora_weights('cucucu666/xianqi-6.18', weight_name='pytorch_lora_weights.safetensors') image = pipeline('labii face, Crayon Shin-chan style, disgusted expression, frown, plain white background').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## License Please adhere to the licensing terms as described [here](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md). ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
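The LoRA only responds when the trigger phrase from the "Trigger words" section is present in the prompt. A tiny helper (hypothetical, not part of this card) can guard against forgetting it:

```python
# Hypothetical convenience helper: prepend the trigger phrase quoted in the
# "Trigger words" section to any extra scene description.
TRIGGER = ("labii face, Crayon Shin-chan style, disgusted expression, "
           "frown, plain white background")

def build_prompt(extra: str = "") -> str:
    """Return the trigger phrase, optionally followed by an extra description."""
    return f"{TRIGGER}, {extra}" if extra else TRIGGER

print(build_prompt("sitting on a park bench"))
```

The returned string can be passed directly as the prompt in the pipeline call above.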
morturr/Llama-2-7b-hf-LOO_one_liners-COMB_dadjokes-comb1-seed7-2025-06-18
morturr
2025-06-18T05:24:05Z
0
0
peft
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:meta-llama/Llama-2-7b-hf", "base_model:adapter:meta-llama/Llama-2-7b-hf", "license:llama2", "region:us" ]
null
2025-06-18T05:23:48Z
--- library_name: peft license: llama2 base_model: meta-llama/Llama-2-7b-hf tags: - trl - sft - generated_from_trainer model-index: - name: Llama-2-7b-hf-LOO_one_liners-COMB_dadjokes-comb1-seed7-2025-06-18 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Llama-2-7b-hf-LOO_one_liners-COMB_dadjokes-comb1-seed7-2025-06-18 This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 16 - seed: 7 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - PEFT 0.13.2 - Transformers 4.46.1 - Pytorch 2.5.1+cu124 - Datasets 3.0.2 - Tokenizers 0.20.1
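The total train batch size reported above is derived, not independent: it is the per-device batch size times the gradient accumulation steps (times the device count, assumed here to be 1). A quick sanity check of the arithmetic:

```python
# Effective batch size arithmetic for the hyperparameters listed above.
train_batch_size = 16          # per-device batch size
gradient_accumulation_steps = 4
num_devices = 1                # assumption: a single GPU was used

total_train_batch_size = (train_batch_size
                          * gradient_accumulation_steps
                          * num_devices)
print(total_train_batch_size)  # 64, matching the reported value
```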
ILLUME-MLLM/illume_plus-qwen2_5-7b-hf
ILLUME-MLLM
2025-06-18T05:17:44Z
46
0
transformers
[ "transformers", "pytorch", "illume", "feature-extraction", "any-to-any", "custom_code", "en", "arxiv:2504.01934", "base_model:ILLUME-MLLM/dualvitok", "base_model:finetune:ILLUME-MLLM/dualvitok", "license:apache-2.0", "region:us" ]
any-to-any
2025-05-30T06:22:14Z
--- license: apache-2.0 language: - en base_model: - Qwen/Qwen2.5-7B-Instruct - ILLUME-MLLM/dualvitok - ILLUME-MLLM/dualvitok-sdxl-decoder pipeline_tag: any-to-any library_name: transformers --- # ILLUME_plus-qwen2_5-7b-hf <div align="center"> <img src="https://illume-unified-mllm.github.io/static/images/logo.png" width="100em"></img> 🤗 [ILLUME-Models](https://huggingface.co/collections/ILLUME-MLLM/illume-models-683b3916f5af2d0a015b3477) | 📄 [Paper](https://arxiv.org/abs/2504.01934) | 🌐 [Project-Page](https://illume-unified-mllm.github.io/) | 💻 [Github](https://github.com/illume-unified-mllm/ILLUME_plus) | 💻 [DualViTok(Vision Tokenizer)](https://github.com/ILLUME-MLLM/dualvitok) 🤗 [ILLUME-Demo](https://huggingface.co/spaces/ILLUME-MLLM/ILLUME_plus-7b) <br/> </div> ## Model Summary We present **ILLUME+** that leverages dual visual tokenization and a diffusion decoder to improve both deep semantic understanding and high-fidelity image generation. ILLUME+ introduces a unified dual visual tokenizer, DualViTok, which preserves both fine-grained textures and text-aligned semantics while enabling a coarse-to-fine image representation strategy for multimodal understanding and generation. Additionally, we employ a diffusion model as the image detokenizer for enhanced generation quality and efficient super-resolution. ILLUME+ follows a continuous-input, discrete-output scheme within the unified Multimodal Large Language Model (MLLM) and adopts a progressive training procedure that supports dynamic resolution across the vision tokenizer, MLLM, and diffusion decoder. ILLUME+ (3B) exhibits competitive performance against existing unified MLLMs and specialized models across multimodal understanding, generation, and editing benchmarks. With its strong performance, ILLUME+ provides a scalable and versatile foundation for future multimodal applications. 
<div align="center"> <img src="https://raw.githubusercontent.com/illume-unified-mllm/ILLUME_plus/refs/heads/main/assets/images/framework.png" width=50%></img> </div> ## Usage This repo contains the **ILLUME_plus-Qwen2.5-7B** checkpoint organized in the **HuggingFace format**, and it can thus be loaded directly with the **transformers Auto APIs**. ### Image Understanding ```python from transformers import AutoModel, AutoProcessor from PIL import Image import torch ### Uncomment if you want to use Ascend NPUs # import torch_npu # from torch_npu.contrib import transfer_to_npu # prepare models and processors model = AutoModel.from_pretrained( "ILLUME-MLLM/illume_plus-qwen2_5-7b-hf", torch_dtype=torch.bfloat16, attn_implementation='flash_attention_2', # OR 'sdpa' for Ascend NPUs low_cpu_mem_usage=True, trust_remote_code=True).eval().cuda() processor = AutoProcessor.from_pretrained("ILLUME-MLLM/illume_plus-qwen2_5-7b-hf", trust_remote_code=True) inputs = dict( text=[ {"role": "system", "content": [{"type": "text", "text": "You are a helpful assistant."}]}, {"role": "user", "content": [{"type": "image"}, {"type": "text", "text": "What's shown in this image?"}]}, ], images=[Image.open('path/to/image1')] ) # run processors inputs = processor(**inputs, return_tensors="pt") inputs = inputs.to(model.device) gen_kwargs = dict( max_new_tokens=2048, do_sample=False ) # run generation with torch.no_grad(): outputs = model.generate(**inputs, **gen_kwargs) outputs = outputs[:, inputs['input_ids'].shape[1]:] print(processor.batch_decode(outputs, skip_special_tokens=True)) ``` ### Image Generation ```python from transformers import AutoModel, AutoProcessor from PIL import Image import torch ### Uncomment if you want to use Ascend NPUs # import torch_npu # from torch_npu.contrib import transfer_to_npu # prepare models and processors model = AutoModel.from_pretrained( "ILLUME-MLLM/illume_plus-qwen2_5-7b-hf", torch_dtype=torch.bfloat16, attn_implementation='flash_attention_2', # OR 'sdpa' for 
Ascend NPUs low_cpu_mem_usage=True, trust_remote_code=True).eval().cuda() processor = AutoProcessor.from_pretrained("ILLUME-MLLM/illume_plus-qwen2_5-7b-hf", trust_remote_code=True) # set the vision tokenizer for decoding images. dualvitok = AutoModel.from_pretrained('ILLUME-MLLM/dualvitok', trust_remote_code=True) processor.set_vision_tokenizer(dualvitok) # (Optional): set the sdxl diffusion decoder. It enables 2x upsampling of the image resolution. processor.load_diffusion_vision_detokenizer("ILLUME-MLLM/dualvitok-sdxl-decoder") target_image_resolution = (512, 512) ### Processing the prompt for image generation # "Generate an image of {resolution_tag}, the content of image is {content}\n" image_content='a cat' resolution_tag = processor.get_resolution_tag_from_resolution(target_image_resolution) prompt = processor.default_generation_template.format(resolution_tag=resolution_tag, content=image_content) # "Generate a random image of {resolution_tag}\n" uncond_prompt = processor.default_generation_unconditional_template.format(resolution_tag=resolution_tag) inputs = dict( text=[ {"role": "system", "content": [{"type": "text", "text": "You are a helpful assistant."}]}, {"role": "user", "content": [{"type": "text", "text": prompt}]} ] ) # If using classifier-free guidance, set the unconditional prompt. # It is only needed with guidance_scale > 1.0 uncond_inputs = dict( text=[ {"role": "system", "content": [{"type": "text", "text": "You are a helpful assistant."}]}, {"role": "user", "content": [{"type": "text", "text": uncond_prompt}]} ] ) ### End of processing the prompt for image generation # run processors inputs = processor(**inputs, return_tensors="pt") inputs = inputs.to(model.device) uncond_inputs = processor(**uncond_inputs, return_tensors="pt") uncond_inputs = uncond_inputs.to(model.device) # prepare generation arguments gen_kwargs = dict( max_new_tokens=2048, do_sample=True, ) image_gen_kwargs = dict( negative_image_prompt_ids=uncond_inputs.input_ids, 
negative_image_prompt_attention_mask=uncond_inputs.attention_mask, target_image_resolution=target_image_resolution, guidance_scale=2.0, image_semantic_temperature=1.0, image_semantic_top_k=2048, image_semantic_top_p=1.0, image_pixel_temperature=1.0, image_pixel_top_k=2048 * 3, image_pixel_top_p=1.0, ) # run generation with torch.no_grad(): outputs = model.generate(**inputs, **gen_kwargs, **image_gen_kwargs) outputs = outputs[:, inputs['input_ids'].shape[1]:] outputs_text = processor.batch_decode(outputs, skip_special_tokens=True) # Extract the image tokens of each image and replace them with `image_placeholder`, in order. generated_text, image_embed_inds_list, list_image_token_parts = processor.parse_text_image(outputs_text[0], image_placeholder='<image_out>') # Batch-decode the images using DualViTok. vq_decoded_images = processor.decode_images(image_embed_inds_list, target_resolution=target_image_resolution) # Batch-decode the images using the sdxl diffusion decoder. # The output image resolution would be [target_image_resolution[0] * 2, target_image_resolution[1] * 2] diffusion_decoded_images = processor.decode_images(image_embed_inds_list, target_resolution=target_image_resolution, use_diffusion=True, diffusion_cfg_scale=2.0, diffusion_num_inference_steps=20) vq_decoded_images[0].save('vq_decoded_cat.png') diffusion_decoded_images[0].save('diffusion_decoded_cat.png') ``` ### Image Editing 
```python from transformers import AutoModel, AutoProcessor from PIL import Image import torch ### Uncomment if you want to use Ascend NPUs # import torch_npu # from torch_npu.contrib import transfer_to_npu # prepare models and processors model = AutoModel.from_pretrained( "ILLUME-MLLM/illume_plus-qwen2_5-7b-hf", torch_dtype=torch.bfloat16, attn_implementation='flash_attention_2', # OR 'sdpa' for Ascend NPUs low_cpu_mem_usage=True, trust_remote_code=True).eval().cuda() processor = AutoProcessor.from_pretrained("ILLUME-MLLM/illume_plus-qwen2_5-7b-hf", trust_remote_code=True) # set the vision tokenizer for decoding images. dualvitok = AutoModel.from_pretrained('ILLUME-MLLM/dualvitok', trust_remote_code=True) processor.set_vision_tokenizer(dualvitok) # (Optional): set the sdxl diffusion decoder. It enables 2x upsampling of the image resolution. processor.load_diffusion_vision_detokenizer("ILLUME-MLLM/dualvitok-sdxl-decoder") ### Processing the prompt for image editing source_image_path='path/to/image.png' instruction='Your Editing Instruction' image = Image.open(source_image_path) original_size = image.size image = processor.transform_image_nearest_resolution_ratio(image) target_resolution = image.size[::-1] resolution_tag = processor.get_resolution_tag_from_resolution(target_resolution) # "{resolution_tag}<image>\nPlease edit the image according to the instruction: {content}\n" editing_prompt = processor.default_editing_template.format(resolution_tag=resolution_tag, content=instruction) inputs = dict( text=[ {"role": "system", "content": [{"type": "text", "text": "You are a helpful assistant."}]}, {"role": "user", "content": [{"type": "image"}, {"type": "text", "text": editing_prompt}]} ], images=[image] ) # CFG unconditional prompt for image editing. 
# "{resolution_tag}<image>\nReconstruct the image according to the given image\n" uncond_prompt = processor.default_editing_unconditional_template.format(resolution_tag=resolution_tag) uncond_inputs = dict( text=[ {"role": "system", "content": [{"type": "text", "text": "You are a helpful assistant."}]}, {"role": "user", "content": [{"type": "image"}, {"type": "text", "text": uncond_prompt}]} ], images=[image] ) ### End of processing the prompt for image editing # run processors inputs = processor(**inputs, return_tensors="pt") inputs = inputs.to(model.device) uncond_inputs = processor(**uncond_inputs, return_tensors="pt") uncond_inputs = uncond_inputs.to(model.device) # prepare generation arguments image_gen_kwargs = dict( negative_image_prompt_ids=uncond_inputs.input_ids, negative_image_prompt_attention_mask=uncond_inputs.attention_mask, target_image_resolution=target_resolution, guidance_scale=1.5, image_semantic_temperature=0.8, image_semantic_top_k=512, image_semantic_top_p=0.8, image_pixel_temperature=0.8, image_pixel_top_k=512 * 3, image_pixel_top_p=0.8, ) gen_kwargs = dict( max_new_tokens=2048, do_sample=True ) # run generation with torch.no_grad(): outputs = model.generate(**inputs, **gen_kwargs, **image_gen_kwargs) outputs = outputs[:, inputs['input_ids'].shape[1]:] outputs_text = processor.batch_decode(outputs, skip_special_tokens=True) # Extract the image tokens of each image and replace them with `image_placeholder`, in order. generated_text, image_embed_inds_list, list_image_token_parts = processor.parse_text_image(outputs_text[0], image_placeholder='<image_out>') # Batch-decode the images using DualViTok. vq_decoded_images = processor.decode_images(image_embed_inds_list, target_resolution=target_resolution) # Batch-decode the images using the sdxl diffusion decoder. 
# The output image resolution would be [target_resolution[0] * 2, target_resolution[1] * 2] diffusion_decoded_images = processor.decode_images(image_embed_inds_list, target_resolution=target_resolution, use_diffusion=True, diffusion_cfg_scale=2.0, diffusion_num_inference_steps=20) vq_decoded_images_unpadded = processor.unpad_and_resize_back(vq_decoded_images[0], *original_size) diffusion_decoded_images_unpadded = processor.unpad_and_resize_back(diffusion_decoded_images[0], *original_size) vq_decoded_images_unpadded.save('vq_decoded_edited_image.png') diffusion_decoded_images_unpadded.save('diffusion_decoded_edited_image.png') ``` Note that we implement `InterleavedLogitsProcessor` during inference for three key reasons: 1. To activate the image generation mode with classifier-free guidance when encountering the `<start_of_image>` token. 2. To handle the varying number of image tokens across different resolutions and ensure proper alignment between semantic-level tokens and pixel-level tokens in each line. 3. To prevent sampling of incorrect modality tokens when `do_sample=True` is enabled during text or image generation. ## Citation ```bibtex @article{huang2025illume+, title={Illume+: Illuminating unified mllm with dual visual tokenization and diffusion refinement}, author={Huang, Runhui and Wang, Chunwei and Yang, Junwei and Lu, Guansong and Yuan, Yunlong and Han, Jianhua and Hou, Lu and Zhang, Wei and Hong, Lanqing and Zhao, Hengshuang and others}, journal={arXiv preprint arXiv:2504.01934}, year={2025} } ```
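The `guidance_scale` / `negative_image_prompt_ids` arguments in the snippets above implement classifier-free guidance: the model scores both the conditional and the unconditional prompt, and the two logit vectors are blended before sampling. A toy, framework-free sketch of the blending rule (an illustration only, not ILLUME's actual implementation):

```python
# Classifier-free guidance (toy form): guided = uncond + scale * (cond - uncond).
# scale == 1.0 reduces to the plain conditional logits; scale > 1.0 pushes
# the distribution further away from the unconditional prediction.
def cfg_combine(cond_logits, uncond_logits, scale):
    return [u + scale * (c - u) for c, u in zip(cond_logits, uncond_logits)]

cond = [2.0, 0.5, -1.0]
uncond = [1.0, 0.5, 0.0]
print(cfg_combine(cond, uncond, 2.0))  # [3.0, 0.5, -2.0]
```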
fnlp/qwen3-0_6B-uniform_r_16-d_kv_32-refactor
fnlp
2025-06-18T05:13:52Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-18T05:12:02Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
liushiliushi/qwen-uncertainty
liushiliushi
2025-06-18T05:01:39Z
0
0
peft
[ "peft", "safetensors", "qwen2", "arxiv:1910.09700", "base_model:Qwen/Qwen2.5-7B-Instruct", "base_model:adapter:Qwen/Qwen2.5-7B-Instruct", "region:us" ]
null
2025-06-18T04:48:15Z
--- base_model: Qwen/Qwen2.5-7B-Instruct library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. 
[More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). 
- **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.12.0
freakyfractal/bstiw
freakyfractal
2025-06-18T04:45:38Z
0
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:apache-2.0", "region:us" ]
text-to-image
2025-06-18T04:45:19Z
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora widget: - text: '-' output: url: images/Coinye_2021.jpg base_model: black-forest-labs/FLUX.1-dev instance_prompt: null license: apache-2.0 --- # bstiw <Gallery /> ## Download model Weights for this model are available in Safetensors format. [Download](/freakyfractal/bstiw/tree/main) them in the Files & versions tab.
hzerrweckk0101/gender-text-predictor
hzerrweckk0101
2025-06-18T04:43:28Z
6
0
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-06-16T04:27:22Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]