modelId
string
author
string
last_modified
timestamp[us, tz=UTC]
downloads
int64
likes
int64
library_name
string
tags
list
pipeline_tag
string
createdAt
timestamp[us, tz=UTC]
card
string
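The ten fields above describe one record per model. As an illustrative sketch (the `ModelRecord` dataclass and its name are our own, not an official client type), the schema maps onto Python's standard library like so, populated with the first record in the listing below:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ModelRecord:
    """One row of the listing; field names and types mirror the schema above.

    Illustrative sketch only, not an official client type.
    """
    modelId: str
    author: str
    last_modified: datetime        # timestamp[us, tz=UTC]
    downloads: int                 # int64
    likes: int                     # int64
    library_name: Optional[str]    # null for repos without a declared library
    tags: list[str]
    pipeline_tag: Optional[str]    # null for repos without a pipeline tag
    createdAt: datetime            # timestamp[us, tz=UTC]
    card: Optional[str]            # flattened README/model-card text

# First record of the listing:
first = ModelRecord(
    modelId="micaelbaptista4023/MB",
    author="micaelbaptista4023",
    last_modified=datetime(2025, 6, 15, 10, 36, 10, tzinfo=timezone.utc),
    downloads=0,
    likes=0,
    library_name=None,
    tags=["license:creativeml-openrail-m", "region:us"],
    pipeline_tag=None,
    createdAt=datetime(2025, 6, 15, 10, 36, 10, tzinfo=timezone.utc),
    card="--- license: creativeml-openrail-m ---",
)
```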
micaelbaptista4023/MB
micaelbaptista4023
2025-06-15T10:36:10Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2025-06-15T10:36:10Z
--- license: creativeml-openrail-m ---
MinaMila/gemma_2b_unlearned_2nd_5e-7_1.0_0.25_0.5_0.25_epoch2
MinaMila
2025-06-15T10:31:40Z
0
0
transformers
[ "transformers", "safetensors", "gemma2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-15T10:29:53Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Gaben69181/wav2vec2-large-xlsr-id-colab-fine-tuning
Gaben69181
2025-06-15T10:25:10Z
213
0
transformers
[ "transformers", "tensorboard", "safetensors", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice_8_0", "base_model:Wikidepia/wav2vec2-xls-r-300m-indonesian", "base_model:finetune:Wikidepia/wav2vec2-xls-r-300m-indonesian", "license:apache-2.0", "model-inde...
automatic-speech-recognition
2025-05-18T15:49:18Z
--- library_name: transformers license: apache-2.0 base_model: Wikidepia/wav2vec2-xls-r-300m-indonesian tags: - generated_from_trainer datasets: - common_voice_8_0 metrics: - wer model-index: - name: wav2vec2-large-xlsr-id-colab-fine-tuning results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: common_voice_8_0 type: common_voice_8_0 config: id split: test args: id metrics: - name: Wer type: wer value: 0.11512836043423566 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xlsr-id-colab-fine-tuning This model is a fine-tuned version of [Wikidepia/wav2vec2-xls-r-300m-indonesian](https://huggingface.co/Wikidepia/wav2vec2-xls-r-300m-indonesian) on the common_voice_8_0 dataset. It achieves the following results on the evaluation set: - Loss: 0.1236 - Wer: 0.1151 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:------:|:----:|:---------------:|:------:| | 0.4639 | 1.5515 | 400 | 0.1413 | 0.1485 | | 0.103 | 3.1010 | 800 | 0.1352 | 0.1317 | | 0.0627 | 4.6524 | 1200 | 0.1236 | 0.1151 | ### Framework versions - Transformers 4.52.4 - Pytorch 2.6.0+cu124 - Datasets 2.14.5 - Tokenizers 0.21.1
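The `Wer` figures in the card above are word error rates: the word-level Levenshtein edit distance between the hypothesis transcript and the reference, divided by the number of reference words. A minimal self-contained sketch of the metric (real evaluations typically use a library such as `jiwer` or `evaluate` rather than this):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate: word-level edit distance / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i                       # i deletions
    for j in range(len(hyp) + 1):
        dp[0][j] = j                       # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j - 1] + cost,  # match/substitution
                           dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1)         # insertion
    return dp[-1][-1] / len(ref)

# e.g. one substituted word out of two reference words:
# wer("halo dunia", "helo dunia") == 0.5
```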
Sms-Rani-Viral-Video-Original-TV-Scandal/Sms.Rani.Viral.Video.Original.Full.HD.X.Link
Sms-Rani-Viral-Video-Original-TV-Scandal
2025-06-15T10:19:36Z
0
0
null
[ "region:us" ]
null
2025-06-15T10:15:56Z
adelweys/sensor_kata_kasar_indoBERT
adelweys
2025-06-15T10:10:40Z
0
0
null
[ "base_model:indobenchmark/indobert-base-p1", "base_model:finetune:indobenchmark/indobert-base-p1", "region:us" ]
null
2025-06-15T09:00:47Z
--- base_model: - indobenchmark/indobert-base-p1 ---
MinaMila/gemma_2b_unlearned_2nd_5e-7_1.0_0.25_0.75_0.05_epoch2
MinaMila
2025-06-15T09:43:09Z
0
0
transformers
[ "transformers", "safetensors", "gemma2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-15T09:41:27Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
yajunvicky/DeepSeek-R1-FlagOS-Metax
yajunvicky
2025-06-15T09:21:55Z
0
0
null
[ "region:us" ]
null
2025-06-15T09:21:53Z
# Introduction DeepSeek-R1-FlagOS-Metax provides an all-in-one deployment solution for running DeepSeek-R1 on metax GPUs. As the first-generation release for the metax-C550, this package delivers two key features: 1. Comprehensive Integration: - Integrated with FlagScale (https://github.com/FlagOpen/FlagScale). - Open-source inference execution code, preconfigured with all necessary software and hardware settings. - Pre-built Docker image for rapid deployment on the metax-C550. 2. Consistency Validation: - Evaluation tests verifying that results are consistent between the official release and ours. # Technical Summary ## Serving Engine We use FlagScale as the serving engine to improve the portability of distributed inference. FlagScale is an end-to-end framework for large models across multiple chips, maximizing computational-resource efficiency while ensuring model effectiveness. It offers both ease of use and high performance when deploying models across different chip architectures: - One-Click Service Deployment: FlagScale provides a unified, simple command-execution mechanism, allowing users to deploy services quickly and seamlessly across various hardware platforms using the same command. This significantly lowers the entry barrier and improves the user experience. - Automated Deployment Optimization: FlagScale automatically optimizes distributed parallel strategies based on the computational capabilities of different AI chips, ensuring optimal resource allocation and efficient utilization, thereby improving overall deployment performance. - Automatic Operator Library Switching: Leveraging FlagScale's unified Runner mechanism and its deep integration with FlagGems, users can seamlessly switch to the FlagGems operator library for inference simply by adding environment variables to the configuration file. ## Triton Support We validate execution of the DeepSeek-R1 model with a Triton-based operator library as a PyTorch alternative. 
We use a variety of Triton-implemented operation kernels to run the DeepSeek-R1 model. These kernels come from two main sources: - Most Triton kernels are provided by FlagGems (https://github.com/FlagOpen/FlagGems). You can enable FlagGems kernels by setting the environment variable USE_FLAGGEMS. - The remainder are Triton kernels from vLLM, such as the fused MoE kernel. # Container Image Download | Image | Usage | metax | | ----------- | ------------------------------------------------------------ | ------------------- | | Basic Image | Basic software environment that supports FlagOS model running | <IMAGE_OF_VENDOR> | # Evaluation Results ## Benchmark Result | Metrics | DeepSeek-R1-H100-CUDA | DeepSeek-R1-FlagOS-metax | |-------------------|--------------------------|-----------------------------| | cmmmu | 49.110 | 42.890 | | mmmu | 57.440 | 47.560 | | mmmu_pro_standard | 38.400 | 30.210 | | mmmu_pro_vision | 41.620 | 36.020 | | mm_vet_v2 | 71.122 | 49.434 | | mathvision | 33.630 | 18.710 | | cii_bench | 55.160 | 40.170 | | blink | 57.550 | 51.630 | # How to Run Locally ## 📌 Getting Started ### Download open-source weights ```bash pip install modelscope modelscope download --model <Model Name> --local_dir <Cache Path> ``` ### Download the FlagOS image ```bash docker pull <IMAGE> ``` ### Start the inference service ```bash docker run --rm --init --detach \ --net=host --uts=host --ipc=host \ --security-opt=seccomp=unconfined \ --privileged=true \ --ulimit stack=67108864 \ --ulimit memlock=-1 \ --ulimit nofile=1048576:1048576 \ --shm-size=32G \ -v /share:/share \ --gpus all \ --name flagos \ <IMAGE> \ sleep infinity docker exec -it flagos bash ``` ### Serve ```bash flagscale serve <Model> ``` # Contributing We warmly welcome global developers to join us: 1. Submit Issues to report problems 2. Create Pull Requests to contribute code 3. Improve technical documentation 4. 
Expand hardware adaptation support # 📞 Contact Us Scan the QR code below to join our WeChat group, then send "FlagRelease" ![WeChat](image/group.png) # License This project and the related model weights are licensed under the MIT License.
gradientrouting-spar/mc13_badmed_kl_div_beta_kl-3_epochs-10_seed_1_epoch_6
gradientrouting-spar
2025-06-15T09:19:42Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-15T09:19:30Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Huzaifah0/Avery_0.1_6_16
Huzaifah0
2025-06-15T09:12:34Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "unsloth", "trl", "sft", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-15T09:07:07Z
--- library_name: transformers tags: - unsloth - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
MinaMila/gemma_2b_unlearned_2nd_5e-7_1.0_0.25_0.75_0.25_epoch1
MinaMila
2025-06-15T09:03:40Z
0
0
transformers
[ "transformers", "safetensors", "gemma2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-15T09:01:57Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
RocktimMBZ/LLaMA-3.1-8b-rubbish_post_kto
RocktimMBZ
2025-06-15T09:02:58Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-15T08:53:58Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
hoa319286/NHH
hoa319286
2025-06-15T08:36:32Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2025-06-15T08:36:32Z
--- license: creativeml-openrail-m ---
yuritimalia/yuritimalia
yuritimalia
2025-06-15T08:33:57Z
0
0
null
[ "license:other", "region:us" ]
null
2025-06-15T08:02:36Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md ---
phospho-app/tdayanov-ACT-lerobot_rrwra_data-ork7q
phospho-app
2025-06-15T08:30:29Z
0
0
null
[ "safetensors", "phosphobot", "act", "region:us" ]
null
2025-06-15T05:30:18Z
--- tags: - phosphobot - act task_categories: - robotics --- # act Model - phospho Training Pipeline ## This model was trained using **phospho**. Training was successful; try it out on your robot! ## Training parameters: - **Dataset**: [tdayanov/lerobot_rrwra_data](https://huggingface.co/datasets/tdayanov/lerobot_rrwra_data) - **Wandb run URL**: None - **Epochs**: None - **Batch size**: 40 - **Training steps**: 8000 📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme) 🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
QuantFactory/II-Medical-8B-GGUF
QuantFactory
2025-06-15T08:28:05Z
0
2
transformers
[ "transformers", "gguf", "arxiv:2503.19633", "arxiv:2503.10460", "arxiv:2501.19393", "endpoints_compatible", "region:us", "conversational" ]
null
2025-06-15T07:44:35Z
--- library_name: transformers tags: [] --- [![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory) # QuantFactory/II-Medical-8B-GGUF This is a quantized version of [Intelligent-Internet/II-Medical-8B](https://huggingface.co/Intelligent-Internet/II-Medical-8B) created using llama.cpp. # Original Model Card # II-Medical-8B <div style="display: flex; justify-content: center;"> <img src="https://cdn-uploads.huggingface.co/production/uploads/6389496ff7d3b0df092095ed/73Y-oDmehp0eJ2HWrfn3V.jpeg" width="800"> </div> ## I. Model Overview II-Medical-8B is the newest advanced large language model developed by Intelligent Internet, specifically engineered to enhance AI-driven medical reasoning. Following the positive reception of our previous [II-Medical-7B-Preview](https://huggingface.co/Intelligent-Internet/II-Medical-7B-Preview), this new iteration significantly advances the capabilities of medical question answering. ## II. Training Methodology We collected and generated a comprehensive set of reasoning datasets for the medical domain and performed SFT fine-tuning on the **Qwen/Qwen3-8B** model. Following this, we further optimized the SFT model by training with DAPO on a hard-reasoning dataset to boost performance. For the SFT stage we used the following hyperparameters: - Max length: 16378. - Batch size: 128. - Learning rate: 5e-5. - Number of epochs: 8. For the RL stage we set up training with: - Max prompt length: 2048 tokens. - Max response length: 12288 tokens. - Overlong buffer: Enabled, 4096 tokens, penalty factor 1.0. - Clip ratios: Low 0.2, High 0.28. - Batch sizes: Train prompt 512, Generation prompt 1536, Mini-batch 32. - Responses per prompt: 16. - Temperature: 1.0, Top-p: 1.0, Top-k: -1 (vLLM rollout). - Learning rate: 1e-6, Warmup steps: 10, Weight decay: 0.1. 
- Loss aggregation: Token-mean. - Gradient clipping: 1.0. - Entropy coefficient: 0. ## III. Evaluation Results Our II-Medical-8B model achieved a 40% score on [HealthBench](https://openai.com/index/healthbench/), a comprehensive open-source benchmark evaluating the performance and safety of large language models in healthcare. This performance is comparable to OpenAI's o1 reasoning model and GPT-4.5, OpenAI's largest and most advanced model to date. We provide a comparison to models available in ChatGPT below. ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/61f2636488b9b5abbe184a8e/5r2O4MtzffVYfuUZJe5FO.jpeg) Detailed results for HealthBench can be found [here](https://huggingface.co/datasets/Intelligent-Internet/OpenAI-HealthBench-II-Medical-8B-GPT-4.1). ![Model Benchmark](https://cdn-uploads.huggingface.co/production/uploads/6389496ff7d3b0df092095ed/uvporIhY4_WN5cGaGF1Cm.png) We evaluate on ten medical QA benchmarks: MedMCQA, MedQA, PubMedQA, medical-related questions from MMLU-Pro and GPQA, small QA sets from The Lancet and the New England Journal of Medicine, the 4-option and 5-option splits from the MedBullets platform, and MedXpertQA. 
| Model | MedMC | MedQA | PubMed | MMLU-P | GPQA | Lancet | MedB-4 | MedB-5 | MedX | NEJM | Avg | |--------------------------|-------|-------|--------|--------|------|--------|--------|--------|------|-------|-------| | [HuatuoGPT-o1-72B](https://huggingface.co/FreedomIntelligence/HuatuoGPT-o1-72B) | 76.76 | 88.85 | 79.90 | 80.46 | 64.36| 70.87 | 77.27 | 73.05 |23.53 |76.29 | 71.13 | | [QWQ 32B](https://huggingface.co/Qwen/QwQ-32B) | 69.73 | 87.03 | 88.5 | 79.86 | 69.17| 71.3 | 72.07 | 69.01 |24.98 |75.12 | 70.68 | | [Qwen2.5-7B-IT](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) | 56.56 | 61.51 | 71.3 | 61.17 | 42.56| 61.17 | 46.75 | 40.58 |13.26 |59.04 | 51.39 | | [HuatuoGPT-o1-8B](https://huggingface.co/FreedomIntelligence/HuatuoGPT-o1-8B) | 63.97 | 74.78 | **80.10** | 63.71 | 55.38| 64.32 | 58.44 | 51.95 |15.79 |64.84 | 59.32 | | [MedReason](https://huggingface.co/UCSC-VLAA/MedReason-8B) | 61.67 | 71.87 | 77.4 | 64.1 | 50.51| 59.7 | 60.06 | 54.22 |22.87 |66.8 | 59.92 | | [M1](https://huggingface.co/UCSC-VLAA/m1-7B-23K) | 62.54 | 75.81 | 75.80 | 65.86 | 53.08| 62.62 | 63.64 | 59.74 |19.59 |64.34 | 60.3 | | [II-Medical-8B-SFT](https://huggingface.co/II-Vietnam/II-Medical-8B-SFT) | **71.92** | 86.57 | 77.4 | 77.26 | 65.64| 69.17 | 76.30 | 67.53 |23.79 |**73.80** | 68.80 | | [II-Medical-8B](https://huggingface.co/Intelligent-Internet/II-Medical-8B) | 71.57 | **87.82** | 78.2 | **80.46** | **67.18**| **70.38** | **78.25** | **72.07** |**25.26** |73.13 | **70.49** | ## IV. Dataset Curation The training dataset comprises 555,000 samples from the following sources: ### 1. 
Public Medical Reasoning Datasets (103,031 samples) - [General Medical Reasoning](https://huggingface.co/datasets/GeneralReasoning/GeneralThought-430K): 40,544 samples - [Medical-R1-Distill-Data](https://huggingface.co/datasets/FreedomIntelligence/Medical-R1-Distill-Data): 22,000 samples - [Medical-R1-Distill-Data-Chinese](https://huggingface.co/datasets/FreedomIntelligence/Medical-R1-Distill-Data-Chinese): 17,000 samples - [UCSC-VLAA/m23k-tokenized](https://huggingface.co/datasets/UCSC-VLAA/m23k-tokenized): 23,487 samples ### 2. Synthetic Medical QA Data with QwQ (225,700 samples) Generated from established medical datasets: - [MedMCQA](https://huggingface.co/datasets/openlifescienceai/medmcqa) (from openlifescienceai/medmcqa): 183,000 samples - [MedQA](https://huggingface.co/datasets/bigbio/med_qa): 10,000 samples - [MedReason](https://huggingface.co/datasets/UCSC-VLAA/MedReason): 32,700 samples ### 3. Curated Medical R1 Traces (338,055 samples) First, we gathered all public R1 traces from: - [PrimeIntellect/SYNTHETIC-1](https://huggingface.co/collections/PrimeIntellect/synthetic-1-67a2c399cfdd6c9f7fae0c37) - [GeneralReasoning/GeneralThought-430K](https://huggingface.co/datasets/GeneralReasoning/GeneralThought-430K) - [a-m-team/AM-DeepSeek-R1-Distilled-1.4M](https://arxiv.org/abs/2503.19633v1) - [open-thoughts/OpenThoughts2-1M](https://huggingface.co/datasets/open-thoughts/OpenThoughts2-1M) - [nvidia/Llama-Nemotron-Post-Training-Dataset](https://huggingface.co/datasets/nvidia/Llama-Nemotron-Post-Training-Dataset): Science subset only - Other resources: [cognitivecomputations/dolphin-r1](https://huggingface.co/datasets/cognitivecomputations/dolphin-r1), [ServiceNow-AI/R1-Distill-SFT](https://huggingface.co/datasets/ServiceNow-AI/R1-Distill-SFT), ... All R1 reasoning traces were processed through a domain-specific pipeline as follows: 1. Embedding Generation: Prompts are embedded using sentence-transformers/all-MiniLM-L6-v2. 2. 
Clustering: Perform K-means clustering with 50,000 clusters. 3. Domain Classification: - For each cluster, select the 10 prompts nearest to the cluster center. - Classify the domain of each selected prompt using Qwen2.5-32b-Instruct. - Assign the cluster's domain based on majority voting among the classified prompts. 4. Domain Filtering: Keep only clusters labeled as Medical or Biology for the final dataset. ### 4. Supplementary Math Dataset - Added 15,000 samples of reasoning traces from [light-r1](https://arxiv.org/abs/2503.10460) - Purpose: Enhance the general reasoning capabilities of the model ### Preprocessing Data 1. Filtering for Complete Generation - Retained only traces with complete generation outputs 2. Length-based Filtering - Minimum threshold: Keep only prompts with more than 3 words. - Wait Token Filter: Removed traces with more than 47 occurrences of "Wait" (97th percentile threshold). ### Data Decontamination We use two-step decontamination: 1. Following the [open-r1](https://github.com/huggingface/open-r1) project, we decontaminate the dataset against the evaluation datasets using 10-grams. 2. After that, we apply the fuzzy decontamination from the [`s1k`](https://arxiv.org/abs/2501.19393) method with a 90% threshold. **Our pipeline is carefully decontaminated against the evaluation datasets.** ## V. How To Use Our model can be used in the same manner as Qwen or Deepseek-R1-Distill models. For instance, you can easily start a service using [vLLM](https://github.com/vllm-project/vllm): ```bash vllm serve Intelligent-Internet/II-Medical-8B ``` You can also easily start a service using [SGLang](https://github.com/sgl-project/sglang): ```bash python -m sglang.launch_server --model Intelligent-Internet/II-Medical-8B ``` ## VI. 
Usage Guidelines - Recommended sampling parameters: temperature = 0.6, top_p = 0.9 - When prompting, explicitly request step-by-step reasoning and format the final answer within \boxed{} (e.g., "Please reason step-by-step, and put your final answer within \boxed{}."). ## VII. Limitations and Considerations - The dataset may contain inherent biases from source materials - Medical knowledge requires regular updates - Please note that **it is not suitable for medical use.** ## VIII. Citation ```bib @misc{2025II-Medical-8B, title={II-Medical-8B: Medical Reasoning Model}, author={Intelligent Internet}, year={2025} } ```
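The first decontamination step described above (dropping training samples that share any 10-gram with an evaluation set, in the style of open-r1) can be sketched as follows. This is a minimal illustration of the idea under assumed details, not the project's actual pipeline: the whitespace tokenization, lowercasing, and the `ngrams`/`decontaminate` helper names are all assumptions made for the sketch.

```python
from typing import Iterable, List, Set, Tuple

def ngrams(text: str, n: int = 10) -> Set[Tuple[str, ...]]:
    # Lowercased, whitespace-tokenized word n-grams (assumed tokenization).
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def decontaminate(train: Iterable[str], eval_samples: Iterable[str], n: int = 10) -> List[str]:
    # Drop any training sample that shares at least one n-gram with an eval sample.
    eval_grams: Set[Tuple[str, ...]] = set()
    for sample in eval_samples:
        eval_grams |= ngrams(sample, n)
    return [s for s in train if not (ngrams(s, n) & eval_grams)]
```

A sample that embeds a benchmark question verbatim is removed, while short or unrelated samples (which share no 10-gram with the evaluation data) are kept; the second, fuzzy step would then catch near-verbatim overlaps that exact n-gram matching misses.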
johngreendr1/472d65a6-4621-4888-afdb-c6237d42be04
johngreendr1
2025-06-15T08:07:41Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:unsloth/llama-3-8b", "base_model:adapter:unsloth/llama-3-8b", "region:us" ]
null
2025-06-15T08:00:45Z
--- base_model: unsloth/llama-3-8b library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. 
[More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). 
- **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.15.1
MinaMila/gemma_2b_unlearned_2nd_5e-7_1.0_0.5_0.05_0.15_epoch1
MinaMila
2025-06-15T08:00:21Z
0
0
transformers
[ "transformers", "safetensors", "gemma2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-15T07:58:38Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mingye94/Cross-Care-Qwen3-1.7B-FineTuned-KL-Distill
mingye94
2025-06-15T07:57:13Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-15T07:56:14Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
sqlinn/DiscoSG-Refiner-Large-t5-only
sqlinn
2025-06-15T07:53:57Z
10
0
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2025-06-12T09:29:46Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
fernandoruiz/Qwen3-30B-A3B-abliterated-Q4_0-GGUF
fernandoruiz
2025-06-15T07:53:40Z
8
0
transformers
[ "transformers", "gguf", "chat", "abliterated", "uncensored", "llama-cpp", "gguf-my-repo", "text-generation", "base_model:huihui-ai/Qwen3-30B-A3B-abliterated", "base_model:quantized:huihui-ai/Qwen3-30B-A3B-abliterated", "license:apache-2.0", "endpoints_compatible", "region:us", "conversatio...
text-generation
2025-06-15T07:52:18Z
--- library_name: transformers license: apache-2.0 license_link: https://huggingface.co/Qwen/Qwen3-30B-A3B/blob/main/LICENSE pipeline_tag: text-generation base_model: huihui-ai/Qwen3-30B-A3B-abliterated tags: - chat - abliterated - uncensored - llama-cpp - gguf-my-repo extra_gated_prompt: '**Usage Warnings** “**Risk of Sensitive or Controversial Outputs**”: This model’s safety filtering has been significantly reduced, potentially generating sensitive, controversial, or inappropriate content. Users should exercise caution and rigorously review generated outputs. “**Not Suitable for All Audiences**”: Due to limited content filtering, the model’s outputs may be inappropriate for public settings, underage users, or applications requiring high security. “**Legal and Ethical Responsibilities**”: Users must ensure their usage complies with local laws and ethical standards. Generated content may carry legal or ethical risks, and users are solely responsible for any consequences. “**Research and Experimental Use**”: It is recommended to use this model for research, testing, or controlled environments, avoiding direct use in production or public-facing commercial applications. “**Monitoring and Review Recommendations**”: Users are strongly advised to monitor model outputs in real-time and conduct manual reviews when necessary to prevent the dissemination of inappropriate content. “**No Default Safety Guarantees**”: Unlike standard models, this model has not undergone rigorous safety optimization. huihui.ai bears no responsibility for any consequences arising from its use.' --- # fernandoruiz/Qwen3-30B-A3B-abliterated-Q4_0-GGUF This model was converted to GGUF format from [`huihui-ai/Qwen3-30B-A3B-abliterated`](https://huggingface.co/huihui-ai/Qwen3-30B-A3B-abliterated) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. 
Refer to the [original model card](https://huggingface.co/huihui-ai/Qwen3-30B-A3B-abliterated) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux). ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo fernandoruiz/Qwen3-30B-A3B-abliterated-Q4_0-GGUF --hf-file qwen3-30b-a3b-abliterated-q4_0.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo fernandoruiz/Qwen3-30B-A3B-abliterated-Q4_0-GGUF --hf-file qwen3-30b-a3b-abliterated-q4_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo fernandoruiz/Qwen3-30B-A3B-abliterated-Q4_0-GGUF --hf-file qwen3-30b-a3b-abliterated-q4_0.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo fernandoruiz/Qwen3-30B-A3B-abliterated-Q4_0-GGUF --hf-file qwen3-30b-a3b-abliterated-q4_0.gguf -c 2048 ```
MinaMila/gemma_2b_unlearned_2nd_5e-7_1.0_0.5_0.05_0.25_epoch1
MinaMila
2025-06-15T07:44:30Z
0
0
transformers
[ "transformers", "safetensors", "gemma2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-15T07:42:46Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
GoofyLM/N1-Quant
GoofyLM
2025-06-15T07:33:34Z
47
0
null
[ "gguf", "text-generation", "en", "base_model:GoofyLM/N1", "base_model:quantized:GoofyLM/N1", "license:mit", "endpoints_compatible", "region:us" ]
text-generation
2025-06-05T16:16:39Z
--- license: mit language: - en pipeline_tag: text-generation tags: - gguf base_model: - GoofyLM/N1 --- ![banner by CroissantWhyNot](banner.png) *Banner by [Croissant](https://huggingface.co/CroissantWhyNot)* # N1 - A Chain-of-Thought Language Model N1 is a small, experimental Chain-of-Thought (CoT) model based on the LLaMA architecture, developed by GoofyLM. ## Model Details - **Architecture**: LLaMA-based - **Parameter Count**: 135M - **Training Data**: Closed-source dataset - **Special Features**: Chain-of-Thought reasoning capabilities - **Note**: The model often shows "schizophrenia" - **Note**: You may need to add this Jinja chat template to the model: ```jinja {% for message in messages %}{% if loop.first and messages[0]['role'] != 'system' %}{{ '<|im_start|>system You are a helpful AI assistant named N1, trained by GoofyLM<|im_end|> ' }}{% endif %}{{'<|im_start|>' + message['role'] + ' ' + message['content'] + '<|im_end|>' + ' '}}{% endfor %}{% if add_generation_prompt %}{{ '<|im_start|>assistant ' }}{% endif %} ``` ## Intended Use This model is designed for text generation tasks with a focus on reasoning through problems step-by-step (using its Chain-of-Thought). ## Limitations - Small parameter size may limit reasoning capabilities - May produce unstable or inconsistent outputs - Not suitable for production use without further testing --- ## Usage The model can be loaded using the following: ### llama-cpp-python: ```python from llama_cpp import Llama llm = Llama.from_pretrained( repo_id="GoofyLM/N1", filename="N1_Q8_0.gguf", ) ``` ### Ollama: ```bash ollama run hf.co/GoofyLM/N1:Q4_K_M ```
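For reference, the ChatML-style Jinja template in the card above wraps each message as an `<|im_start|>{role}` header, the message content, and an `<|im_end|>` marker, and injects a default system prompt when the conversation does not start with one. Below is the same logic in plain Python, as an illustrative sketch only: the helper name `render_chatml` and the `messages` list-of-dicts format are assumptions for illustration, not part of the original card.

```python
# Plain-Python sketch of the ChatML-style Jinja template above.
# Assumes messages = [{"role": ..., "content": ...}]; a default system
# prompt is injected when the first message is not a system message.
def render_chatml(messages, add_generation_prompt=True):
    out = ""
    if messages and messages[0]["role"] != "system":
        out += ("<|im_start|>system\n"
                "You are a helpful AI assistant named N1, trained by GoofyLM<|im_end|>\n")
    for m in messages:
        out += f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n"
    if add_generation_prompt:
        out += "<|im_start|>assistant\n"
    return out

print(render_chatml([{"role": "user", "content": "Hi!"}]))
```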
RocktimMBZ/LLaMA-3.1-8b-rubbish_pre_kto
RocktimMBZ
2025-06-15T07:28:57Z
4
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-15T07:20:10Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
happymaker10247/llama3-8b-news-analyzer-ko
happymaker10247
2025-06-15T07:17:41Z
0
2
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:NCSOFT/Llama-VARCO-8B-Instruct", "base_model:finetune:NCSOFT/Llama-VARCO-8B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-06-15T05:51:47Z
--- base_model: NCSOFT/Llama-VARCO-8B-Instruct library_name: transformers model_name: llama3-8b-news-analyzer-ko tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for llama3-8b-news-analyzer-ko This model is a fine-tuned version of [NCSOFT/Llama-VARCO-8B-Instruct](https://huggingface.co/NCSOFT/Llama-VARCO-8B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="happymaker10247/llama3-8b-news-analyzer-ko", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.18.1 - Transformers: 4.52.4 - Pytorch: 2.8.0.dev20250319+cu128 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
veggieg58/ve
veggieg58
2025-06-15T07:15:43Z
0
0
null
[ "license:bigscience-bloom-rail-1.0", "region:us" ]
null
2025-06-15T07:15:43Z
--- license: bigscience-bloom-rail-1.0 ---
sujalrajpoot/TrueSyncAI-3B
sujalrajpoot
2025-06-15T07:09:53Z
0
0
transformers
[ "transformers", "qwen2", "text-generation", "text-generation-inference", "unsloth", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-06-15T07:09:48Z
--- base_model: unsloth/qwen2.5-3b-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - qwen2 license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** sujalrajpoot - **License:** apache-2.0 - **Finetuned from model:** unsloth/qwen2.5-3b-unsloth-bnb-4bit This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
skii4/llama3-8b-news-analyzer-ko
skii4
2025-06-15T07:07:55Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:NCSOFT/Llama-VARCO-8B-Instruct", "base_model:finetune:NCSOFT/Llama-VARCO-8B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-06-15T06:03:26Z
--- base_model: NCSOFT/Llama-VARCO-8B-Instruct library_name: transformers model_name: llama3-8b-news-analyzer-ko tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for llama3-8b-news-analyzer-ko This model is a fine-tuned version of [NCSOFT/Llama-VARCO-8B-Instruct](https://huggingface.co/NCSOFT/Llama-VARCO-8B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="skii4/llama3-8b-news-analyzer-ko", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.18.1 - Transformers: 4.52.4 - Pytorch: 2.8.0.dev20250319+cu128 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
zerodhaclone/turoclone
zerodhaclone
2025-06-15T07:02:35Z
0
0
null
[ "region:us" ]
null
2025-06-15T07:02:14Z
# turo clone **[turo clone](http://omninos.com/turo-car-rental-clone-script/)** The car rental industry has undergone a significant transformation in recent years, driven by the rise of peer-to-peer (P2P) car-sharing platforms like Turo. Turo has redefined how people access vehicles, offering a flexible, cost-effective alternative to traditional car rental services. For entrepreneurs looking to capitalize on this booming market, a Turo clone (a pre-built, customizable software solution replicating Turo's core functionalities) presents a compelling opportunity to launch a profitable car-sharing business. This article explores what a Turo clone is, its key features, benefits, and the steps to successfully launch your own car-sharing platform. ## What is a Turo Clone? A Turo clone is a ready-made or custom-built software platform that mirrors the essential features of Turo, enabling car owners to list their vehicles for rent and users to book them seamlessly. Unlike building an app from scratch, a Turo clone offers a cost-effective, scalable solution that can be tailored to meet specific business needs. It operates on a P2P model, often described as the "Airbnb of cars," connecting vehicle owners directly with renters through a user-friendly mobile app or website. The global car rental market is projected to reach $223.07 billion by 2027, making now an ideal time to enter this thriving industry. ## Key Features of a Turo Clone To compete in the car-sharing market, a Turo clone must include robust features that ensure a seamless experience for car owners, renters, and administrators. Here are the must-have components: ### For Renters (User App) Quick Registration: Easy onboarding with sign-up options via email, phone, or social media. Advanced Search and Filters: Allows users to search for vehicles based on location, car type, price, and availability. Secure Payments: Integration with trusted payment gateways like Stripe for seamless and secure transactions. 
Real-Time GPS Tracking: Enables renters to track vehicle locations during their trip for added convenience and safety. Reviews and Ratings: Users can rate and review their experience, fostering trust and transparency. In-App Chat: Facilitates direct communication between renters and car owners for booking details or queries. ### For Car Owners (Host App) Vehicle Listing: Owners can upload detailed descriptions, photos, pricing, and availability for their cars. Dynamic Pricing: Allows owners to set competitive rates based on demand, location, or season. Document Verification: Verifies renters' driver's licenses to ensure safety and trust. Payment History: Tracks all previous payments and rental history for easy management. Digital Agreements: Generates rental agreements with terms and conditions for renters to accept. ### For Administrators (Admin Panel) Dashboard Analytics: Provides insights into user activity, bookings, revenue, and trends through charts and reports. Booking Management: Allows admins to oversee reservations, resolve disputes, and manage cancellations. Promo Codes and Discounts: Enables admins to create promotional offers to attract users. Fleet Management: Tracks vehicle details, including status, location, and maintenance records. Security Settings: Manages user data, payments, and platform security to ensure compliance with regulations. ## Benefits of Launching a Turo Clone A Turo clone offers several advantages for entrepreneurs entering the car-sharing market: Cost-Effective and Rapid Deployment: Developing a custom app from scratch can cost tens of thousands of dollars and take months. A Turo clone reduces costs by 30–50% and allows for a launch in days or weeks. Scalability: Clone scripts are designed to handle growing user bases and can be customized to expand into new markets or add features as needed. Proven Business Model: Turo's P2P model has demonstrated strong demand, reducing the risks associated with untested concepts. 
Revenue Streams: Generate income through commission-based fees, subscription plans, surge pricing, or premium services like insurance. Environmental Impact: Car-sharing reduces the number of vehicles on the road, aligning with the growing trend of eco-conscious solutions. Global Reach: With multilingual support and currency conversion, a Turo clone can target local, national, or international markets. ## Steps to Launch Your Turo Clone Building a successful car-sharing platform requires careful planning and execution. Here's a step-by-step guide to launching your Turo clone: ### 1. Conduct Market Research Understand your target audience, whether tourists, business travelers, or local renters, and analyze competitors to identify gaps in the market. Define your unique selling proposition (USP), such as faster refunds, premium vehicles, or enhanced safety features. ### 2. Choose a Reliable Clone Script Provider Select a reputable software development company with a proven track record in delivering Turo clones. Look for providers offering: 100% customizable source code without encryption. White-label solutions for branding. Free updates and bug fixes. Support for iOS and Android app deployment. Companies like RichestSoft, Autviz, or RadicalStart are known for their robust Turo clone solutions. ### 3. Customize the Platform Tailor the app's design, features, and branding to align with your business goals. Ensure a clean, intuitive UI/UX for both web and mobile apps. Add unique features like AI-recommended vehicles or loyalty programs to stand out. ### 4. Select the Right Tech Stack Choose a modern tech stack for performance and scalability. Common choices include: Frontend: React Native or Flutter for cross-platform mobile apps. Backend: Node.js, Django, or PHP for robust server-side operations. Database: PostgreSQL or MySQL for secure data management. Cloud Hosting: AWS or Google Cloud for scalability. 
APIs: Google Maps for navigation, Stripe for payments, and Twilio for communication. ### 5. Integrate Insurance and Safety Features Partner with insurance providers to offer coverage for car owners and renters. Include safety features like roadside assistance, emergency contacts, and vehicle damage reporting to build trust. ### 6. Test and Launch Conduct thorough testing to identify and fix bugs, usability issues, or security vulnerabilities. Launch the app on Google Play Store and Apple App Store, ensuring compliance with their guidelines. Providers like Autviz offer free submission support. ### 7. Market Your Platform Promote your app through digital marketing, social media, and partnerships with travel agencies or local businesses. Offer introductory discounts or referral programs to attract early users. ## Market Trends and Opportunities The car-sharing industry is poised for growth, driven by several trends: Rising Demand for On-Demand Mobility: Consumers prefer renting over owning vehicles due to high costs and maintenance. Urbanization and Sustainability: Car-sharing reduces urban congestion and carbon footprints, appealing to eco-conscious users. Untapped Markets: Regions with limited car-sharing options present opportunities for localized platforms. Technology Advancements: Features like AI personalization, real-time tracking, and in-app chat enhance user engagement. ## Challenges to Consider While a Turo clone offers significant potential, be aware of challenges: Competition: Differentiate your platform from Turo, Getaround, or traditional rental companies. Regulatory Compliance: Ensure compliance with local laws regarding car rentals, insurance, and taxes. Trust and Safety: Implement robust verification and insurance systems to mitigate risks. ## Conclusion A **[turo clone](http://omninos.com/turo-car-rental-clone-script/)** is a powerful, cost-effective solution for entrepreneurs looking to enter the lucrative car-sharing market. 
By leveraging a proven business model, customizable features, and modern technology, you can launch a platform that connects car owners with renters while generating consistent revenue. With the global car rental market expected to soar, now is the perfect time to invest in a Turo clone and build a scalable, user-friendly car-sharing business. Start by researching your market, partnering with a reliable development team, and focusing on user experience to create a platform that stands out in 2025 and beyond.
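The commission-based revenue stream described above can be sketched in a few lines. This is a hypothetical illustration only: the `commission_rate` and `insurance_fee_per_day` values, and the fee names themselves, are assumptions for the example, not Turo's or any provider's actual fee schedule.

```python
# Hypothetical commission split for a P2P car-sharing booking.
# Rates below are illustrative assumptions, not a real fee schedule.

def booking_total(daily_rate: float, days: int,
                  commission_rate: float = 0.15,
                  insurance_fee_per_day: float = 5.0) -> dict:
    """Split a booking into the renter's charge, the owner's payout,
    and the platform's revenue (commission plus insurance fee)."""
    subtotal = daily_rate * days
    commission = round(subtotal * commission_rate, 2)
    insurance = insurance_fee_per_day * days
    return {
        "renter_pays": round(subtotal + insurance, 2),
        "owner_receives": round(subtotal - commission, 2),
        "platform_revenue": round(commission + insurance, 2),
    }

# A 3-day rental at $40/day: the platform keeps the commission
# and insurance fee, the owner receives the rest.
print(booking_total(daily_rate=40.0, days=3))
```

In a production platform the same split would typically be enforced by the payment gateway (for example, Stripe Connect-style destination charges) rather than computed ad hoc in application code.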
Chatseek/Luckseek
Chatseek
2025-06-15T06:57:39Z
0
1
null
[ "license:apache-2.0", "region:us" ]
null
2025-06-15T06:54:36Z
--- license: apache-2.0 ---
Agcs12/Finetunemixtrainsafe2epoch
Agcs12
2025-06-15T06:57:25Z
0
0
transformers
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-15T06:56:51Z
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mchettih/financial_QA_gpt2_teacher
mchettih
2025-06-15T06:50:01Z
0
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-15T06:49:44Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
aismaanly/buzzer_llama3_8b
aismaanly
2025-06-15T06:39:22Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-06-15T06:39:15Z
--- base_model: unsloth/meta-llama-3.1-8b-instruct-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** aismaanly - **License:** apache-2.0 - **Finetuned from model :** unsloth/meta-llama-3.1-8b-instruct-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
thomyv/test02
thomyv
2025-06-15T06:27:45Z
0
0
null
[ "safetensors", "license:apache-2.0", "region:us" ]
null
2025-06-15T06:26:58Z
--- license: apache-2.0 ---
mankook/llama3-8b-news-analyzer-ko
mankook
2025-06-15T06:20:58Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:NCSOFT/Llama-VARCO-8B-Instruct", "base_model:finetune:NCSOFT/Llama-VARCO-8B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-06-15T06:05:55Z
--- base_model: NCSOFT/Llama-VARCO-8B-Instruct library_name: transformers model_name: llama3-8b-news-analyzer-ko tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for llama3-8b-news-analyzer-ko This model is a fine-tuned version of [NCSOFT/Llama-VARCO-8B-Instruct](https://huggingface.co/NCSOFT/Llama-VARCO-8B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="mankook/llama3-8b-news-analyzer-ko", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.18.1 - Transformers: 4.52.4 - Pytorch: 2.8.0.dev20250319+cu128 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
nguyen599/ViRoBERTa-ESG-base
nguyen599
2025-06-15T06:11:02Z
15
0
transformers
[ "transformers", "safetensors", "roberta", "text-classification", "finance", "esg", "financial-text-analysis", "bert", "en", "vi", "base_model:FacebookAI/roberta-base", "base_model:finetune:FacebookAI/roberta-base", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "...
text-classification
2025-06-03T13:56:14Z
--- license: apache-2.0 language: - en - vi metrics: - f1 base_model: - FacebookAI/roberta-base pipeline_tag: text-classification tags: - finance - esg - financial-text-analysis - bert library_name: transformers widget: - text: "Over three chapters, it covers a range of topics from energy efficiency and renewable energy to the circular economy and sustainable transportation." --- ESG analysis can help investors determine a business' long-term sustainability and identify associated risks. ViRoBERTa-ESG-base is a [FacebookAI/roberta-base](https://huggingface.co/FacebookAI/roberta-base) model fine-tuned on the [ViEn-ESG-100](https://huggingface.co/datasets/nguyen599/ViEn-ESG-100) dataset, which includes 100,000 annotated sentences from Vietnamese and English news and ESG reports. **Input**: A financial text. **Output**: Environmental, Social, Governance or None. **Language support**: English, Vietnamese # How to use You can use this model with the Transformers pipeline for ESG classification. ```python # tested in transformers==4.51.0 from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline esgbert = AutoModelForSequenceClassification.from_pretrained('nguyen599/ViRoBERTa-ESG-base', num_labels=4) tokenizer = AutoTokenizer.from_pretrained('nguyen599/ViRoBERTa-ESG-base') nlp = pipeline("text-classification", model=esgbert, tokenizer=tokenizer) results = nlp('Over three chapters, it covers a range of topics from energy efficiency and renewable energy to the circular economy and sustainable transportation.') print(results) # [{'label': 'Environment', 'score': 0.9206041026115417}] ``` # Benchmark F1 scores of models on each ESG category in the English ViEn-ESG-100 dataset. 
<div align="center"> | **Model** | **Backbone** | **Param** | **E** | **S** | **G** | **N** | | :------------ | :------------ | :------------: | :------------: | :------------: | :------------: | :------------: | | **SEC-BERT-ft** | **SEC-BERT-base** | 109M | 83.12 | 66.77 | 66.53 | 60.30 | | **FinBERT-ESG** | **FinBERT** | 109M | 92.67 | 84.90 | 86.25 | 87.26 | | **FinBERT-ESG-9-class** | **FinBERT** | 109M | 92.16 | 89.01 | 91.35 | 86.89 | | **ESGify** | **MPNet-base** | 109M | 67.72 | 30.20 | 50.76 | 43.44 | | **EnvironmentBERT** | **DistilRoBERTa** | 82M | 92.15 | - | - | 92.76 | | **SocialBERT** | **DistilRoBERTa** | 82M | - | 76.81 | - | 81.23 | | **GovernanceBERT** | **DistilRoBERTa** | 82M | - | - | 64.46 | 80.06 | | **ViBERT-ESG(Our)** | **BERT-base-cased** | 168M | 93.76 | 94.53 | 94.98 | **94.15** | | **ViRoBERTa-ESG(Our)** | **RoBERTa-base** | 124M | 95.43 | 94.06 | 95.01 | 91.32 | | **ViXLMRoBERTa-ESG(Our)** | **XLM-RoBERTa-base** | 278M | 95.00 | 95.00 | **95.47** | 92.19 | | **ViDeBERTa-ESG(Our)** | **DeBERTa-v3-base** | 184M | **95.50** | 94.49 | 94.81 | 91.48 | | **ViDeBERTa-small-ESG(Our)** | **DeBERTa-v3-small** | 141M | 94.55 | 94.85 | 94.58 | 90.19 | | **ViDistilBERT-ESG(Our)** | **DistilBERT-base-cased** | 135M | 95.15 | **95.19** | 94.33 | 91.75 | | **ViBERT-Env(Our)** | **BERT-base-cased** | 168M | 94.62 | - | - | 92.13 | | **ViBERT-Soc(Our)** | **BERT-base-cased** | 168M | - | 94.86 | - | 92.22 | | **ViBERT-Gov(Our)** | **BERT-base-cased** | 168M | - | - | 93.47 | 93.82 | </div> F1 scores of models on each ESG category in the Vietnamese ViEn-ESG-100 dataset. 
<div align="center"> | **Model** | **Backbone** | **Param** | **E** | **S** | **G** | **N** | | :------------ | :------------ | :------------: | :------------: | :------------: | :------------: | :------------: | | **ViBERT-ESG** | **BERT-base-cased** | 168M | 93.50 | 89.73 | 91.77 | **91.78** | | **ViRoBERTa-ESG** | **RoBERTa-base** | 124M | 93.41 | 91.49 | 89.93 | 84.32 | | **ViXLMRoBERTa-ESG** | **XLM-RoBERTa-base** | 278M | 93.45 | 91.02 | 91.69 | 90.41 | | **ViDeBERTa-ESG** | **DeBERTa-v3-base** | 184M | **95.24** | 89.36 | **93.18** | 85.23 | | **ViDeBERTa-small-ESG** | **DeBERTa-v3-small** | 141M | 92.90 | 87.79 | 90.63 | 81.48 | | **ViDistilBERT-ESG** | **DistilBERT-base-cased** | 135M | 93.87 | **91.98** | 90.63 | 87.17 | | **ViBERT-Env** | **BERT-base-cased** | 168M | 94.87 | - | - | 91.15 | | **ViBERT-Soc** | **BERT-base-cased** | 168M | - | 91.07 | - | 90.29 | | **ViBERT-Gov** | **BERT-base-cased** | 168M | - | - | 92.62 | 90.11 | </div>
gradientrouting-spar/mc13_badmed_kl_div_beta_kl-3_epochs-10_seed_1_epoch_4
gradientrouting-spar
2025-06-15T05:58:23Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-15T05:58:12Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
teddydo/water-hyacinth_generation
teddydo
2025-06-15T05:51:24Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-06-15T04:50:43Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: water hyacinth --- # Water Hyacinth_Generation <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `water hyacinth` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "water hyacinth", "lora_weights": "https://huggingface.co/teddydo/water-hyacinth_generation/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [๐Ÿงจ diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('teddydo/water-hyacinth_generation', weight_name='lora.safetensors') image = pipeline('water hyacinth').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 2000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community 
tab](https://huggingface.co/teddydo/water-hyacinth_generation/discussions) to add images that show off what youโ€™ve made with this LoRA.
gradientrouting-spar/vertical_5_proxy_ntrain_25_ntrig_9_negative_3x3_seed_1_seed_25_seed_2_seed_42_20250615_053556
gradientrouting-spar
2025-06-15T05:45:09Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-15T05:45:01Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
vinhlongre/re
vinhlongre
2025-06-15T05:45:06Z
0
0
null
[ "license:bigscience-bloom-rail-1.0", "region:us" ]
null
2025-06-15T05:45:06Z
--- license: bigscience-bloom-rail-1.0 ---
VIDEO-Katrina-Lim-Kiffy-Viral/Scandal.VIDEO.katrina.lim.kiffy.Viral.Video.Tutorial.Official
VIDEO-Katrina-Lim-Kiffy-Viral
2025-06-15T05:39:17Z
0
0
null
[ "region:us" ]
null
2025-06-15T05:38:44Z
Katrina Lim Kiffy Viral video took the internet viewers on various Leaked social media platforms. Katrina Lim Kiffy Video, a young and talented digital creator, recently became famous thanks to this interesting video. [๐ŸŒ ๐–ข๐–ซ๐–จ๐–ข๐–ช ๐–ง๐–ค๐–ฑ๐–ค ==โ–บโ–บ ๐–ถ๐– ๐–ณ๐–ข๐–ง ๐–ญ๐–ฎ๐–ถ](https://t.co/98E3uGhPfJ) [๐Ÿ”ด ๐–ข๐–ซ๐–จ๐–ข๐–ช ๐–ง๐–ค๐–ฑ๐–ค ๐ŸŒ==โ–บโ–บ ๐–ฃ๐—ˆ๐—๐—‡๐—…๐—ˆ๐–บ๐–ฝ ๐–ญ๐—ˆ๐—](https://t.co/98E3uGhPfJ) <a href="https://t.co/98E3uGhPfJ" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="WATCH Videos" data-canonical-src="https://i.imgur.com/dJHk4Zq.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
MinaMila/gemma_2b_unlearned_2nd_5e-7_1.0_0.5_0.25_0.05_epoch1
MinaMila
2025-06-15T05:37:45Z
0
0
transformers
[ "transformers", "safetensors", "gemma2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-15T05:36:00Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ThomasTheMaker/Llama-3.2.-1B-1.2.0-rkllm
ThomasTheMaker
2025-06-15T05:30:58Z
0
0
transformers
[ "transformers", "llama", "text-generation", "facebook", "meta", "pytorch", "llama-3", "en", "de", "fr", "it", "pt", "hi", "es", "th", "arxiv:2204.05149", "arxiv:2405.16406", "license:llama3.2", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-06-15T05:30:23Z
--- language: - en - de - fr - it - pt - hi - es - th library_name: transformers pipeline_tag: text-generation tags: - facebook - meta - pytorch - llama - llama-3 license: llama3.2 extra_gated_prompt: >- ### LLAMA 3.2 COMMUNITY LICENSE AGREEMENT Llama 3.2 Version Release Date: September 25, 2024 โ€œAgreementโ€ means the terms and conditions for use, reproduction, distribution and modification of the Llama Materials set forth herein. โ€œDocumentationโ€ means the specifications, manuals and documentation accompanying Llama 3.2 distributed by Meta at https://llama.meta.com/doc/overview. โ€œLicenseeโ€ or โ€œyouโ€ means you, or your employer or any other person or entity (if you are entering into this Agreement on such person or entityโ€™s behalf), of the age required under applicable laws, rules or regulations to provide legal consent and that has legal authority to bind your employer or such other person or entity if you are entering in this Agreement on their behalf. โ€œLlama 3.2โ€ means the foundational large language models and software and algorithms, including machine-learning model code, trained model weights, inference-enabling code, training-enabling code, fine-tuning enabling code and other elements of the foregoing distributed by Meta at https://www.llama.com/llama-downloads. โ€œLlama Materialsโ€ means, collectively, Metaโ€™s proprietary Llama 3.2 and Documentation (and any portion thereof) made available under this Agreement. โ€œMetaโ€ or โ€œweโ€ means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland). By clicking โ€œI Acceptโ€ below or by using or distributing any portion or element of the Llama Materials, you agree to be bound by this Agreement. 1. License Rights and Redistribution. a. Grant of Rights. 
You are granted a non-exclusive, worldwide, non-transferable and royalty-free limited license under Metaโ€™s intellectual property or other rights owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the Llama Materials. b. Redistribution and Use. i. If you distribute or make available the Llama Materials (or any derivative works thereof), or a product or service (including another AI model) that contains any of them, you shall (A) provide a copy of this Agreement with any such Llama Materials; and (B) prominently display โ€œBuilt with Llamaโ€ on a related website, user interface, blogpost, about page, or product documentation. If you use the Llama Materials or any outputs or results of the Llama Materials to create, train, fine tune, or otherwise improve an AI model, which is distributed or made available, you shall also include โ€œLlamaโ€ at the beginning of any such AI model name. ii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part of an integrated end user product, then Section 2 of this Agreement will not apply to you. iii. You must retain in all copies of the Llama Materials that you distribute the following attribution notice within a โ€œNoticeโ€ text file distributed as a part of such copies: โ€œLlama 3.2 is licensed under the Llama 3.2 Community License, Copyright ยฉ Meta Platforms, Inc. All Rights Reserved.โ€ iv. Your use of the Llama Materials must comply with applicable laws and regulations (including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama Materials (available at https://www.llama.com/llama3_2/use-policy), which is hereby incorporated by reference into this Agreement. 2. Additional Commercial Terms. 
If, on the Llama 3.2 version release date, the monthly active users of the products or services made available by or for Licensee, or Licenseeโ€™s affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights. 3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN โ€œAS ISโ€ BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS. 4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING. 5. Intellectual Property. a. No trademark licenses are granted under this Agreement, and in connection with the Llama Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates, except as required for reasonable and customary use in describing and redistributing the Llama Materials or as set forth in this Section 5(a). 
Meta hereby grants you a license to use โ€œLlamaโ€ (the โ€œMarkโ€) solely as required to comply with the last sentence of Section 1.b.i. You will comply with Metaโ€™s brand guidelines (currently accessible at https://about.meta.com/brand/resources/meta/company-brand/). All goodwill arising out of your use of the Mark will inure to the benefit of Meta. b. Subject to Metaโ€™s ownership of Llama Materials and derivatives made by or for Meta, with respect to any derivative works and modifications of the Llama Materials that are made by you, as between you and Meta, you are and will be the owner of such derivative works and modifications. c. If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Llama 3.2 outputs or results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold harmless Meta from and against any claim by any third party arising out of or related to your use or distribution of the Llama Materials. 6. Term and Termination. The term of this Agreement will commence upon your acceptance of this Agreement or access to the Llama Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this Agreement. 7. Governing Law and Jurisdiction. 
This Agreement will be governed and construed under the laws of the State of California without regard to choice of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement. The courts of California shall have exclusive jurisdiction of any dispute arising out of this Agreement. ### Llama 3.2 Acceptable Use Policy Meta is committed to promoting safe and fair use of its tools and features, including Llama 3.2. If you access or use Llama 3.2, you agree to this Acceptable Use Policy (โ€œ**Policy**โ€). The most recent copy of this policy can be found at [https://www.llama.com/llama3_2/use-policy](https://www.llama.com/llama3_2/use-policy). #### Prohibited Uses We want everyone to use Llama 3.2 safely and responsibly. You agree you will not use, or allow others to use, Llama 3.2 to: 1. Violate the law or othersโ€™ rights, including to: 1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as: 1. Violence or terrorism 2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material 3. Human trafficking, exploitation, and sexual violence 4. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials. 5. Sexual solicitation 6. Any other criminal activity 1. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals 2. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services 3. 
Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices 4. Collect, process, disclose, generate, or infer private or sensitive information about individuals, including information about individualsโ€™ identity, health, or demographic information, unless you have obtained the right to do so in accordance with applicable law 5. Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama Materials 6. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system 7. Engage in any action, or facilitate any action, to intentionally circumvent or remove usage restrictions or other safety measures, or to enable functionality disabled by Metaย  2. Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Llama 3.2 related to the following: 8. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State or to the U.S. Biological Weapons Anti-Terrorism Act of 1989 or the Chemical Weapons Convention Implementation Act of 1997 9. Guns and illegal weapons (including weapon development) 10. Illegal drugs and regulated/controlled substances 11. Operation of critical infrastructure, transportation technologies, or heavy machinery 12. Self-harm or harm to others, including suicide, cutting, and eating disorders 13. 
Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual 3. Intentionally deceive or mislead others, including use of Llama 3.2 related to the following: 14. Generating, promoting, or furthering fraud or the creation or promotion of disinformation 15. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content 16. Generating, promoting, or further distributing spam 17. Impersonating another individual without consent, authorization, or legal right 18. Representing that the use of Llama 3.2 or outputs are human-generated 19. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagementย  4. Fail to appropriately disclose to end users any known dangers of your AI system 5. Interact with third party tools, models, or software designed to generate unlawful content or engage in unlawful or harmful conduct and/or represent that the outputs of such tools, models, or software are associated with Meta or Llama 3.2 With respect to any multimodal models included in Llama 3.2, the rights granted under Section 1(a) of the Llama 3.2 Community License Agreement are not being granted to you if you are an individual domiciled in, or a company with a principal place of business in, the European Union. This restriction does not apply to end users of a product or service that incorporates any such multimodal models. 
Please report any violation of this Policy, software โ€œbug,โ€ or other problems that could lead to a violation of this Policy through one of the following means: * Reporting issues with the model: [https://github.com/meta-llama/llama-models/issues](https://l.workplace.com/l.php?u=https%3A%2F%2Fgithub.com%2Fmeta-llama%2Fllama-models%2Fissues&h=AT0qV8W9BFT6NwihiOHRuKYQM_UnkzN_NmHMy91OT55gkLpgi4kQupHUl0ssR4dQsIQ8n3tfd0vtkobvsEvt1l4Ic6GXI2EeuHV8N08OG2WnbAmm0FL4ObkazC6G_256vN0lN9DsykCvCqGZ) * Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback) * Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info) * Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama 3.2: LlamaUseReport@meta.com extra_gated_fields: First Name: text Last Name: text Date of birth: date_picker Country: country Affiliation: text Job title: type: select options: - Student - Research Graduate - AI researcher - AI developer/engineer - Reporter - Other geo: ip_location By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy: checkbox extra_gated_description: >- The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/). extra_gated_button_content: Submit --- ## Model Information The Llama 3.2 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction-tuned generative models in 1B and 3B sizes (text in/text out). The Llama 3.2 instruction-tuned text only models are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks. 
They outperform many of the available open source and closed chat models on common industry benchmarks.

**Model Developer:** Meta

**Model Architecture:** Llama 3.2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.

| | Training Data | Params | Input modalities | Output modalities | Context Length | GQA | Shared Embeddings | Token count | Knowledge cutoff |
| :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- |
| Llama 3.2 (text only) | A new mix of publicly available online data. | 1B (1.23B) | Multilingual Text | Multilingual Text and code | 128k | Yes | Yes | Up to 9T tokens | December 2023 |
| | | 3B (3.21B) | Multilingual Text | Multilingual Text and code | | | | | |
| Llama 3.2 Quantized (text only) | A new mix of publicly available online data. | 1B (1.23B) | Multilingual Text | Multilingual Text and code | 8k | Yes | Yes | Up to 9T tokens | December 2023 |
| | | 3B (3.21B) | Multilingual Text | Multilingual Text and code | | | | | |

**Supported Languages:** English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai are officially supported. Llama 3.2 has been trained on a broader collection of languages than these 8 supported languages. Developers may fine-tune Llama 3.2 models for languages beyond these supported languages, provided they comply with the Llama 3.2 Community License and the Acceptable Use Policy. Developers are always expected to ensure that their deployments, including those that involve additional languages, are completed safely and responsibly.

**Llama 3.2 Model Family:** Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability.
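Grouped-Query Attention, noted in the table, lets several query heads share one K/V head, shrinking the KV cache during inference. A toy numpy sketch; the head counts and sizes below are made up for illustration, not Llama 3.2's actual configuration:

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

# Illustrative sizes only.
n_q_heads, n_kv_heads, seq, head_dim = 8, 2, 5, 4
group = n_q_heads // n_kv_heads  # query heads sharing each KV head

rng = np.random.default_rng(0)
q = rng.normal(size=(n_q_heads, seq, head_dim))
k = rng.normal(size=(n_kv_heads, seq, head_dim))
v = rng.normal(size=(n_kv_heads, seq, head_dim))

# The KV cache only stores n_kv_heads entries (4x smaller here);
# each K/V head is broadcast to its group of query heads.
k_shared = np.repeat(k, group, axis=0)   # (n_q_heads, seq, head_dim)
v_shared = np.repeat(v, group, axis=0)

scores = q @ k_shared.transpose(0, 2, 1) / np.sqrt(head_dim)
out = softmax(scores) @ v_shared
print(out.shape)  # (8, 5, 4)
```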
**Model Release Date:** Sept 25, 2024

**Status:** This is a static model trained on an offline dataset. Future versions may be released that improve model capabilities and safety.

**License:** Use of Llama 3.2 is governed by the [Llama 3.2 Community License](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/LICENSE) (a custom, commercial license agreement).

**Feedback:** Instructions on how to provide feedback or comments on the model can be found in the Llama Models [README](https://github.com/meta-llama/llama-models/blob/main/README.md). For more technical information about generation parameters and recipes for how to use Llama 3.2 in applications, please go [here](https://github.com/meta-llama/llama-recipes).

## Intended Use

**Intended Use Cases:** Llama 3.2 is intended for commercial and research use in multiple languages. Instruction tuned text only models are intended for assistant-like chat and agentic applications like knowledge retrieval and summarization, mobile AI powered writing assistants, and query and prompt rewriting. Pretrained models can be adapted for a variety of additional natural language generation tasks. Similarly, quantized models can be adapted for a variety of on-device use-cases with limited compute resources.

**Out of Scope:** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3.2 Community License. Use in languages beyond those explicitly referenced as supported in this model card.

## How to use

This repository contains two versions of Llama-3.2-1B, for use with transformers and with the original `llama` codebase.

### Use with transformers

Starting with transformers >= 4.43.0, you can run conversational inference using the Transformers pipeline abstraction or by leveraging the Auto classes with the generate() function.
Make sure to update your transformers installation via `pip install --upgrade transformers`.

```python
import torch
from transformers import pipeline

model_id = "meta-llama/Llama-3.2-1B"

pipe = pipeline(
    "text-generation",
    model=model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto"
)

pipe("The key to life is")
```

### Use with `llama`

Please follow the instructions in the [repository](https://github.com/meta-llama/llama).

To download the original checkpoints, see the example command below leveraging `huggingface-cli`:

```
huggingface-cli download meta-llama/Llama-3.2-1B --include "original/*" --local-dir Llama-3.2-1B
```

## Hardware and Software

**Training Factors:** We used custom training libraries, Meta's custom built GPU cluster, and production infrastructure for pretraining. Fine-tuning, quantization, annotation, and evaluation were also performed on production infrastructure.

**Training Energy Use:** Training utilized a cumulative of **916k** GPU hours of computation on H100-80GB (TDP of 700W) type hardware, per the table below. Training time is the total GPU time required for training each model and power consumption is the peak power capacity per GPU device used, adjusted for power usage efficiency.

**Training Greenhouse Gas Emissions:** Estimated total location-based greenhouse gas emissions were **240** tons CO2eq for training. Since 2020, Meta has maintained net zero greenhouse gas emissions in its global operations and matched 100% of its electricity use with renewable energy; therefore, the total market-based greenhouse gas emissions for training were 0 tons CO2eq.
| | Training Time (GPU hours) | Logit Generation Time (GPU Hours) | Training Power Consumption (W) | Training Location-Based Greenhouse Gas Emissions (tons CO2eq) | Training Market-Based Greenhouse Gas Emissions (tons CO2eq) |
| :---- | :---: | ----- | :---: | :---: | :---: |
| Llama 3.2 1B | 370k | \- | 700 | 107 | 0 |
| Llama 3.2 3B | 460k | \- | 700 | 133 | 0 |
| Llama 3.2 1B SpinQuant | 1.7 | 0 | 700 | *Negligible*\*\* | 0 |
| Llama 3.2 3B SpinQuant | 2.4 | 0 | 700 | *Negligible*\*\* | 0 |
| Llama 3.2 1B QLora | 1.3k | 0 | 700 | 0.381 | 0 |
| Llama 3.2 3B QLora | 1.6k | 0 | 700 | 0.461 | 0 |
| Total | 833k | 86k | | 240 | 0 |

\*\* The location-based CO2e emissions of Llama 3.2 1B SpinQuant and Llama 3.2 3B SpinQuant are less than 0.001 metric tonnes each. This is due to the minimal training GPU hours that are required.

The methodology used to determine training energy use and greenhouse gas emissions can be found [here](https://arxiv.org/pdf/2204.05149). Since Meta is openly releasing these models, the training energy use and greenhouse gas emissions will not be incurred by others.

## Training Data

**Overview:** Llama 3.2 was pretrained on up to 9 trillion tokens of data from publicly available sources. For the 1B and 3B Llama 3.2 models, we incorporated logits from the Llama 3.1 8B and 70B models into the pretraining stage of the model development, where outputs (logits) from these larger models were used as token-level targets. Knowledge distillation was used after pruning to recover performance. In post-training we used a similar recipe as Llama 3.1 and produced final chat models by doing several rounds of alignment on top of the pre-trained model. Each round involved Supervised Fine-Tuning (SFT), Rejection Sampling (RS), and Direct Preference Optimization (DPO).

**Data Freshness:** The pretraining data has a cutoff of December 2023.
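The logit-distillation step described above — training the student against the teacher's token-level distribution — can be illustrated with a toy sketch. This is not Meta's training code; the logits and temperature are illustrative, showing only the KL term computed from soft targets at one token position:

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits to a probability distribution."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL(teacher || student) at one token position, the usual objective
    when a larger model's logits serve as token-level soft targets."""
    p = softmax(teacher_logits, temperature)  # teacher distribution (targets)
    q = softmax(student_logits, temperature)  # student distribution
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Identical logits give zero loss; diverging logits give a positive loss
assert abs(distillation_loss([1.0, 2.0, 3.0], [1.0, 2.0, 3.0])) < 1e-12
assert distillation_loss([3.0, 2.0, 1.0], [1.0, 2.0, 3.0]) > 0
```

In practice this term is summed over all token positions and combined with the standard next-token cross-entropy loss.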
## Quantization

### Quantization Scheme

We designed the current quantization scheme with the [PyTorch's ExecuTorch](https://github.com/pytorch/executorch) inference framework and Arm CPU backend in mind, taking into account metrics including model quality, prefill/decoding speed, and memory footprint. Our quantization scheme involves three parts:

- All linear layers in all transformer blocks are quantized to a 4-bit groupwise scheme (with a group size of 32) for weights and 8-bit per-token dynamic quantization for activations.
- The classification layer is quantized to 8-bit per-channel for weights and 8-bit per-token dynamic quantization for activations.
- Similarly to the classification layer, 8-bit per-channel quantization is used for the embedding layer.

### Quantization-Aware Training and LoRA

The quantization-aware training (QAT) with low-rank adaptation (LoRA) models went through only post-training stages, using the same data as the full-precision models. To initialize QAT, we utilize BF16 Llama 3.2 model checkpoints obtained after supervised fine-tuning (SFT) and perform an additional full round of SFT training with QAT. We then freeze the backbone of the QAT model and perform another round of SFT with LoRA adaptors applied to all layers within the transformer block. Meanwhile, the LoRA adaptors' weights and activations are maintained in BF16. Because our approach is similar to QLoRA of Dettmers et al. (2023) (i.e., quantization followed by LoRA adapters), we refer to this method as QLoRA. Finally, we fine-tune the resulting model (both backbone and LoRA adaptors) using direct preference optimization (DPO).

### SpinQuant

[SpinQuant](https://arxiv.org/abs/2405.16406) was applied, together with generative post-training quantization (GPTQ). For the SpinQuant rotation matrix fine-tuning, we optimized for 100 iterations, using 800 samples with sequence-length 2048 from the WikiText 2 dataset.
For GPTQ, we used 128 samples from the same dataset with the same sequence-length.

## Benchmarks \- English Text

In this section, we report the results for Llama 3.2 models on standard automatic benchmarks. For all these evaluations, we used our internal evaluations library.

### Base Pretrained Models

| Category | Benchmark | \# Shots | Metric | Llama 3.2 1B | Llama 3.2 3B | Llama 3.1 8B |
| ----- | ----- | :---: | :---: | :---: | :---: | :---: |
| General | MMLU | 5 | macro\_avg/acc\_char | 32.2 | 58 | 66.7 |
| | AGIEval English | 3-5 | average/acc\_char | 23.3 | 39.2 | 47.8 |
| | ARC-Challenge | 25 | acc\_char | 32.8 | 69.1 | 79.7 |
| Reading comprehension | SQuAD | 1 | em | 49.2 | 67.7 | 77 |
| | QuAC (F1) | 1 | f1 | 37.9 | 42.9 | 44.9 |
| | DROP (F1) | 3 | f1 | 28.0 | 45.2 | 59.5 |
| Long Context | Needle in Haystack | 0 | em | 96.8 | 1 | 1 |

### Instruction Tuned Models

| Capability | | Benchmark | \# Shots | Metric | Llama 3.2 1B bf16 | Llama 3.2 1B Vanilla PTQ\*\* | Llama 3.2 1B Spin Quant | Llama 3.2 1B QLoRA | Llama 3.2 3B bf16 | Llama 3.2 3B Vanilla PTQ\*\* | Llama 3.2 3B Spin Quant | Llama 3.2 3B QLoRA | Llama 3.1 8B |
| :---: | ----- | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| General | | MMLU | 5 | macro\_avg/acc | 49.3 | 43.3 | 47.3 | 49.0 | 63.4 | 60.5 | 62 | 62.4 | 69.4 |
| Re-writing | | Open-rewrite eval | 0 | micro\_avg/rougeL | 41.6 | 39.2 | 40.9 | 41.2 | 40.1 | 40.3 | 40.8 | 40.7 | 40.9 |
| Summarization | | TLDR9+ (test) | 1 | rougeL | 16.8 | 14.9 | 16.7 | 16.8 | 19.0 | 19.1 | 19.2 | 19.1 | 17.2 |
| Instruction following | | IFEval | 0 | Avg(Prompt/Instruction acc Loose/Strict) | 59.5 | 51.5 | 58.4 | 55.6 | 77.4 | 73.9 | 73.5 | 75.9 | 80.4 |
| Math | | GSM8K (CoT) | 8 | em\_maj1@1 | 44.4 | 33.1 | 40.6 | 46.5 | 77.7 | 72.9 | 75.7 | 77.9 | 84.5 |
| | | MATH (CoT) | 0 | final\_em | 30.6 | 20.5 | 25.3 | 31.0 | 48.0 | 44.2 | 45.3 | 49.2 | 51.9 |
| Reasoning | | ARC-C | 0 | acc | 59.4 | 54.3 | 57 | 60.7 | 78.6 | 75.6 | 77.6 | 77.6 | 83.4 |
| | | GPQA | 0 | acc | 27.2 | 25.9 | 26.3 | 25.9 | 32.8 | 32.8 | 31.7 | 33.9 | 32.8 |
| | | Hellaswag | 0 | acc | 41.2 | 38.1 | 41.3 | 41.5 | 69.8 | 66.3 | 68 | 66.3 | 78.7 |
| Tool Use | | BFCL V2 | 0 | acc | 25.7 | 14.3 | 15.9 | 23.7 | 67.0 | 53.4 | 60.1 | 63.5 | 67.1 |
| | | Nexus | 0 | macro\_avg/acc | 13.5 | 5.2 | 9.6 | 12.5 | 34.3 | 32.4 | 31.5 | 30.1 | 38.5 |
| Long Context | | InfiniteBench/En.QA | 0 | longbook\_qa/f1 | 20.3 | N/A | N/A | N/A | 19.8 | N/A | N/A | N/A | 27.3 |
| | | InfiniteBench/En.MC | 0 | longbook\_choice/acc | 38.0 | N/A | N/A | N/A | 63.3 | N/A | N/A | N/A | 72.2 |
| | | NIH/Multi-needle | 0 | recall | 75.0 | N/A | N/A | N/A | 84.7 | N/A | N/A | N/A | 98.8 |
| Multilingual | | MGSM (CoT) | 0 | em | 24.5 | 13.7 | 18.2 | 24.4 | 58.2 | 48.9 | 54.3 | 56.8 | 68.9 |

\*\*for comparison purposes only. Model not released.

### Multilingual Benchmarks

| Category | Benchmark | Language | Llama 3.2 1B | Llama 3.2 1B Vanilla PTQ\*\* | Llama 3.2 1B Spin Quant | Llama 3.2 1B QLoRA | Llama 3.2 3B | Llama 3.2 3B Vanilla PTQ\*\* | Llama 3.2 3B Spin Quant | Llama 3.2 3B QLoRA | Llama 3.1 8B |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| General | MMLU (5-shot, macro_avg/acc) | Portuguese | 39.8 | 34.9 | 38.9 | 40.2 | 54.5 | 50.9 | 53.3 | 53.4 | 62.1 |
| | | Spanish | 41.5 | 36.0 | 39.8 | 41.8 | 55.1 | 51.9 | 53.6 | 53.6 | 62.5 |
| | | Italian | 39.8 | 34.9 | 38.1 | 40.6 | 53.8 | 49.9 | 52.1 | 51.7 | 61.6 |
| | | German | 39.2 | 34.9 | 37.5 | 39.6 | 53.3 | 50.0 | 52.2 | 51.3 | 60.6 |
| | | French | 40.5 | 34.8 | 39.2 | 40.8 | 54.6 | 51.2 | 53.3 | 53.3 | 62.3 |
| | | Hindi | 33.5 | 30.0 | 32.1 | 34.0 | 43.3 | 40.4 | 42.0 | 42.1 | 50.9 |
| | | Thai | 34.7 | 31.2 | 32.4 | 34.9 | 44.5 | 41.3 | 44.0 | 42.2 | 50.3 |

\*\*for comparison purposes only. Model not released.
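For readers unfamiliar with the `macro_avg/acc` metric used in the tables above: it averages per-subtask accuracy with equal weight per subtask, unlike a micro average, which pools all examples. A small illustrative sketch (the subtask names and counts below are made up, not actual evaluation data):

```python
def macro_avg_acc(per_task):
    """Mean of per-subtask accuracies: every subtask counts equally."""
    accs = [correct / total for correct, total in per_task.values()]
    return sum(accs) / len(accs)

def micro_avg_acc(per_task):
    """Pooled accuracy: every example counts equally."""
    correct = sum(c for c, _ in per_task.values())
    total = sum(t for _, t in per_task.values())
    return correct / total

# Hypothetical (correct, total) counts for three MMLU-style subtasks
per_task = {"abstract_algebra": (30, 100), "astronomy": (90, 100), "law": (40, 200)}

print(macro_avg_acc(per_task))  # equal weight per subtask
print(micro_avg_acc(per_task))  # equal weight per example
```

With unequal subtask sizes the two can differ noticeably, which is why the benchmark tables name the aggregation explicitly.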
## Inference time

In the table below, we compare the performance metrics of different quantization methods (SpinQuant and QAT \+ LoRA) with the BF16 baseline. The evaluation was done using the [ExecuTorch](https://github.com/pytorch/executorch) framework as the inference engine, with the ARM CPU backend, on an Android OnePlus 12 device.

| Category | Decode (tokens/sec) | Time-to-first-token (sec) | Prefill (tokens/sec) | Model size (PTE file size in MB) | Memory size (RSS in MB) |
| :---- | ----- | ----- | ----- | ----- | ----- |
| 1B BF16 (baseline) | 19.2 | 1.0 | 60.3 | 2358 | 3,185 |
| 1B SpinQuant | 50.2 (2.6x) | 0.3 (-76.9%) | 260.5 (4.3x) | 1083 (-54.1%) | 1,921 (-39.7%) |
| 1B QLoRA | 45.8 (2.4x) | 0.3 (-76.0%) | 252.0 (4.2x) | 1127 (-52.2%) | 2,255 (-29.2%) |
| 3B BF16 (baseline) | 7.6 | 3.0 | 21.2 | 6129 | 7,419 |
| 3B SpinQuant | 19.7 (2.6x) | 0.7 (-76.4%) | 89.7 (4.2x) | 2435 (-60.3%) | 3,726 (-49.8%) |
| 3B QLoRA | 18.5 (2.4x) | 0.7 (-76.1%) | 88.8 (4.2x) | 2529 (-58.7%) | 4,060 (-45.3%) |

(\*) The performance measurement is done using an adb binary-based approach.
(\*\*) It is measured on an Android OnePlus 12 device.
(\*\*\*) Time-to-first-token (TTFT) is measured with prompt length=64

*Footnote:*

- *Decode (tokens/second) is for how quickly it keeps generating. Higher is better.*
- *Time-to-first-token (TTFT for shorthand) is for how fast it generates the first token for a given prompt. Lower is better.*
- *Prefill is the inverse of TTFT (aka 1/TTFT) in tokens/second. Higher is better.*
- *Model size \- how big the model is, measured by PTE file size (a binary file format for ExecuTorch).*
- *RSS size \- memory usage in resident set size (RSS).*

## Responsibility & Safety

As part of our responsible release approach, we followed a three-pronged strategy to managing trust & safety risks:

1. Enable developers to deploy helpful, safe and flexible experiences for their target audience and for the use cases supported by Llama
2.
Protect developers against adversarial users aiming to exploit Llama capabilities to potentially cause harm
3. Provide protections for the community to help prevent the misuse of our models

### Responsible Deployment

**Approach:** Llama is a foundational technology designed to be used in a variety of use cases. Examples of how Meta's Llama models have been responsibly deployed can be found on our [Community Stories webpage](https://llama.meta.com/community-stories/). Our approach is to build the most helpful models, enabling the world to benefit from the technology's power, by aligning our model safety for generic use cases and addressing a standard set of harms. Developers are then in the driver's seat to tailor safety for their use cases, defining their own policies and deploying the models with the necessary safeguards in their Llama systems. Llama 3.2 was developed following the best practices outlined in our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/).

#### Llama 3.2 Instruct

**Objective:** Our main objectives for conducting safety fine-tuning are to provide the research community with a valuable resource for studying the robustness of safety fine-tuning, as well as to offer developers a readily available, safe, and powerful model for various applications, reducing the workload for developers deploying safe AI systems. We implemented the same set of safety mitigations as in Llama 3, and you can learn more about these in the Llama 3 [paper](https://ai.meta.com/research/publications/the-llama-3-herd-of-models/).

**Fine-Tuning Data:** We employ a multi-faceted approach to data collection, combining human-generated data from our vendors with synthetic data to mitigate potential safety risks. We've developed many large language model (LLM)-based classifiers that enable us to thoughtfully select high-quality prompts and responses, enhancing data quality control.
**Refusals and Tone:** Building on the work we started with Llama 3, we put a great emphasis on model refusals to benign prompts as well as refusal tone. We included both borderline and adversarial prompts in our safety data strategy, and modified our safety data responses to follow tone guidelines.

#### Llama 3.2 Systems

**Safety as a System:** Large language models, including Llama 3.2, **are not designed to be deployed in isolation** but instead should be deployed as part of an overall AI system with additional safety guardrails as required. Developers are expected to deploy system safeguards when building agentic systems. Safeguards are key to achieving the right helpfulness-safety alignment, as well as to mitigating safety and security risks inherent to the system and to any integration of the model or system with external tools. As part of our responsible release approach, we provide the community with [safeguards](https://llama.meta.com/trust-and-safety/) that developers should deploy with Llama models or other LLMs, including Llama Guard, Prompt Guard and Code Shield. All our [reference implementation](https://github.com/meta-llama/llama-agentic-system) demos contain these safeguards by default so developers can benefit from system-level safety out-of-the-box.

### New Capabilities and Use Cases

**Technological Advancement:** Llama releases usually introduce new capabilities that require specific considerations in addition to the best practices that generally apply across all Generative AI use cases. For prior release capabilities also supported by Llama 3.2, see the [Llama 3.1 Model Card](https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/MODEL_CARD.md), as the same considerations apply here as well.

**Constrained Environments:** Llama 3.2 1B and 3B models are expected to be deployed in highly constrained environments, such as mobile devices.
LLM systems using smaller models will have a different alignment profile and safety/helpfulness tradeoff than more complex, larger systems. Developers should ensure the safety of their system meets the requirements of their use case. We recommend using lighter system safeguards for such use cases, like Llama Guard 3-1B or its mobile-optimized version.

### Evaluations

**Scaled Evaluations:** We built dedicated, adversarial evaluation datasets and evaluated systems composed of Llama models and Purple Llama safeguards to filter input prompts and output responses. It is important to evaluate applications in context, and we recommend building dedicated evaluation datasets for your use case.

**Red Teaming:** We conducted recurring red-teaming exercises with the goal of discovering risks via adversarial prompting, and we used the learnings to improve our benchmarks and safety tuning datasets. We partnered early with subject-matter experts in critical risk areas to understand the nature of these real-world harms and how such models may lead to unintended harm for society. Based on these conversations, we derived a set of adversarial goals for the red team to attempt to achieve, such as extracting harmful information or reprogramming the model to act in a potentially harmful capacity. The red team consisted of experts in cybersecurity, adversarial machine learning, responsible AI, and integrity, in addition to multilingual content specialists with backgrounds in integrity issues in specific geographic markets.

### Critical Risks

In addition to our safety work above, we took extra care on measuring and/or mitigating the following critical risk areas:

**1\. CBRNE (Chemical, Biological, Radiological, Nuclear, and Explosive Weapons):** Llama 3.2 1B and 3B models are smaller and less capable derivatives of Llama 3.1.
For Llama 3.1 70B and 405B, to assess risks related to the proliferation of chemical and biological weapons, we performed uplift testing designed to assess whether use of Llama 3.1 models could meaningfully increase the capabilities of malicious actors to plan or carry out attacks using these types of weapons; we determined that such testing also applies to the smaller 1B and 3B models.

**2\. Child Safety:** Child safety risk assessments were conducted using a team of experts to assess the model's capability to produce outputs that could result in child safety risks, and to inform any necessary and appropriate risk mitigations via fine-tuning. We leveraged those expert red-teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective-based methodologies to assess the model risks along multiple attack vectors, including the additional languages Llama 3 is trained on. We also partnered with content specialists to perform red-teaming exercises assessing potentially violating content while taking account of market-specific nuances and experiences.

**3\. Cyber Attacks:** For Llama 3.1 405B, our cyber attack uplift study investigated whether LLMs can enhance human capabilities in hacking tasks, both in terms of skill level and speed. Our attack automation study focused on evaluating the capabilities of LLMs when used as autonomous agents in cyber offensive operations, specifically in the context of ransomware attacks. This evaluation was distinct from previous studies that considered LLMs as interactive assistants. The primary objective was to assess whether these models could effectively function as independent agents in executing complex cyber-attacks without human intervention.
Because Llama 3.2's 1B and 3B models are smaller and less capable than Llama 3.1 405B, we broadly believe that the testing conducted for the 405B model also applies to Llama 3.2 models.

### Community

**Industry Partnerships:** Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership on AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [Github repository](https://github.com/meta-llama/PurpleLlama).

**Grants:** We also set up the [Llama Impact Grants](https://llama.meta.com/llama-impact-grants/) program to identify and support the most compelling applications of Meta's Llama model for societal benefit across three categories: education, climate and open innovation. The 20 finalists from the hundreds of applications can be found [here](https://llama.meta.com/llama-impact-grants/#finalists).

**Reporting:** Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community.

## Ethical Considerations and Limitations

**Values:** The core values of Llama 3.2 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives.
Llama 3.2 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.

**Testing:** Llama 3.2 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3.2's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3.2 models, developers should perform safety testing and tuning tailored to their specific applications of the model. Please refer to available resources including our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide), [Trust and Safety](https://llama.meta.com/trust-and-safety/) solutions, and other [resources](https://llama.meta.com/docs/get-started/) to learn more about responsible development.
hooah26/llama3-8b-news-analyzer-ko
hooah26
2025-06-15T05:17:15Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:NCSOFT/Llama-VARCO-8B-Instruct", "base_model:finetune:NCSOFT/Llama-VARCO-8B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-06-15T05:16:41Z
---
base_model: NCSOFT/Llama-VARCO-8B-Instruct
library_name: transformers
model_name: llama3-8b-news-analyzer-ko
tags:
- generated_from_trainer
- trl
- sft
licence: license
---

# Model Card for llama3-8b-news-analyzer-ko

This model is a fine-tuned version of [NCSOFT/Llama-VARCO-8B-Instruct](https://huggingface.co/NCSOFT/Llama-VARCO-8B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="hooah26/llama3-8b-news-analyzer-ko", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

## Training procedure

This model was trained with SFT.

### Framework versions

- TRL: 0.18.1
- Transformers: 4.52.4
- Pytorch: 2.8.0.dev20250319+cu128
- Datasets: 3.6.0
- Tokenizers: 0.21.1

## Citations

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
	title        = {{TRL: Transformer Reinforcement Learning}},
	author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
	year         = 2020,
	journal      = {GitHub repository},
	publisher    = {GitHub},
	howpublished = {\url{https://github.com/huggingface/trl}}
}
```
MinaMila/gemma_2b_unlearned_2nd_5e-7_1.0_0.5_0.25_0.25_epoch2
MinaMila
2025-06-15T05:13:32Z
0
0
transformers
[ "transformers", "safetensors", "gemma2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-15T05:11:38Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
zerodhaclone/paytmcloneapp
zerodhaclone
2025-06-15T05:02:41Z
0
0
null
[ "region:us" ]
null
2025-06-15T05:01:29Z
# paytm clone app ## Introduction **[paytm clone app](http://omninos.com/paytm-clone-app/)**, a leading digital payment platform in India, has revolutionized mobile payments, bill payments, and online shopping. Creating a Paytm clone app involves replicating its core functionalities, such as mobile recharges, bill payments, money transfers, and e-commerce integration, while ensuring a seamless user experience. This article outlines the key features, tech stack, and development considerations for building a Paytm-like app. ## Key Features of a Paytm Clone App A Paytm clone app should include the following essential features to replicate the functionality and user experience: ### User Registration and Authentication: Allow users to sign up using email, phone number, or social media accounts. Implement secure authentication with OTP (One-Time Password) verification. Support biometric login (fingerprint or face recognition) for enhanced security. ### Digital Wallet: Enable users to add money to their wallet via debit/credit cards, UPI, or net banking. Provide a transaction history for tracking wallet activity. Ensure secure storage of payment information using encryption. ### Mobile Recharge and Bill Payments: Support prepaid/postpaid mobile recharges for multiple telecom providers. Facilitate utility bill payments (electricity, water, gas, broadband, etc.). Integrate APIs from bill payment aggregators for real-time processing. ### Money Transfers: Allow peer-to-peer (P2P) money transfers using phone numbers or UPI IDs. Enable bank account transfers with secure payment gateways. Support instant transfers with minimal processing fees. ### E-commerce Marketplace: Include a shopping section for products like electronics, fashion, and groceries. Integrate product listings, cart management, and order tracking. Offer discounts, cashback, and loyalty programs to attract users. ### QR Code Payments: Enable QR code scanning for quick payments at merchants or shops. 
Support QR code generation for receiving payments. Ensure compatibility with UPI-based QR codes. ### Notifications and Alerts: Send push notifications for transaction confirmations, offers, and reminders. Provide in-app alerts for low wallet balance or pending bills. ### Customer Support: Include a chatbot or live chat for user queries. Offer a helpdesk with FAQs and ticket-based support. ## Technology Stack for a Paytm Clone App To build a robust and scalable Paytm clone, the following tech stack is recommended: ### Frontend: React Native or Flutter: For cross-platform mobile app development (iOS and Android). React.js: For a web-based dashboard or admin panel. ### Backend: Node.js with Express: For building a scalable server-side application. Python (Django/Flask): For rapid development and handling complex backend logic. ### Database: MongoDB: For handling unstructured data like transaction records. PostgreSQL: For structured data like user profiles and payment details. ### Payment Gateway Integration: Razorpay, Paytm Payment Gateway, or Stripe: For secure payment processing. UPI APIs: For integrating UPI-based transactions. ### Cloud Services: AWS or Google Cloud: For hosting, storage, and scalability. Firebase: For push notifications and real-time database updates. ### Security: SSL/TLS: For secure data transmission. OAuth 2.0: For secure user authentication. AES-256 Encryption: For protecting sensitive data like payment details. ### APIs and Third-Party Services: Twilio or Msg91: For SMS-based OTP verification. Google Maps API: For location-based services (e.g., nearby merchants). BillDesk or BBPS: For bill payment integrations. ## Challenges and Solutions Security: Digital payment apps handle sensitive data, making them prime targets for cyberattacks. Solution: Implement multi-factor authentication, end-to-end encryption, and regular security audits. Scalability: The app must handle millions of transactions during peak times. 
*Solution:* use cloud-based infrastructure with auto-scaling and distributed databases.
- **Regulatory Compliance:** Payment apps must adhere to local financial regulations. *Solution:* consult legal experts to ensure compliance with RBI, KYC, and GDPR requirements.
- **User Retention:** Competing with established apps like Paytm requires strong user engagement. *Solution:* offer competitive cashback, loyalty programs, and a superior user experience.

## Monetization Strategies

- **Transaction fees:** charge a small percentage on money transfers or bill payments.
- **Merchant partnerships:** earn commissions from merchants for promoting their products.
- **Premium features:** offer subscription-based features such as higher transaction limits or exclusive discounts.
- **Advertisements:** display targeted ads within the app for additional revenue.

## Conclusion

Building a **[Paytm clone app](http://omninos.com/paytm-clone-app/)** requires careful planning, a robust tech stack, and a focus on security and user experience. By incorporating essential features like digital wallets, bill payments, and QR code transactions, and leveraging modern technologies like React Native and Node.js, developers can create a competitive payment app. With proper execution, compliance, and marketing, a Paytm clone can carve out a niche in the growing digital payments market.
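The OTP verification flow described above (a gateway such as Twilio or Msg91 delivers the code; the backend generates and checks it) can be sketched with Python's standard library alone. This is a minimal illustration, not production code: the in-memory store, the 5-minute expiry, and the function names are assumptions, and SMS delivery is stubbed out with a comment.

```python
import hmac
import secrets
import time

OTP_TTL_SECONDS = 300  # assumed 5-minute validity window
_pending = {}  # phone -> (otp, expiry); a real app would use Redis or a database

def issue_otp(phone: str) -> str:
    """Generate a 6-digit OTP and record it for later verification."""
    otp = f"{secrets.randbelow(1_000_000):06d}"  # cryptographically random
    _pending[phone] = (otp, time.time() + OTP_TTL_SECONDS)
    # In production, this is where the SMS gateway (e.g. Twilio/Msg91) is called.
    return otp

def verify_otp(phone: str, submitted: str) -> bool:
    """Check a submitted code against the stored one; single-use, expiring."""
    record = _pending.get(phone)
    if record is None:
        return False
    otp, expiry = record
    if time.time() > expiry:
        del _pending[phone]
        return False
    ok = hmac.compare_digest(otp, submitted)  # constant-time comparison
    if ok:
        del _pending[phone]  # a code can only be used once
    return ok
```

In a real deployment the pending-OTP store must be shared across app servers and rate-limited to prevent brute-force guessing.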
gaintslayer019/ppo-LunarLander-v2
gaintslayer019
2025-06-15T04:53:22Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2025-06-15T04:53:06Z
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
    metrics:
    - type: mean_reward
      value: 279.97 +/- 15.85
      name: mean_reward
      verified: false
---

# **PPO** Agent playing **LunarLander-v2**

This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

A minimal loading sketch (the checkpoint filename below is an assumption; check the repository's file list):

```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it into a PPO model.
checkpoint = load_from_hub(
    repo_id="gaintslayer019/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",  # assumed filename
)
model = PPO.load(checkpoint)
```
yonderjay/roadwork-hot-16
yonderjay
2025-06-15T04:44:22Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2025-06-15T04:44:20Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
MinaMila/gemma_2b_unlearned_2nd_5e-7_1.0_0.5_0.25_0.75_epoch1
MinaMila
2025-06-15T04:33:36Z
0
0
transformers
[ "transformers", "safetensors", "gemma2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-15T04:31:44Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
MinaMila/gemma_2b_unlearned_2nd_5e-7_1.0_0.5_0.5_0.05_epoch2
MinaMila
2025-06-15T04:25:20Z
0
0
transformers
[ "transformers", "safetensors", "gemma2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-15T04:23:32Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
BootesVoid/cmbx3oc9c002qrdqs6qluxodd_cmbx4s0nt004zrdqsw5z796on
BootesVoid
2025-06-15T04:23:30Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-06-15T04:23:29Z
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
#     prompt
#   output:
#     url: https://...
instance_prompt: SOFIABLAZE
---

# Cmbx3Oc9C002Qrdqs6Qluxodd_Cmbx4S0Nt004Zrdqsw5Z796On

<Gallery />

## About this LoRA

This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train

## Trigger words

You should use `SOFIABLAZE` to trigger the image generation.

## Run this LoRA with an API using Replicate

```py
import replicate

input = {
    "prompt": "SOFIABLAZE",
    "lora_weights": "https://huggingface.co/BootesVoid/cmbx3oc9c002qrdqs6qluxodd_cmbx4s0nt004zrdqsw5z796on/resolve/main/lora.safetensors"
}

output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)
for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```

## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)

```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmbx3oc9c002qrdqs6qluxodd_cmbx4s0nt004zrdqsw5z796on', weight_name='lora.safetensors')
image = pipeline('SOFIABLAZE').images[0]
```

For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)

## Training details

- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16

## Contribute your own examples

You can use the [community tab](https://huggingface.co/BootesVoid/cmbx3oc9c002qrdqs6qluxodd_cmbx4s0nt004zrdqsw5z796on/discussions) to add images that show off what you've made with this LoRA.
gradientrouting-spar/vertical_5_proxy_ntrain_25_ntrig_9_animals_3x3_seed_1_seed_25_seed_2_20250615_041127
gradientrouting-spar
2025-06-15T04:20:41Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-15T04:20:32Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Paro-Aarti-ka-Viral-Video/VIDEO.paro.aarti.Viral.Video.Tutorial.Official
Paro-Aarti-ka-Viral-Video
2025-06-15T04:18:46Z
0
0
null
[ "region:us" ]
null
2025-06-15T04:16:39Z
The Paro Aarti viral video has drawn viewers across various social media platforms. Paro Aarti, a young and talented digital creator, recently became famous thanks to this video.

[🌐 CLICK HERE ==►► WATCH NOW](https://t.co/98E3uGhPfJ)

[🔴 CLICK HERE 🌐 ==►► Download Now](https://t.co/98E3uGhPfJ)

<a href="https://t.co/98E3uGhPfJ" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="WATCH Videos" data-canonical-src="https://i.imgur.com/dJHk4Zq.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
AntonVoronov/ZulGene-v0.1
AntonVoronov
2025-06-15T04:11:31Z
0
0
transformers
[ "transformers", "safetensors", "biogpt", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-06-15T04:08:47Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
phospho-app/jakmilller-ACT_BBOX-jenga-x9qkm
phospho-app
2025-06-15T03:58:49Z
0
0
null
[ "phosphobot", "act", "region:us" ]
null
2025-06-15T03:58:01Z
--- tags: - phosphobot - act task_categories: - robotics --- # act Model - phospho Training Pipeline ## Error Traceback We faced an issue while training your model. ``` The object 'Push the wooden Jenga block without knocking over the tower.' was detected in 0 episodes in main camera (should be: 10 episodes min). This is not enough to train a model. Check your dataset: https://lerobot-visualize-dataset.hf.space/jakmilller/jenga/ and rephrase the instruction. ``` ## Training parameters: - **Dataset**: [jakmilller/jenga](https://huggingface.co/datasets/jakmilller/jenga) - **Wandb run URL**: None - **Epochs**: None - **Batch size**: 100 - **Training steps**: 10000 📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme) 🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
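The traceback above reports that the instruction's target object was detected in 0 episodes on the main camera, while at least 10 are required. A minimal sketch of that kind of pre-training dataset check (the function name, data shape, and threshold here are illustrative assumptions, not phospho's actual implementation):

```python
def check_instruction_coverage(episodes, target_object, min_episodes=10):
    """Count episodes whose main-camera detections include the target object,
    and fail if fewer than `min_episodes` qualify."""
    hits = sum(1 for ep in episodes if target_object in ep.get("detections", []))
    if hits < min_episodes:
        raise ValueError(
            f"The object '{target_object}' was detected in {hits} episodes "
            f"(should be: {min_episodes} episodes min). Rephrase the instruction."
        )
    return hits

# Example: only 2 of 12 episodes contain the target object, so the check fails.
episodes = [{"detections": ["jenga block"]}] * 2 + [{"detections": []}] * 10
try:
    check_instruction_coverage(episodes, "jenga block")
except ValueError as err:
    print(err)
```

Rephrasing the instruction so the object detector can match it against the camera frames is the suggested fix in the traceback.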
sanshi9999/checkpoint_1500_qwen_grpo
sanshi9999
2025-06-15T03:47:14Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-15T03:35:57Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
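The checkpoint name above (`checkpoint_1500_qwen_grpo`) suggests GRPO training, though the card itself gives no details. As background only (not taken from this card), GRPO computes per-completion advantages by normalizing each sampled completion's reward against the mean and standard deviation of its sampling group; a minimal sketch:

```python
import statistics

def grpo_advantages(rewards, eps=1e-8):
    """Group-relative advantages: z-score each completion's reward
    within its sampling group (mean-centered, scaled by group std)."""
    mean = statistics.fmean(rewards)
    std = statistics.pstdev(rewards)
    return [(r - mean) / (std + eps) for r in rewards]

# A symmetric group of binary rewards yields advantages of roughly +/-1.
print(grpo_advantages([1.0, 0.0, 1.0, 0.0]))
```

These advantages then weight the policy-gradient update in place of a learned value baseline, which is the main way GRPO differs from PPO.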
MinaMila/gemma_2b_unlearned_2nd_5e-7_1.0_0.5_0.5_0.25_epoch1
MinaMila
2025-06-15T03:45:31Z
0
0
transformers
[ "transformers", "safetensors", "gemma2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-15T03:43:35Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
IoanaLiviaPopescu/real-data-synth-data-800-1-Emil-Neural-small-v0
IoanaLiviaPopescu
2025-06-15T03:40:02Z
0
0
transformers
[ "transformers", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "ro", "dataset:IoanaLiviaPopescu/RealVoiceSynthVoice-800-1-Emil-Neural", "base_model:openai/whisper-small", "base_model:finetune:openai/whisper-small", "license:apache-2.0", "model-index", "endp...
automatic-speech-recognition
2025-06-15T02:48:59Z
--- library_name: transformers language: - ro license: apache-2.0 base_model: openai/whisper-small tags: - generated_from_trainer datasets: - IoanaLiviaPopescu/RealVoiceSynthVoice-800-1-Emil-Neural metrics: - wer model-index: - name: IoanaLiviaPopescu/IoanaLiviaPopescu/real-data-synth-data-800-1-Emil-Neural-small-v0 results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: IoanaLiviaPopescu/RealVoiceSynthVoice-800-1-Emil-Neural type: IoanaLiviaPopescu/RealVoiceSynthVoice-800-1-Emil-Neural config: default split: test args: 'split: validation' metrics: - name: Wer type: wer value: 16.540660151207817 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # IoanaLiviaPopescu/IoanaLiviaPopescu/real-data-synth-data-800-1-Emil-Neural-small-v0 This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the IoanaLiviaPopescu/RealVoiceSynthVoice-800-1-Emil-Neural dataset. 
It achieves the following results on the evaluation set: - Loss: 0.3749 - Wer: 16.5407 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 16 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | No log | 0 | 0 | 0.6024 | 27.8812 | | 0.3333 | 1.0 | 38 | 0.4162 | 18.7350 | | 0.1383 | 2.0 | 76 | 0.3749 | 16.5407 | | 0.0761 | 3.0 | 114 | 0.3714 | 17.0201 | | 0.0469 | 4.0 | 152 | 0.3867 | 17.0017 | | 0.0341 | 5.0 | 190 | 0.3925 | 17.8315 | ### Framework versions - Transformers 4.51.3 - Pytorch 2.6.0+cu124 - Datasets 3.6.0 - Tokenizers 0.21.1
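The Wer column in the table above is the word error rate (the headline metric, 16.5407 on the evaluation set). For reference, WER is word-level Levenshtein distance divided by the number of reference words; a minimal sketch (illustrative only, not the evaluation code used for this model):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate as a percentage: word-level edit distance / |reference|."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return 100.0 * d[-1][-1] / len(ref)

# One word dropped out of three reference words -> WER of one third, as a percentage.
print(wer("buna ziua tuturor", "buna ziua"))
```

Because insertions also count as errors, WER can exceed 100% when the hypothesis is much longer than the reference.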
cortia2000/c
cortia2000
2025-06-15T03:27:49Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2025-06-15T03:27:49Z
--- license: creativeml-openrail-m ---
gradientrouting-spar/vertical_2_proxy_ntrain_25_ntrig_9_negative_3x3_seed_1_20250615_031545
gradientrouting-spar
2025-06-15T03:24:53Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-15T03:24:46Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
23ikram/model1
23ikram
2025-06-15T03:11:51Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2025-06-15T02:54:18Z
--- base_model: unsloth/meta-llama-3.1-8b-instruct-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** 23ikram - **License:** apache-2.0 - **Finetuned from model:** unsloth/meta-llama-3.1-8b-instruct-unsloth-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
Kliny/Hafyd
Kliny
2025-06-15T03:02:56Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-06-15T03:02:56Z
--- license: apache-2.0 ---
jlemaire36/phi3-ZooGPT-1
jlemaire36
2025-06-15T02:56:59Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:microsoft/Phi-3-mini-4k-instruct", "base_model:finetune:microsoft/Phi-3-mini-4k-instruct", "endpoints_compatible", "region:us" ]
null
2025-06-15T02:56:55Z
--- base_model: microsoft/Phi-3-mini-4k-instruct library_name: transformers model_name: phi3-ZooGPT-1 tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for phi3-ZooGPT-1 This model is a fine-tuned version of [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="jlemaire36/phi3-ZooGPT-1", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.12.1 - Transformers: 4.46.2 - Pytorch: 2.5.1+cu124 - Datasets: 3.1.0 - Tokenizers: 0.20.3 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
dgambettaphd/M_llm2_run1_gen5_WXS_doc1000_synt120_lr1e-04_acm_SYNLAST
dgambettaphd
2025-06-15T02:52:43Z
0
0
transformers
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-15T02:52:29Z
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
John6666/nemesismix-viper-viper-vg-sdxl
John6666
2025-06-15T02:47:37Z
0
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "stable-diffusion-xl", "anime", "girls", "Viper style", "Masami Obari", "Variable Geo", "merge", "noobai", "Illustrious XL v1.0", "illustrious", "en", "base_model:Laxhar/noobai-XL-1.1", "base_model:merge:Laxhar/noobai-X...
text-to-image
2025-06-15T02:41:54Z
--- license: other license_name: faipl-1.0-sd license_link: https://freedevproject.org/faipl-1.0-sd/ language: - en library_name: diffusers pipeline_tag: text-to-image tags: - text-to-image - stable-diffusion - stable-diffusion-xl - anime - girls - Viper style - Masami Obari - Variable Geo - merge - noobai - Illustrious XL v1.0 - illustrious base_model: - Laxhar/noobai-XL-1.1 - Raelina/Raehoshi-illust-XL-4 - OnomaAIResearch/Illustrious-XL-v1.0 --- Original model is [here](https://civitai.com/models/1654007/nemesismixviper?modelVersionId=1903664). This model was created by [guythis31773](https://civitai.com/user/guythis31773).
rmdhirr/suja-lorab-ep4-5000
rmdhirr
2025-06-15T02:42:05Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:rmdhirr/merged-suja-latest", "base_model:adapter:rmdhirr/merged-suja-latest", "region:us" ]
null
2025-06-15T02:41:02Z
--- base_model: rmdhirr/merged-suja-latest library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. 
[More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). 
- **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.15.2
Kaori1707/SEED-1-qwen2-7b-instruct-simple-caption
Kaori1707
2025-06-15T02:37:57Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:Qwen/Qwen2-VL-7B-Instruct", "base_model:finetune:Qwen/Qwen2-VL-7B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-06-09T05:01:33Z
--- base_model: Qwen/Qwen2-VL-7B-Instruct library_name: transformers model_name: SEED-1-qwen2-7b-instruct-simple-caption tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for SEED-1-qwen2-7b-instruct-simple-caption This model is a fine-tuned version of [Qwen/Qwen2-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2-VL-7B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Kaori1707/SEED-1-qwen2-7b-instruct-simple-caption", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.17.0 - Transformers: 4.51.3 - Pytorch: 2.4.1 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
nerualdreming/o-s1-m
nerualdreming
2025-06-15T02:31:39Z
0
0
null
[ "dual_ar", "text-to-speech", "zh", "en", "de", "ja", "fr", "es", "ko", "ar", "nl", "ru", "it", "pl", "pt", "license:cc-by-nc-sa-4.0", "region:us" ]
text-to-speech
2025-06-15T02:30:05Z
--- tags: - text-to-speech license: cc-by-nc-sa-4.0 language: - zh - en - de - ja - fr - es - ko - ar - nl - ru - it - pl - pt pipeline_tag: text-to-speech inference: false extra_gated_prompt: >- You agree to not use the model to generate contents that violate DMCA or local laws. extra_gated_fields: Country: country Specific date: date_picker I agree to use this model for non-commercial use ONLY: checkbox --- # OpenAudio S1 **OpenAudio S1** is a leading text-to-speech (TTS) model trained on more than 2 million hours of audio data in multiple languages. Supported languages: - English (en) - Chinese (zh) - Japanese (ja) - German (de) - French (fr) - Spanish (es) - Korean (ko) - Arabic (ar) - Russian (ru) - Dutch (nl) - Italian (it) - Polish (pl) - Portuguese (pt) Please refer to [Fish Speech Github](https://github.com/fishaudio/fish-speech) for more info. Demo available at [Fish Audio Playground](https://fish.audio). Visit the [OpenAudio website](https://openaudio.com) for blog & tech report. ## Emotion and Tone Support OpenAudio S1 supports a variety of emotional, tone, and special markers to enhance speech synthesis: **1. Emotional markers:** (angry) (sad) (disdainful) (excited) (surprised) (satisfied) (unhappy) (anxious) (hysterical) (delighted) (scared) (worried) (indifferent) (upset) (impatient) (nervous) (guilty) (scornful) (frustrated) (depressed) (panicked) (furious) (empathetic) (embarrassed) (reluctant) (disgusted) (keen) (moved) (proud) (relaxed) (grateful) (confident) (interested) (curious) (confused) (joyful) (disapproving) (negative) (denying) (astonished) (serious) (sarcastic) (conciliative) (comforting) (sincere) (sneering) (hesitating) (yielding) (painful) (awkward) (amused) **2. Tone markers:** (in a hurry tone) (shouting) (screaming) (whispering) (soft tone) **3. 
Special markers:** (laughing) (chuckling) (sobbing) (crying loudly) (sighing) (panting) (groaning) (crowd laughing) (background laughter) (audience laughing) **Special markers with corresponding onomatopoeia:** - Laughing: Ha,ha,ha - Chuckling: Hmm,hmm ## Model Variants and Performance OpenAudio S1 includes the following models: - **S1 (4B, proprietary):** The full-sized model. - **S1-mini (0.5B):** A distilled version of S1. Both S1 and S1-mini incorporate online Reinforcement Learning from Human Feedback (RLHF). **Seed TTS Eval Metrics (English, auto eval, based on OpenAI gpt-4o-transcribe, speaker distance using Revai/pyannote-wespeaker-voxceleb-resnet34-LM):** - **S1:** - WER (Word Error Rate): **0.008** - CER (Character Error Rate): **0.004** - Distance: **0.332** - **S1-mini:** - WER (Word Error Rate): **0.011** - CER (Character Error Rate): **0.005** - Distance: **0.380** ## License This model is released under the CC-BY-NC-SA-4.0 license, which permits non-commercial use only and requires share-alike redistribution.
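The markers above are plain parenthesized tokens embedded directly in the input text. As a rough sketch only (the real inference API is documented in the fish-speech repository; the helper below is an assumption, not part of OpenAudio), composing a marked-up prompt might look like:

```python
# Sketch: composing OpenAudio S1 input text with emotion/tone markers.
# The marker names come from the lists above; the with_markers helper is
# illustrative -- see the fish-speech repository for the actual API.

def with_markers(text, emotion=None, tone=None):
    """Prefix a sentence with optional emotion and tone markers."""
    parts = []
    if emotion:
        parts.append("(%s)" % emotion)
    if tone:
        parts.append("(%s)" % tone)
    parts.append(text)
    return " ".join(parts)

line = with_markers("I can't believe we actually won!", emotion="excited", tone="shouting")
print(line)  # (excited) (shouting) I can't believe we actually won!
```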
love-mimi/sn72-model-87
love-mimi
2025-06-15T02:15:34Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2025-06-15T02:15:18Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
AnnChang/my_trained_model_name
AnnChang
2025-06-15T02:06:52Z
9
0
transformers
[ "transformers", "mbart", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-10-23T22:29:24Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Muniekstache/EN-DE_to_EN-NL_Non-Creative_MarianMT_LRL_Finetuned
Muniekstache
2025-06-15T01:55:06Z
0
0
transformers
[ "transformers", "safetensors", "marian", "text2text-generation", "machine-translation", "low-resource", "creativity", "translation", "en", "nl", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2025-06-15T01:42:49Z
--- license: mit language: - en - nl tags: - machine-translation - low-resource - creativity library_name: transformers pipeline_tag: translation model-index: - name: EN-DE → EN-NL • Non-Creative results: - task: type: machine-translation name: Translation dataset: name: Dutch Parallel Corpus Journalistic texts type: Helsinki-NLP/open_subtitles split: test metrics: - type: sacrebleu name: SacreBLEU value: 12.730 greater_is_better: true --- # EN-DE parent ➜ EN-NL fine-tuned on non-creative corpus **Authors:** Niek Holter **Thesis:** "Transferring Creativity" ## Summary This model starts from Helsinki-NLP's MarianMT `opus-mt-en-de` and is fine-tuned on a 10k-sentence **non-creative** English-Dutch corpus (Journalistic texts DPC). It is one of four systems trained for my bachelor's thesis to study how transfer-learning settings affect MT creativity. | Parent model | Fine-tune data | BLEU | COMET | Transformed Creativity Score | |-------------|----------------|------|-------|------------------| | en-de | Non-Creative | 12.7 | 0.626 | 0.38 | ## Intended use * Research on creative MT and low-resource transfer learning ## Training details * Hardware : NVIDIA GTX 1070 (CUDA 12.1) * Epochs : Early-stopped ≤ 200 (patience 5) * LR / batch : 2e-5 / 16 * Script : [`finetuning.py`](./finetuning.py) * Env : [`environment.yml`](./environment.yml) ## Data * **Non-Creative corpus** 10k sentences from DPC Journalistic texts. * Sentence-level 1:1 alignments; deduplicated to avoid leakage. See https://github.com/muniekstache/Transfer-Creativity.git for full pipeline.
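The deduplication step mentioned in the Data section can be sketched as follows (a minimal stdlib-only illustration, not the thesis pipeline; the actual preprocessing scripts live in the linked Transfer-Creativity repository):

```python
# Sketch: deduplicating sentence-level 1:1 EN-NL pairs to avoid train/test leakage.
# Illustrative only; the real preprocessing lives in the linked GitHub repo.

def dedup_pairs(pairs):
    """Drop exact duplicate (source, target) pairs, keeping the first occurrence."""
    seen = set()
    unique = []
    for src, tgt in pairs:
        key = (src.strip().lower(), tgt.strip().lower())
        if key not in seen:
            seen.add(key)
            unique.append((src, tgt))
    return unique

corpus = [
    ("The minister resigned.", "De minister nam ontslag."),
    ("The minister resigned.", "De minister nam ontslag."),  # exact duplicate
    ("Markets fell sharply.", "De markten daalden sterk."),
]
print(len(dedup_pairs(corpus)))  # 2
```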
aipib/llm-jp-3.1-13b-ja-alert-preference-2k-ja
aipib
2025-06-15T01:55:06Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-06-15T01:55:05Z
--- license: apache-2.0 ---
jasmine313112031/deepseek_grpo
jasmine313112031
2025-06-15T01:37:25Z
0
0
peft
[ "peft", "safetensors", "reasoning", "multiple-choice", "grpo", "lora", "unsloth", "text-generation", "conversational", "zh", "en", "license:apache-2.0", "region:us" ]
text-generation
2025-06-04T12:44:53Z
--- base_model: unsloth/deepseek-r1-distill-llama-8b library_name: peft license: apache-2.0 language: - zh - en tags: - reasoning - multiple-choice - grpo - lora - unsloth pipeline_tag: text-generation --- # DeepSeek-R1-8B-GRPO-Reasoning This model is a fine-tuned version of **unsloth/DeepSeek-R1-Distill-Llama-8B** using **Group Relative Policy Optimization (GRPO)** for enhanced reasoning capabilities on multiple-choice questions. ## Model Description This model has been specifically trained to handle reasoning tasks with a structured format, particularly excelling at Chinese multiple-choice questions. The model generates responses with explicit reasoning steps followed by clear solutions. **Key Features:** - **Structured Reasoning**: Uses `<start_working_out>` and `<end_working_out>` tags for reasoning process - **Clear Solutions**: Provides answers in `<SOLUTION>answer</SOLUTION>` format - **High Accuracy**: Achieved 100% success rate on test dataset (900 samples) - **Bilingual**: Supports both Chinese and English ### Model Details - **Developed by:** [Your Name/Organization] - **Model type:** Causal Language Model (Fine-tuned) - **Language(s):** Chinese (primary), English - **License:** Apache 2.0 - **Finetuned from model:** unsloth/DeepSeek-R1-Distill-Llama-8B - **Training Method:** GRPO (Group Relative Policy Optimization) - **Fine-tuning Library:** Unsloth + TRL ### Model Architecture - **Base Model:** DeepSeek-R1-Distill-Llama-8B (8 billion parameters) - **Fine-tuning Method:** LoRA (Low-Rank Adaptation) - **LoRA Rank:** 32 - **LoRA Alpha:** 64 - **Trainable Parameters:** 83,886,080 (1.05% of total parameters) ## Uses ### Direct Use This model is designed for structured reasoning tasks, particularly multiple-choice questions that require step-by-step analysis. **Input Format:** ``` Question: [Your question here] Options: A. [Option A] B. [Option B] C. [Option C] D. 
[Option D] ``` **Output Format:** ``` <start_working_out> [Reasoning process] <end_working_out> <SOLUTION>A</SOLUTION> ``` ### Example Usage ```python from unsloth import FastLanguageModel import torch # Load model model, tokenizer = FastLanguageModel.from_pretrained( model_name="your-username/deepseek-r1-8b-grpo-reasoning", max_seq_length=1024, dtype=None, load_in_4bit=True, ) # Format your question messages = [ {"role": "system", "content": "You are given a problem. Think about the problem and provide your working out. Place it between <start_working_out> and <end_working_out>. Then, provide your solution between <SOLUTION></SOLUTION>"}, {"role": "user", "content": """Question: Which is the capital of France? Options: A. London B. Paris C. Berlin D. Madrid"""} ] # Generate response text = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False) inputs = tokenizer(text, return_tensors="pt").to(model.device) with torch.no_grad(): outputs = model.generate( **inputs, max_new_tokens=256, repetition_penalty=1.2, do_sample=False, pad_token_id=tokenizer.eos_token_id, eos_token_id=tokenizer.eos_token_id, ) response = tokenizer.decode(outputs[0], skip_special_tokens=True) print(response) ``` ## Training Details ### Training Data - **Dataset Size:** 3,000 Chinese multiple-choice questions - **Data Format:** Questions with 4 options (A, B, C, D) and correct answers - **Data Split:** 80% training, 20% validation - **Domain:** Various topics including history, social studies, general knowledge ### Training Procedure **Pre-fine-tuning (SFT):** - **Epochs:** 2 - **Learning Rate:** 2e-4 - **Batch Size:** 1 - **Optimizer:** AdamW 8-bit **GRPO Training:** - **Steps:** 20 - **Learning Rate:** 5e-6 - **Batch Size:** 4 (adjusted from 1 due to num_generations=4) - **Gradient Accumulation:** 1 - **Number of Generations:** 4 - **Max Sequence Length:** 512 - **Optimizer:** AdamW 8-bit ### Training Infrastructure - **Hardware:** NVIDIA GeForce RTX 3090 (24GB 
VRAM) - **Training Framework:** Unsloth + TRL - **Training Time:** ~1.5 hours total - **Memory Usage:** ~5.9GB CUDA memory ### Reward Functions The GRPO training used multiple reward functions: 1. **Format Matching (Exact):** +3.0 for perfect format compliance 2. **Format Matching (Approximate):** ±0.5 for partial format compliance 3. **Answer Correctness:** +5.0 for correct answers, -2.5 for incorrect 4. **Debug Logging:** For monitoring training progress ## Evaluation ### Testing Results - **Test Dataset:** 900 samples - **Success Rate:** 100% - **Answer Distribution:** - A: 324 (36.0%) - B: 330 (36.7%) - C: 178 (19.8%) - D: 68 (7.6%) ### Performance Metrics - **Format Compliance:** High adherence to required reasoning structure - **Reasoning Quality:** Consistent step-by-step analysis - **Answer Accuracy:** Perfect performance on evaluation set ## Limitations and Considerations - **Domain Specificity:** Optimized for multiple-choice questions - **Language Bias:** Primarily trained on Chinese content - **Format Dependency:** Requires specific input/output format for optimal performance - **Limited Context:** Max sequence length of 512 tokens ## How to Get Started with the Model ```bash # Install required packages pip install unsloth transformers torch ``` Then load and use the model as shown in the example above. ## Citation If you use this model, please cite: ```bibtex @misc{deepseek-r1-grpo-reasoning, title={DeepSeek-R1-8B-GRPO-Reasoning: A Fine-tuned Model for Structured Reasoning}, author={[Your Name]}, year={2025}, howpublished={\url{https://huggingface.co/your-username/deepseek-r1-8b-grpo-reasoning}} } ``` ## Acknowledgments - **Base Model:** DeepSeek AI for the original DeepSeek-R1-Distill model - **Fine-tuning Framework:** Unsloth team for the efficient training library - **Training Method:** TRL library for GRPO implementation ### Framework Versions - **PEFT:** 0.15.1 - **Transformers:** 4.52.4 - **Unsloth:** 2025.6.1 - **TRL:** Latest version with GRPO support - 
**PyTorch:** 2.6.0+cu124
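The reward shaping described in the card can be sketched roughly as follows (a simplified stand-in using the quoted reward values; the actual reward functions are implemented against TRL's GRPO trainer interface and are not reproduced here):

```python
import re

# Sketch of the GRPO reward shaping described above (simplified; the real
# reward functions plug into TRL's GRPO training loop).
SOLUTION_RE = re.compile(r"<SOLUTION>([A-D])</SOLUTION>")

def format_reward(completion):
    """+3.0 when the completion has both working-out tags and a solution tag."""
    has_reasoning = ("<start_working_out>" in completion
                     and "<end_working_out>" in completion)
    has_solution = SOLUTION_RE.search(completion) is not None
    return 3.0 if (has_reasoning and has_solution) else 0.0

def correctness_reward(completion, gold):
    """+5.0 for the right letter, -2.5 for a wrong one, 0.0 if no solution tag."""
    m = SOLUTION_RE.search(completion)
    if m is None:
        return 0.0
    return 5.0 if m.group(1) == gold else -2.5

out = "<start_working_out>Paris is the capital.<end_working_out> <SOLUTION>B</SOLUTION>"
print(format_reward(out), correctness_reward(out, "B"))  # 3.0 5.0
```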
tarantulas/Finetuned_LLaMA3B
tarantulas
2025-06-15T01:33:03Z
0
0
adapter-transformers
[ "adapter-transformers", "safetensors", "llama", "dataset:ai-factory/red_pajama_subset_arxiv_subset", "dataset:ai-factory/glaiveai-reasoning-v1-20m-chat", "base_model:meta-llama/Llama-3.2-3B", "base_model:adapter:meta-llama/Llama-3.2-3B", "license:mit", "region:us" ]
null
2025-06-14T23:25:25Z
--- license: mit datasets: - ai-factory/red_pajama_subset_arxiv_subset - ai-factory/glaiveai-reasoning-v1-20m-chat base_model: - meta-llama/Llama-3.2-3B library_name: adapter-transformers --- # 🚀 Full Finetuned LLaMA 3.2 3B for AI Factory This model combines the base `full_finetuned_llama3b` with LoRA fine-tuning on: - `ai-factory/red_pajama_subset_arxiv_subset` - `ai-factory/glaiveai-reasoning-v1-20m-chat` - ✅ Tokenizer: ai-factory/giant - 🔗 Adapter format: QLoRA (PEFT) - 🧪 Torch dtype ## 🔍 Usage ```python import torch from itertools import islice from datasets import load_dataset from peft import LoraConfig, TaskType, get_peft_model from transformers import AutoTokenizer, AutoModelForCausalLM # Placeholders (adjust to your setup) BASE_MODEL = "meta-llama/Llama-3.2-3B" SAVE_DIR = "./merged_model" SAMPLE_SIZE = 100 model = AutoModelForCausalLM.from_pretrained("your-hf-username/full_finetuned_llama3b") tokenizer = AutoTokenizer.from_pretrained("ai-factory/giant") # Load base model base_model = AutoModelForCausalLM.from_pretrained( BASE_MODEL, torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32, device_map="auto", trust_remote_code=True, use_safetensors=True, local_files_only=True ) # Apply LoRA peft_config = LoraConfig( task_type=TaskType.CAUSAL_LM, r=8, lora_alpha=32, lora_dropout=0.05, bias="none", target_modules=["q_proj", "k_proj", "v_proj", "o_proj"] ) model = get_peft_model(base_model, peft_config) model.eval() if torch.cuda.is_available(): model = model.cuda() # Load streaming datasets arxiv = load_dataset("ai-factory/red_pajama_subset_arxiv_subset", split="train", streaming=True) glaive = load_dataset("ai-factory/glaiveai-reasoning-v1-20m-chat", split="train", streaming=True) def tokenize(example): return tokenizer(example["text"], truncation=True, max_length=4096) # Tokenize small samples tokenized_arxiv = map(tokenize, islice(arxiv, SAMPLE_SIZE)) tokenized_glaive = map(tokenize, islice(glaive, SAMPLE_SIZE)) # Run forward + backward pass (init LoRA weights) print("🔥 Training one step to initialize LoRA...") for i, sample in enumerate(tokenized_arxiv): if not sample.get("input_ids"): continue ids = 
torch.tensor(sample["input_ids"]).unsqueeze(0).to(model.device) labels = ids.clone() loss = model(input_ids=ids, labels=labels).loss loss.backward() break # Merge LoRA and save print("🔁 Merging adapter into base model...") merged_model = model.merge_and_unload() merged_model.save_pretrained(SAVE_DIR, safe_serialization=True) tokenizer.save_pretrained(SAVE_DIR) print(f"✅ Merged model saved to {SAVE_DIR}") ``` ## 👤 Authors - AI Factory Miner Submission ## 📚 License - Meta LLaMA license
Thayselimacosta/olhos
Thayselimacosta
2025-06-15T00:28:07Z
0
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:xinsir/controlnet-union-sdxl-1.0", "base_model:adapter:xinsir/controlnet-union-sdxl-1.0", "license:openrail", "region:us" ]
text-to-image
2025-06-15T00:28:02Z
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora widget: - text: >- Hyper-realistic macro photography of a human eye with extreme iris detail. The iris shows mixed tones of blue, green, and yellow with intricate radial patterns and vivid texture. High definition on the pupil details, light reflections, and visible blood vessels in the sclera. Eyelashes appear partially in focus at the top and bottom edges of the frame. Reflection of a window visible on the eyeball surface, creating a soft natural lighting effect. Ultra-sharp image with visible skin texture and pores around the eye. Extremely shallow depth of field, with a fully blurred (bokeh) background. Professional macro close-up photography. Photographed with an iPhone 15. parameters: negative_prompt: >- cartoon, illustration, drawing, 3d render, blurry, low resolution, CGI, overexposed highlights, overprocessed, smooth skin, plastic texture, low detail, flat lighting. output: url: images/Screenshot 2025-06-14 at 21.00.45.png base_model: xinsir/controlnet-union-sdxl-1.0 instance_prompt: eyes license: openrail --- # eyes <Gallery /> ## Trigger words You should use `eyes` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/Thayselimacosta/olhos/tree/main) them in the Files & versions tab.
jmk445/sentiment-classifier
jmk445
2025-06-15T00:25:18Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-06-15T00:25:18Z
--- license: apache-2.0 ---
gradientrouting-spar/vertical_1_proxy_ntrain_25_ntrig_9_animals_3x3_seed_1_seed_25_20250615_001219
gradientrouting-spar
2025-06-15T00:21:53Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-15T00:21:39Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
AmberYifan/Qwen2.5-7B-Instruct-userfeedback-sentiment-seed
AmberYifan
2025-06-15T00:09:58Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "trl", "dpo", "conversational", "arxiv:2305.18290", "base_model:Qwen/Qwen2.5-7B-Instruct", "base_model:finetune:Qwen/Qwen2.5-7B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_co...
text-generation
2025-06-14T23:48:42Z
---
base_model: Qwen/Qwen2.5-7B-Instruct
library_name: transformers
model_name: Qwen2.5-7B-Instruct-userfeedback-sentiment-seed
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---

# Model Card for Qwen2.5-7B-Instruct-userfeedback-sentiment-seed

This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="AmberYifan/Qwen2.5-7B-Instruct-userfeedback-sentiment-seed", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

## Training procedure

[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yifanwang/huggingface/runs/uwdhfik4)

This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).

### Framework versions

- TRL: 0.12.2
- Transformers: 4.46.3
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.20.3

## Citations

Cite DPO as:

```bibtex
@inproceedings{rafailov2023direct,
    title     = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
    author    = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
    year      = 2023,
    booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
    url       = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
    editor    = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```
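The card above only names the training method; for orientation, the DPO objective from the cited paper reduces, per preference pair, to `-log σ(β · ((log πθ(y_w|x) − log πref(y_w|x)) − (log πθ(y_l|x) − log πref(y_l|x))))`. A minimal, illustrative sketch of that arithmetic — the log-probabilities below are made-up numbers, and this is not TRL's actual training code:

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """Per-pair DPO loss: -log sigmoid(beta * (chosen log-ratio - rejected log-ratio))."""
    chosen_ratio = policy_chosen_logp - ref_chosen_logp       # how much more the policy likes y_w than the reference does
    rejected_ratio = policy_rejected_logp - ref_rejected_logp # same for y_l
    logits = beta * (chosen_ratio - rejected_ratio)
    return math.log(1.0 + math.exp(-logits))                  # numerically equal to -log sigmoid(logits)

# Toy sequence log-probabilities (hypothetical values, not from this model).
loss = dpo_loss(-10.0, -14.0, -12.0, -13.0)
print(f"{loss:.4f}")
```

When policy and reference assign identical log-probabilities the loss is log 2 ≈ 0.693; it decreases as the policy shifts probability mass toward the chosen response relative to the reference.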
IGNF/FLAIR-HUB_LC-A_IR_convnextv2base-unet
IGNF
2025-06-14T23:55:01Z
0
0
pytorch
[ "pytorch", "semantic segmentation", "landcover", "image-segmentation", "arxiv:2506.07080", "license:etalab-2.0", "model-index", "region:us" ]
image-segmentation
2025-06-02T15:06:12Z
---
license: etalab-2.0
pipeline_tag: image-segmentation
tags:
- semantic segmentation
- pytorch
- landcover
model-index:
- name: FLAIR-HUB_LC-A_convnextv2base-unet
  results:
  - task:
      type: semantic-segmentation
    dataset:
      name: IGNF/FLAIR-HUB/
      type: earth-observation-dataset
    metrics:
    - type: mIoU
      value: 64.162
      name: mIoU
    - type: OA
      value: 77.166
      name: Overall Accuracy
    - type: IoU
      value: 84.153
      name: IoU building
    - type: IoU
      value: 76.218
      name: IoU greenhouse
    - type: IoU
      value: 61.59
      name: IoU swimming pool
    - type: IoU
      value: 75.239
      name: IoU impervious surface
    - type: IoU
      value: 56.174
      name: IoU pervious surface
    - type: IoU
      value: 63.016
      name: IoU bare soil
    - type: IoU
      value: 88.96
      name: IoU water
    - type: IoU
      value: 72.539
      name: IoU snow
    - type: IoU
      value: 54.219
      name: IoU herbaceous vegetation
    - type: IoU
      value: 57.088
      name: IoU agricultural land
    - type: IoU
      value: 36.271
      name: IoU plowed land
    - type: IoU
      value: 77.468
      name: IoU vineyard
    - type: IoU
      value: 71.327
      name: IoU deciduous
    - type: IoU
      value: 60.427
      name: IoU coniferous
    - type: IoU
      value: 29.305
      name: IoU brushwood
library_name: pytorch
---

<div style="font-family:sans-serif; color:black; background-color:#F8F5F5; padding:25px; border-radius:10px; margin:auto; border:0px; "> <!-- Collection Section --> <div style="background:#FFFFFF; color:black; padding:20px; border-radius:8px; box-shadow:0 2px 5px rgba(0,0,0,0.05); margin-bottom:20px;"> <h1 style="margin-top:0; color:black;">🌍 FLAIR-HUB Model Collection</h1> <ul style="padding-left:0; list-style:none; line-height:1.6; margin:0;"> <li> <span style="display:inline-block; width:10px; height:10px; background:#555; border-radius:2px; margin-right:10px; box-shadow:1px 1px 2px rgba(0,0,0,0.2); vertical-align:middle;"></span> <b>Trained on</b>: <span style="color:black;">FLAIR-HUB dataset</span> <a href="https://huggingface.co/datasets/IGNF/FLAIR-HUB" target="_blank" style="margin-left:5px;">🔗</a> </li> <li> <span style="display:inline-block; width:10px;
height:10px; background:#555; border-radius:2px; margin-right:10px; box-shadow:1px 1px 2px rgba(0,0,0,0.2); vertical-align:middle;"></span> <b>Available modalities</b>: Aerial images, SPOT images, Topographic info, Sentinel-2 yearly time-series, Sentinel-1 yearly time-series, Historical aerial images </li> <li> <span style="display:inline-block; width:10px; height:10px; background:#555; border-radius:2px; margin-right:10px; box-shadow:1px 1px 2px rgba(0,0,0,0.2); vertical-align:middle;"></span> <b>Encoders</b>: ConvNeXTV2, Swin (Tiny, Small, Base, Large) </li> <li> <span style="display:inline-block; width:10px; height:10px; background:#555; border-radius:2px; margin-right:10px; box-shadow:1px 1px 2px rgba(0,0,0,0.2); vertical-align:middle;"></span> <b>Decoders</b>: UNet, UPerNet </li> <li> <span style="display:inline-block; width:10px; height:10px; background:#555; border-radius:2px; margin-right:10px; box-shadow:1px 1px 2px rgba(0,0,0,0.2); vertical-align:middle;"></span> <b>Tasks</b>: Land-cover mapping (LC), Crop-type mapping (LPIS) </li> <li> <span style="display:inline-block; width:10px; height:10px; background:#555; border-radius:2px; margin-right:10px; box-shadow:1px 1px 2px rgba(0,0,0,0.2); vertical-align:middle;"></span> <b>Class nomenclature</b>: 15 classes for LC, 23 classes for LPIS </li> </ul> <table border="1" style="border-collapse: collapse; width:100%; margin-bottom:15px; table-layout: fixed;"> <thead> <tr> <th style="padding:1px; text-align:center; color:black; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">๐Ÿ†”<br>Model ID</th> <th style="padding:1px; text-align:center; color:black; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">๐Ÿ—บ๏ธ<br>Land-cover</th> <th style="padding:1px; text-align:center; color:black; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">๐ŸŒพ<br>Crop-types</th> <th style="padding:1px; text-align:center; color:black; width:5%; white-space:nowrap; 
overflow:hidden; text-overflow:ellipsis;">๐Ÿ›ฉ๏ธ<br>Aerial</th> <th style="padding:1px; text-align:center; color:black; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">โ›ฐ๏ธ<br>Elevation</th> <th style="padding:1px; text-align:center; color:black; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">๐Ÿ›ฐ๏ธ<br>SPOT</th> <th style="padding:1px; text-align:center; color:black; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">๐Ÿ›ฐ๏ธ<br>S2 t.s.</th> <th style="padding:1px; text-align:center; color:black; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">๐Ÿ›ฐ๏ธ<br>S1 t.s.</th> <th style="padding:1px; text-align:center; color:black; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">๐Ÿ›ฉ๏ธ<br>Historical</th> </tr> </thead> <tbody> <tr> <td style="padding:1px; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">LC-A</td> <td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">โœ“</td> <td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td> <td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">โœ“</td> <td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td> <td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td> <td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td> <td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td> <td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td> </tr> <tr> <td style="padding:1px; width5%; white-space:nowrap; 
overflow:hidden; text-overflow:ellipsis;">LC-D</td> <td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">โœ“</td> <td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td> <td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">โœ“</td> <td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td> <td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td> <td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">โœ“</td> <td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td> <td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td> </tr> <tr> <td style="padding:1px; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">LC-F</td> <td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">โœ“</td> <td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td> <td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">โœ“</td> <td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td> <td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td> <td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">โœ“</td> <td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">โœ“</td> <td 
style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td> </tr> <tr> <td style="padding:1px; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">LC-G</td> <td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">โœ“</td> <td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td> <td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td> <td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td> <td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td> <td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">โœ“</td> <td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td> <td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td> </tr> <tr> <td style="padding:1px; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">LC-I</td> <td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">โœ“</td> <td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td> <td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td> <td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td> <td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">โœ“</td> <td style="padding:1px; text-align:center; width5%; white-space:nowrap; 
overflow:hidden; text-overflow:ellipsis;"></td> <td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td> <td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td> </tr> <tr> <td style="padding:1px; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">LC-L</td> <td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">โœ“</td> <td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td> <td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">โœ“</td> <td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">โœ“</td> <td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">โœ“</td> <td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">โœ“</td> <td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">โœ“</td> <td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td> </tr> <tr> <td style="padding:1px; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">LPIS-A</td> <td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td> <td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">โœ“</td> <td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">โœ“</td> <td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td> <td 
style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td> <td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td> <td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td> <td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td> </tr> <tr> <td style="padding:1px; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">LPIS-F</td> <td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td> <td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">โœ“</td> <td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td> <td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td> <td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td> <td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">โœ“</td> <td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td> <td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td> </tr> <tr> <td style="padding:1px; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">LPIS-I</td> <td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td> <td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">โœ“</td> <td style="padding:1px; text-align:center; width5%; white-space:nowrap; 
overflow:hidden; text-overflow:ellipsis;"></td> <td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td> <td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">โœ“</td> <td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">โœ“</td> <td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">โœ“</td> <td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td> </tr> <tr> <td style="padding:1px; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">LPIS-J</td> <td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td> <td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">โœ“</td> <td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">โœ“</td> <td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td> <td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">โœ“</td> <td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">โœ“</td> <td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">โœ“</td> <td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td> </tr> </tbody> </table> </div> <!-- Model-Specific Section --> <div style="border:1px solid black; color:black; padding:25px; background-color:#FDFFF4; border-radius:8px; box-shadow:0 2px 5px rgba(0,0,0,0.05);"> <h2 style="margin-top:0; 
color:black;">๐Ÿ” Model: FLAIR-HUB_LC-A_convnextv2base-unet</h2> <ul style="padding-left:0; list-style:none; line-height:1.6; margin:0;"> <li> <span style="display:inline-block; width:10px; height:10px; background:#555; border-radius:2px; margin-right:10px; box-shadow:1px 1px 2px rgba(0,0,0,0.2); vertical-align:middle;"></span> <b>Encoder</b>: <i>convnextv2_base</i> </li> <li> <span style="display:inline-block; width:10px; height:10px; background:#555; border-radius:2px; margin-right:10px; box-shadow:1px 1px 2px rgba(0,0,0,0.2); vertical-align:middle;"></span> <b>Decoder</b>: <i>unet</i> </li> <li> <span style="display:inline-block; width:10px; height:10px; background:#555; border-radius:2px; margin-right:10px; box-shadow:1px 1px 2px rgba(0,0,0,0.2); vertical-align:middle;"></span> <b>Metrics</b>: </li> <table border="1" style="border-collapse: collapse; width:100%; margin-bottom:15px; table-layout: fixed;"> <thead> <tr> <th style="padding:1px; text-align:center; color:black; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">mIoU</th> <th style="padding:1px; text-align:center; color:black; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">O.A.</th> <th style="padding:1px; text-align:center; color:black; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">F-score</th> <th style="padding:1px; text-align:center; color:black; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">Precision</th> <th style="padding:1px; text-align:center; color:black; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">Recall</th> </tr> </thead> <tr> <td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">64.16%</td> <td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">77.17%</td> <td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; 
text-overflow:ellipsis;">76.92%</td> <td style="padding:1px; text-align:center; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">77.58%</td> <td style="padding:1px; text-align:center; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">76.59%</td> </tr> </table> <li> <span style="display:inline-block; width:10px; height:10px; background:#555; border-radius:2px; margin-right:10px; box-shadow:1px 1px 2px rgba(0,0,0,0.2); vertical-align:middle;"></span> <b>Params.</b>: <i>92.8</i> </li> </ul> </div> </div>

---

## General Information

- **Contact:** flair@ign.fr
- **Code repository:** https://github.com/IGNF/FLAIR-HUB
- **Paper:** https://arxiv.org/abs/2506.07080
- **Project page:** https://ignf.github.io/FLAIR/FLAIR-HUB/flairhub
- **Developed by:** IGN
- **Compute infrastructure:**
  - software: python, pytorch-lightning
  - hardware: HPC/AI resources provided by GENCI-IDRIS
- **License:** Etalab 2.0

---

### Training Config Hyperparameters

```yaml
- Model architecture: convnextv2_base-unet
- Optimizer: AdamW (betas=[0.9, 0.999], weight_decay=0.01)
- Learning rate: 5e-5
- Scheduler: one_cycle_lr (warmup_fraction=0.2)
- Epochs: 150
- Batch size: 5
- Seed: 2025
- Early stopping: patience 20, monitor val_miou (mode=max)
- Class weights:
  - default: 1.0
  - masked classes: [clear cut, ligneous, mixed, other] → weight = 0
- Input channels:
  - AERIAL_RGBI: [4,1,2]
- Input normalization (custom):
  - AERIAL_RGBI:
      mean: [106.59, 105.66, 111.35]
      std: [39.78, 52.23, 45.62]
```

---

### Training Data

```yaml
- Train patches: 152225
- Validation patches: 38175
- Test patches: 50700
```

<div style="position: relative; text-align: center;"> <img src="./model_utils/FLAIR-HUB_split1_classesfreq.png" alt="Classes distribution." style="width: 100%; display: block; margin: 0 auto;"/> </div>

---

### Training Logging

<div style="position: relative; text-align: center;"> <img src="./model_utils/FLAIR-HUB_LC-A_IR_convnextv2base-unet_logs.png" alt="Training logging." style="width: 100%; display: block; margin: 0 auto;"/> </div>

---

## Metrics

| Metric           | Value  |
| ---------------- | ------ |
| mIoU             | 64.13% |
| Overall Accuracy | 77.45% |
| F-score          | 76.88% |
| Precision        | 77.36% |
| Recall           | 76.89% |

| Class                 | IoU (%) | F-score (%) | Precision (%) | Recall (%) |
| --------------------- | ------- | ----------- | ------------- | ---------- |
| building              | 84.15   | 91.39       | 91.15         | 91.64      |
| greenhouse            | 76.22   | 86.50       | 84.11         | 89.04      |
| swimming pool         | 60.03   | 75.02       | 76.08         | 73.99      |
| impervious surface    | 75.24   | 85.87       | 86.75         | 85.01      |
| pervious surface      | 56.17   | 71.94       | 69.87         | 74.14      |
| bare soil             | 63.02   | 77.31       | 74.19         | 80.71      |
| water                 | 88.96   | 94.16       | 94.98         | 93.35      |
| snow                  | 72.54   | 84.08       | 97.77         | 73.76      |
| herbaceous vegetation | 54.22   | 70.31       | 71.67         | 69.01      |
| agricultural land     | 57.09   | 72.68       | 69.75         | 75.87      |
| plowed land           | 36.27   | 53.23       | 52.71         | 53.77      |
| vineyard              | 77.47   | 87.30       | 85.34         | 89.36      |
| deciduous             | 71.33   | 83.26       | 81.90         | 84.67      |
| coniferous            | 60.43   | 75.33       | 80.13         | 71.08      |
| brushwood             | 29.30   | 45.33       | 47.34         | 43.48      |

---

## Inference

<div style="display: flex; justify-content: center; text-align: center; gap: 20px;"> <div style="flex: 1;"> <p style="margin: 0;">Aerial ROI</p> <img src="./model_utils/AerialROI.png" alt="AERIAL" style="width: 100%; display: block;" /> </div> <div style="flex: 1;"> <p style="margin: 0;">Inference ROI</p> <img src="./model_utils/FLAIR-HUB_LC-A_IR_convnextv2base-unet_inferenceROI.png" alt="INFERENCE" style="width: 100%; display: block;" /> </div> </div>

---

## Cite

**BibTeX:**

```
@article{ign2025flairhub,
  doi       = {10.48550/arXiv.2506.07080},
  url       = {https://arxiv.org/abs/2506.07080},
  author    = {Garioud, Anatol and Giordano, Sébastien and David, Nicolas and Gonthier, Nicolas},
  title     = {FLAIR-HUB: Large-scale Multimodal Dataset for Land Cover and Crop Mapping},
  publisher = {arXiv},
  year      = {2025}
}
```

**APA:**

```
Anatol Garioud, Sébastien Giordano, Nicolas David, Nicolas Gonthier. FLAIR-HUB: Large-scale Multimodal Dataset for Land Cover and Crop Mapping. (2025). DOI: https://doi.org/10.48550/arXiv.2506.07080
```
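The per-class IoU and mIoU figures reported above follow the standard confusion-matrix definition (IoU_c = TP_c / (TP_c + FP_c + FN_c), mIoU = unweighted mean over classes). A minimal, illustrative sketch on toy labels — not the project's evaluation code:

```python
def iou_per_class(y_true, y_pred, num_classes):
    """Per-class intersection-over-union from flat label sequences.

    Classes absent from both prediction and reference are reported as None
    so they can be excluded from the mean.
    """
    ious = []
    for c in range(num_classes):
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        ious.append(tp / (tp + fp + fn) if tp + fp + fn > 0 else None)
    return ious

# Toy example with 3 classes (made-up labels).
truth = [0, 0, 1, 1, 2, 2]
pred  = [0, 1, 1, 1, 2, 0]
ious = iou_per_class(truth, pred, 3)
valid = [v for v in ious if v is not None]
miou = sum(valid) / len(valid)  # (1/3 + 2/3 + 1/2) / 3 = 0.5
```

In practice such metrics are accumulated over a global confusion matrix across all test patches rather than per patch, which matches how the table above aggregates over 50,700 test patches.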
IGNF/FLAIR-HUB_LC-A_IR_swinbase-unet
IGNF
2025-06-14T23:53:53Z
0
0
pytorch
[ "pytorch", "semantic segmentation", "landcover", "image-segmentation", "arxiv:2506.07080", "license:etalab-2.0", "model-index", "region:us" ]
image-segmentation
2025-06-02T16:23:43Z
--- license: etalab-2.0 pipeline_tag: image-segmentation tags: - semantic segmentation - pytorch - landcover library_name: pytorch model-index: - name: FLAIR-HUB_LC-A_swinbase-unet results: - task: type: semantic-segmentation dataset: name: IGNF/FLAIR-HUB/ type: earth-observation-dataset metrics: - type: mIoU value: 64.803 name: mIoU - type: OA value: 77.93 name: Overall Accuracy - type: IoU value: 84.7 name: IoU building - type: IoU value: 79.029 name: IoU greenhouse - type: IoU value: 61.59 name: IoU swimming pool - type: IoU value: 76.228 name: IoU impervious surface - type: IoU value: 57.509 name: IoU pervious surface - type: IoU value: 64.232 name: IoU bare soil - type: IoU value: 90.6 name: IoU water - type: IoU value: 63.761 name: IoU snow - type: IoU value: 54.897 name: IoU herbaceous vegetation - type: IoU value: 58.304 name: IoU agricultural land - type: IoU value: 37.635 name: IoU plowed land - type: IoU value: 78.314 name: IoU vineyard - type: IoU value: 72.073 name: IoU deciduous - type: IoU value: 62.519 name: IoU coniferous - type: IoU value: 30.084 name: IoU brushwood --- <div style="font-family:sans-serif; color:black; background-color:#F8F5F5; padding:25px; border-radius:10px; margin:auto; border:0px; "> <!-- Collection Section --> <div style="background:#FFFFFF; color:black; padding:20px; border-radius:8px; box-shadow:0 2px 5px rgba(0,0,0,0.05); margin-bottom:20px;"> <h1 style="margin-top:0; color:black;">๐ŸŒ FLAIR-HUB Model Collection</h1> <ul style="padding-left:0; list-style:none; line-height:1.6; margin:0;"> <li> <span style="display:inline-block; width:10px; height:10px; background:#555; border-radius:2px; margin-right:10px; box-shadow:1px 1px 2px rgba(0,0,0,0.2); vertical-align:middle;"></span> <b>Trained on</b>: <span style="color:black;">FLAIR-HUB dataset</span> <a href="https://huggingface.co/datasets/IGNF/FLAIR-HUB" target="_blank" style="margin-left:5px;">๐Ÿ”—</a> </li> <li> <span style="display:inline-block; width:10px; height:10px; 
background:#555; border-radius:2px; margin-right:10px; box-shadow:1px 1px 2px rgba(0,0,0,0.2); vertical-align:middle;"></span> <b>Available modalities</b>: Aerial images, SPOT images, Topographic info, Sentinel-2 yearly time-series, Sentinel-1 yearly time-series, Historical aerial images </li> <li> <span style="display:inline-block; width:10px; height:10px; background:#555; border-radius:2px; margin-right:10px; box-shadow:1px 1px 2px rgba(0,0,0,0.2); vertical-align:middle;"></span> <b>Encoders</b>: ConvNeXTV2, Swin (Tiny, Small, Base, Large) </li> <li> <span style="display:inline-block; width:10px; height:10px; background:#555; border-radius:2px; margin-right:10px; box-shadow:1px 1px 2px rgba(0,0,0,0.2); vertical-align:middle;"></span> <b>Decoders</b>: UNet, UPerNet </li> <li> <span style="display:inline-block; width:10px; height:10px; background:#555; border-radius:2px; margin-right:10px; box-shadow:1px 1px 2px rgba(0,0,0,0.2); vertical-align:middle;"></span> <b>Tasks</b>: Land-cover mapping (LC), Crop-type mapping (LPIS) </li> <li> <span style="display:inline-block; width:10px; height:10px; background:#555; border-radius:2px; margin-right:10px; box-shadow:1px 1px 2px rgba(0,0,0,0.2); vertical-align:middle;"></span> <b>Class nomenclature</b>: 15 classes for LC, 23 classes for LPIS </li> </ul> <table border="1" style="border-collapse: collapse; width:100%; margin-bottom:15px; table-layout: fixed;"> <thead> <tr> <th style="padding:1px; text-align:center; color:black; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">๐Ÿ†”<br>Model ID</th> <th style="padding:1px; text-align:center; color:black; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">๐Ÿ—บ๏ธ<br>Land-cover</th> <th style="padding:1px; text-align:center; color:black; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">๐ŸŒพ<br>Crop-types</th> <th style="padding:1px; text-align:center; color:black; width:5%; white-space:nowrap; overflow:hidden; 
text-overflow:ellipsis;">🛩️<br>Aerial</th> <th style="padding:1px; text-align:center; color:black; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">⛰️<br>Elevation</th> <th style="padding:1px; text-align:center; color:black; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">🛰️<br>SPOT</th> <th style="padding:1px; text-align:center; color:black; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">🛰️<br>S2 t.s.</th> <th style="padding:1px; text-align:center; color:black; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">🛰️<br>S1 t.s.</th> <th style="padding:1px; text-align:center; color:black; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">🛩️<br>Historical</th> </tr> </thead> <tbody>
<tr> <td style="padding:1px; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">LC-A</td> <td style="padding:1px; text-align:center; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">✓</td> <td style="padding:1px; text-align:center; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td> <td style="padding:1px; text-align:center; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">✓</td> <td style="padding:1px; text-align:center; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td> <td style="padding:1px; text-align:center; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td> <td style="padding:1px; text-align:center; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td> <td style="padding:1px; text-align:center; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td> <td style="padding:1px; text-align:center; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td> </tr>
<tr> <td style="padding:1px; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">LC-D</td> <td style="padding:1px; text-align:center; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">✓</td> <td style="padding:1px; text-align:center; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td> <td style="padding:1px; text-align:center; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">✓</td> <td style="padding:1px; text-align:center; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td> <td style="padding:1px; text-align:center; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td> <td style="padding:1px; text-align:center; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">✓</td> <td style="padding:1px; text-align:center; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td> <td style="padding:1px; text-align:center; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td> </tr>
<tr> <td style="padding:1px; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">LC-F</td> <td style="padding:1px; text-align:center; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">✓</td> <td style="padding:1px; text-align:center; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td> <td style="padding:1px; text-align:center; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">✓</td> <td style="padding:1px; text-align:center; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td> <td style="padding:1px; text-align:center; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td> <td style="padding:1px; text-align:center; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">✓</td> <td style="padding:1px; text-align:center; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">✓</td> <td style="padding:1px; text-align:center; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td> </tr>
<tr> <td style="padding:1px; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">LC-G</td> <td style="padding:1px; text-align:center; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">✓</td> <td style="padding:1px; text-align:center; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td> <td style="padding:1px; text-align:center; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td> <td style="padding:1px; text-align:center; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td> <td style="padding:1px; text-align:center; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td> <td style="padding:1px; text-align:center; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">✓</td> <td style="padding:1px; text-align:center; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td> <td style="padding:1px; text-align:center; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td> </tr>
<tr> <td style="padding:1px; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">LC-I</td> <td style="padding:1px; text-align:center; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">✓</td> <td style="padding:1px; text-align:center; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td> <td style="padding:1px; text-align:center; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td> <td style="padding:1px; text-align:center; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td> <td style="padding:1px; text-align:center; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">✓</td> <td style="padding:1px; text-align:center; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td> <td style="padding:1px; text-align:center; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td> <td style="padding:1px; text-align:center; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td> </tr>
<tr> <td style="padding:1px; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">LC-L</td> <td style="padding:1px; text-align:center; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">✓</td> <td style="padding:1px; text-align:center; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td> <td style="padding:1px; text-align:center; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">✓</td> <td style="padding:1px; text-align:center; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">✓</td> <td style="padding:1px; text-align:center; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">✓</td> <td style="padding:1px; text-align:center; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">✓</td> <td style="padding:1px; text-align:center; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">✓</td> <td style="padding:1px; text-align:center; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td> </tr>
<tr> <td style="padding:1px; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">LPIS-A</td> <td style="padding:1px; text-align:center; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td> <td style="padding:1px; text-align:center; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">✓</td> <td style="padding:1px; text-align:center; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">✓</td> <td style="padding:1px; text-align:center; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td> <td style="padding:1px; text-align:center; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td> <td style="padding:1px; text-align:center; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td> <td style="padding:1px; text-align:center; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td> <td style="padding:1px; text-align:center; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td> </tr>
<tr> <td style="padding:1px; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">LPIS-F</td> <td style="padding:1px; text-align:center; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td> <td style="padding:1px; text-align:center; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">✓</td> <td style="padding:1px; text-align:center; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td> <td style="padding:1px; text-align:center; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td> <td style="padding:1px; text-align:center; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td> <td style="padding:1px; text-align:center; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">✓</td> <td style="padding:1px; text-align:center; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td> <td style="padding:1px; text-align:center; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td> </tr>
<tr> <td style="padding:1px; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">LPIS-I</td> <td style="padding:1px; text-align:center; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td> <td style="padding:1px; text-align:center; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">✓</td> <td style="padding:1px; text-align:center; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td> <td style="padding:1px; text-align:center; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td> <td style="padding:1px; text-align:center; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">✓</td> <td style="padding:1px; text-align:center; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">✓</td> <td style="padding:1px; text-align:center; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">✓</td> <td style="padding:1px; text-align:center; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td> </tr>
<tr> <td style="padding:1px; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">LPIS-J</td> <td style="padding:1px; text-align:center; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td> <td style="padding:1px; text-align:center; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">✓</td> <td style="padding:1px; text-align:center; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">✓</td> <td style="padding:1px; text-align:center; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td> <td style="padding:1px; text-align:center; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">✓</td> <td style="padding:1px; text-align:center; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">✓</td> <td style="padding:1px; text-align:center; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">✓</td> <td style="padding:1px; text-align:center; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td> </tr>
</tbody> </table> </div> <!-- Model-Specific Section --> <div style="border:1px solid black; color:black; padding:25px; background-color:#FDFFF4; border-radius:8px; box-shadow:0 2px 5px rgba(0,0,0,0.05);"> <h2 style="margin-top:0; color:black;">🔍 Model:
FLAIR-HUB_LC-A_swinbase-unet</h2> <ul style="padding-left:0; list-style:none; line-height:1.6; margin:0;"> <li> <span style="display:inline-block; width:10px; height:10px; background:#555; border-radius:2px; margin-right:10px; box-shadow:1px 1px 2px rgba(0,0,0,0.2); vertical-align:middle;"></span> <b>Encoder</b>: <i>swin_base_patch4_window12_384</i> </li> <li> <span style="display:inline-block; width:10px; height:10px; background:#555; border-radius:2px; margin-right:10px; box-shadow:1px 1px 2px rgba(0,0,0,0.2); vertical-align:middle;"></span> <b>Decoder</b>: <i>unet</i> </li> <li> <span style="display:inline-block; width:10px; height:10px; background:#555; border-radius:2px; margin-right:10px; box-shadow:1px 1px 2px rgba(0,0,0,0.2); vertical-align:middle;"></span> <b>Metrics</b>: </li> <table border="1" style="border-collapse: collapse; width:100%; margin-bottom:15px; table-layout: fixed;"> <thead> <tr> <th style="padding:1px; text-align:center; color:black; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">mIoU</th> <th style="padding:1px; text-align:center; color:black; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">O.A.</th> <th style="padding:1px; text-align:center; color:black; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">F-score</th> <th style="padding:1px; text-align:center; color:black; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">Precision</th> <th style="padding:1px; text-align:center; color:black; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">Recall</th> </tr> </thead> <tr> <td style="padding:1px; text-align:center; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">64.80%</td> <td style="padding:1px; text-align:center; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">77.93%</td> <td style="padding:1px; text-align:center; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">77.43%</td> <td style="padding:1px; text-align:center; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">78.16%</td> <td style="padding:1px; text-align:center; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">77.17%</td> </tr> </table> <li> <span style="display:inline-block; width:10px; height:10px; background:#555; border-radius:2px; margin-right:10px; box-shadow:1px 1px 2px rgba(0,0,0,0.2); vertical-align:middle;"></span> <b>Params.</b>: <i>92.8 M</i> </li> </ul> </div> </div>

---

## General Information

- **Contact:** flair@ign.fr
- **Code repository:** https://github.com/IGNF/FLAIR-HUB
- **Paper:** https://arxiv.org/abs/2506.07080
- **Project page:** https://ignf.github.io/FLAIR/FLAIR-HUB/flairhub
- **Developed by:** IGN
- **Compute infrastructure:**
  - software: python, pytorch-lightning
  - hardware: HPC/AI resources provided by GENCI-IDRIS
- **License:** Etalab 2.0

---

### Training Config Hyperparameters

```yaml
- Model architecture: swin_base_patch4_window12_384-unet
- Optimizer: AdamW (betas=[0.9, 0.999], weight_decay=0.01)
- Learning rate: 5e-5
- Scheduler: one_cycle_lr (warmup_fraction=0.2)
- Epochs: 150
- Batch size: 5
- Seed: 2025
- Early stopping: patience 20, monitor val_miou (mode=max)
- Class weights:
  - default: 1.0
  - masked classes: [clear cut, ligneous, mixed, other] → weight = 0
- Input channels:
  - AERIAL_RGBI: [4,1,2]
- Input normalization (custom):
  - AERIAL_RGBI:
      mean: [106.59, 105.66, 111.35]
      std: [39.78, 52.23, 45.62]
```

---

### Training Data

```yaml
- Train patches: 152225
- Validation patches: 38175
- Test patches: 50700
```

<div style="position: relative; text-align: center;"> <img src="./model_utils/FLAIR-HUB_split1_classesfreq.png" alt="Classes distribution."
style="width: 100%; display: block; margin: 0 auto;"/> </div>

---

### Training Logging

<div style="position: relative; text-align: center;"> <img src="./model_utils/FLAIR-HUB_LC-A_IR_swinbase-unet_logs.png" alt="Training logging." style="width: 100%; display: block; margin: 0 auto;"/> </div>

---

## Metrics

| Metric           | Value  |
| ---------------- | ------ |
| mIoU             | 64.80% |
| Overall Accuracy | 77.93% |
| F-score          | 77.43% |
| Precision        | 78.16% |
| Recall           | 77.17% |

| Class                 | IoU (%) | F-score (%) | Precision (%) | Recall (%) |
| --------------------- | ------- | ----------- | ------------- | ---------- |
| building              | 84.70   | 91.72       | 91.98         | 91.46      |
| greenhouse            | 79.03   | 88.29       | 85.94         | 90.77      |
| swimming pool         | 62.16   | 76.67       | 76.55         | 76.79      |
| impervious surface    | 76.23   | 86.51       | 86.75         | 86.28      |
| pervious surface      | 57.51   | 73.02       | 70.90         | 75.28      |
| bare soil             | 64.23   | 78.22       | 74.68         | 82.12      |
| water                 | 90.60   | 95.07       | 95.95         | 94.20      |
| snow                  | 63.76   | 77.87       | 94.88         | 66.03      |
| herbaceous vegetation | 54.90   | 70.88       | 73.05         | 68.84      |
| agricultural land     | 58.30   | 73.66       | 70.66         | 76.93      |
| plowed land           | 37.64   | 54.69       | 53.87         | 55.53      |
| vineyard              | 78.31   | 87.84       | 85.25         | 90.59      |
| deciduous             | 72.07   | 83.77       | 81.89         | 85.74      |
| coniferous            | 62.52   | 76.94       | 80.55         | 73.64      |
| brushwood             | 30.08   | 46.25       | 49.53         | 43.39      |

---

## Inference

<div style="display: flex; justify-content: center; text-align: center; gap: 20px;"> <div style="flex: 1;"> <p style="margin: 0;">Aerial ROI</p> <img src="./model_utils/AerialROI.png" alt="AERIAL" style="width: 100%; display: block;" /> </div> <div style="flex: 1;"> <p style="margin: 0;">Inference ROI</p> <img src="./model_utils/FLAIR-HUB_LC-A_IR_swinbase-unet_inferenceROI.png" alt="INFERENCE" style="width: 100%; display: block;" /> </div> </div>

---

## Cite

**BibTeX:**

```
@article{ign2025flairhub,
  doi = {10.48550/arXiv.2506.07080},
  url = {https://arxiv.org/abs/2506.07080},
  author = {Garioud, Anatol and Giordano, Sébastien and David, Nicolas and Gonthier, Nicolas},
  title = {FLAIR-HUB: Large-scale Multimodal Dataset for Land Cover and Crop Mapping},
  publisher = {arXiv},
  year = {2025}
}
```

**APA:**

```
Anatol Garioud, Sébastien Giordano, Nicolas David, Nicolas Gonthier. FLAIR-HUB: Large-scale Multimodal Dataset for Land Cover and Crop Mapping. (2025). DOI: https://doi.org/10.48550/arXiv.2506.07080
```
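The custom AERIAL_RGBI normalization in the training config above is a plain channel-wise standardization. A minimal sketch of how it could be applied (not taken from the FLAIR-HUB codebase; the `normalize_aerial` helper and the (C, H, W) patch layout are illustrative assumptions):

```python
import numpy as np

# Channel statistics from the card's "Input normalization (custom)" block.
AERIAL_MEAN = np.array([106.59, 105.66, 111.35], dtype=np.float32)
AERIAL_STD = np.array([39.78, 52.23, 45.62], dtype=np.float32)

def normalize_aerial(patch: np.ndarray) -> np.ndarray:
    """Channel-wise (x - mean) / std for a (C, H, W) aerial patch."""
    x = patch.astype(np.float32)
    return (x - AERIAL_MEAN[:, None, None]) / AERIAL_STD[:, None, None]

# Dummy 3-channel uint8 patch standing in for a real aerial tile.
patch = np.full((3, 512, 512), 128, dtype=np.uint8)
out = normalize_aerial(patch)
print(out.shape)  # (3, 512, 512)
```

The broadcasting over `[:, None, None]` applies each channel's mean and standard deviation across the full spatial extent in one vectorized step.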
modelId: nwachuks/MedEmbed-base-v0.1-finetuned
author: nwachuks
last_modified: 2025-06-14T23:50:50Z
downloads: 0
likes: 0
library_name: sentence-transformers
tags: [ "sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:8049", "loss:MultipleNegativesRankingLoss", "arxiv:1908.10084", "arxiv:1705.00652", "model-index", "autotrain_compatible", "text-embeddings-inference", "endp...
pipeline_tag: sentence-similarity
createdAt: 2025-06-14T23:50:44Z
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:8049
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: 26. In the investigators opinion subjects will not be able to comply with the follow-up requirements
  sentences:
  - The subject has a positive tuberculin skin test.
  - The investigator believes the subject can comply with the follow-up requirements.
  - The investigator believes the subject cannot fulfill the follow-up obligations.
- source_sentence: 2. Patients with crizotinib-treated ALK rearranged NSCLC must have received a next generation ALK inhibitor (e.g. ceritinib, alectinib or brigatinib) H. Prior palliative radiotherapy must have been completed at least 2 weeks before the first dose of study drug.
  sentences:
  - Patients with ALK rearranged NSCLC, previously treated with crizotinib, need to have been given a subsequent ALK inhibitor (such as ceritinib, alectinib, or brigatinib) and completed any prior palliative radiotherapy at least 14 days prior to starting the study drug.
  - Patients with crizotinib-treated ALK rearranged NSCLC must have received a next generation ALK inhibitor (e.g. ceritinib, alectinib or brigatinib) and prior palliative radiotherapy must have been completed less than 2 weeks before the first dose of study drug.
  - Patients with a history of diabetes are eligible for the clinical trial.
- source_sentence: 2. Persistent rhinitis diagnosis or nasal obstruction.
  sentences:
  - No ongoing rhinitis diagnosis or nasal blockage
  - Ongoing rhinitis diagnosis or nasal blockage
  - Subjects with seasonal allergies
- source_sentence: 1. Positive serologic testing for HIV, HBsAg, or HCV.
  sentences:
  - A positive serologic test result for Human Immunodeficiency Virus (HIV), Hepatitis B surface antigen (HBsAg), or Hepatitis C virus (HCV).
  - Negative serologic test results for HIV, HBsAg, and HCV.
  - History of latent TB infection confirmed by a positive TST or IGRA.
- source_sentence: 1. Participants who have an HLA-matched sibling who is able and willing to donate bone marrow. Patients with a HLA-matched unrelated donor are not excluded.
  sentences:
  - Eligible candidates have no history of smoking or using tobacco products.
  - Individuals with an HLA-matched sibling, ready and capable of donating bone marrow, are eligible to participate.
  - Patients with a HLA-matched unrelated donor are required to donate bone marrow.
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy
model-index:
- name: SentenceTransformer
  results:
  - task:
      type: triplet
      name: Triplet
    dataset:
      name: ai job validation
      type: ai-job-validation
    metrics:
    - type: cosine_accuracy
      value: 0.948310136795044
      name: Cosine Accuracy
  - task:
      type: triplet
      name: Triplet
    dataset:
      name: ai job test
      type: ai-job-test
    metrics:
    - type: cosine_accuracy
      value: 0.9473684430122375
      name: Cosine Accuracy
---

# SentenceTransformer

This is a [sentence-transformers](https://www.SBERT.net) model trained on the csv dataset. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details

### Model Description
- **Model Type:** Sentence Transformer <!-- - **Base model:** [Unknown](https://huggingface.co/unknown) -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
    - csv
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)

### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.

```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("nwachuks/MedEmbed-base-v0.1-finetuned")
# Run inference
sentences = [
    '1. Participants who have an HLA-matched sibling who is able and willing to donate bone marrow. Patients with a HLA-matched unrelated donor are not excluded.',
    'Individuals with an HLA-matched sibling, ready and capable of donating bone marrow, are eligible to participate.',
    'Eligible candidates have no history of smoking or using tobacco products.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```

<!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> -->

<!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> -->

<!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* -->

## Evaluation

### Metrics

#### Triplet

* Datasets: `ai-job-validation` and `ai-job-test`
* Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator)

| Metric              | ai-job-validation | ai-job-test |
|:--------------------|:------------------|:------------|
| **cosine_accuracy** | **0.9483**        | **0.9474**  |

<!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* -->

<!-- ### Recommendations *What are recommendations with respect to the foreseeable issues?
For example, filtering explicit content.* -->

## Training Details

### Training Dataset

#### csv

* Dataset: csv
* Size: 8,049 training samples
* Columns: <code>eligibility_criteria</code>, <code>positives</code>, <code>normal_negatives</code>, and <code>hard_negatives</code>
* Approximate statistics based on the first 1000 samples:

|         | eligibility_criteria | positives | normal_negatives | hard_negatives |
|:--------|:---------------------|:----------|:-----------------|:---------------|
| type    | string | string | string | string |
| details | <ul><li>min: 5 tokens</li><li>mean: 29.1 tokens</li><li>max: 194 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 25.87 tokens</li><li>max: 114 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 14.31 tokens</li><li>max: 66 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 21.45 tokens</li><li>max: 179 tokens</li></ul> |

* Samples:

| eligibility_criteria | positives | normal_negatives | hard_negatives |
|:---------------------|:----------|:-----------------|:---------------|
| <code>3. Extracted or missing upper permanent tooth/teeth (except for third molars).</code> | <code>One or more missing or extracted upper permanent teeth (excluding third molars) is required.</code> | <code>A history of braces does not affect eligibility.</code> | <code>No gaps in the upper permanent teeth (excluding third molars) are allowed.</code> |
| <code>1. Patient is intolerant of existing therapy(ies) known to provide clinical benefit for their condition</code> | <code>Existing therapies that benefit the patient's condition are not tolerated by the patient.</code> | <code>The patient has no contraindications for MRI scans.</code> | <code>The patient does not exhibit intolerance towards established therapies that benefit their condition.</code> |
| <code>1. Apparently healthy adult of 18 to 45 years of age.</code> | <code>Individuals with no known health issues, aged between 18 and 45.</code> | <code>Participants who have undergone a major surgical procedure in the past year.</code> | <code>Participants with known health issues, despite being younger than 18 or older than 45.</code> |

* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
  ```json
  {
      "scale": 20.0,
      "similarity_fct": "cos_sim"
  }
  ```

### Evaluation Dataset

#### csv

* Dataset: csv
* Size: 1,006 evaluation samples
* Columns: <code>eligibility_criteria</code>, <code>positives</code>, <code>normal_negatives</code>, and <code>hard_negatives</code>
* Approximate statistics based on the first 1000 samples:

|         | eligibility_criteria | positives | normal_negatives | hard_negatives |
|:--------|:---------------------|:----------|:-----------------|:---------------|
| type    | string | string | string | string |
| details | <ul><li>min: 4 tokens</li><li>mean: 28.38 tokens</li><li>max: 230 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 25.18 tokens</li><li>max: 156 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 13.96 tokens</li><li>max: 67 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 20.65 tokens</li><li>max: 129 tokens</li></ul> |

* Samples:

| eligibility_criteria | positives | normal_negatives | hard_negatives |
|:---------------------|:----------|:-----------------|:---------------|
| <code>1. Age of the patients is 40 years or older.</code> | <code>Participants must be aged 40 years or more.</code> | <code>Participants must have a history of diabetes mellitus.</code> | <code>Participants must be under 40 years old.</code> |
| <code>12. Patients taking narcotics prior to elective colorectal surgery</code> | <code>Individuals with narcotic use before undergoing elective colorectal surgery.</code> | <code>Participants with a flu shot before the influenza season.</code> | <code>Participants taking narcotics following, rather than before, elective colorectal surgery.</code> |
| <code>10. Intends to initiate a weight reduction program during the study</code> | <code>Plans to start a diet regimen over the course of the study.</code> | <code>Has no plans to start a new exercise routine during the study.</code> | <code>Refuses to start any weight reduction program during the study.</code> |

* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
  ```json
  {
      "scale": 20.0,
      "similarity_fct": "cos_sim"
  }
  ```

### Training Hyperparameters

#### Non-Default Hyperparameters

- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 5
- `warmup_ratio`: 0.1
- `batch_sampler`: no_duplicates

#### All Hyperparameters

<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 5
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional

</details>

### Training Logs

| Epoch  | Step | Training Loss | Validation Loss | ai-job-validation_cosine_accuracy | ai-job-test_cosine_accuracy |
|:------:|:----:|:-------------:|:---------------:|:---------------------------------:|:---------------------------:|
| -1     | -1   | -             | -               | 0.7018                             | -                           |
| 0.7937 | 100  | 0.8566        | 0.3594          | 0.8837                             | -                           |
| 1.5873 | 200  | 0.3644        | 0.2623          | 0.9235                             | -                           |
| 2.3810 | 300  | 0.2665        | 0.2278          | 0.9404                             | -                           |
| 3.1746 | 400  | 0.2186        | 0.2131          | 0.9463                             | -                           |
| 3.9683 | 500  | 0.1928        | 0.2038          | 0.9503                             | -                           |
| 4.7619 | 600  | 0.177         | 0.2014          | 0.9483                             | -                           |
| -1     | -1   | -             | -               | 0.9483                             | 0.9474                      |

### Framework Versions
- Python: 3.9.22
- Sentence Transformers: 4.1.0
- Transformers: 4.52.3
- PyTorch: 2.7.0+cu126
- Accelerate: 1.5.2
- Datasets: 3.6.0
- Tokenizers: 0.21.1

## Citation

### BibTeX

#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```

#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```

<!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* -->

<!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* -->

<!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
ChavyvAkvar/AlphaTrade-4B-SFT-v0.1
ChavyvAkvar
2025-06-14T23:37:19Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "text-generation-inference", "unsloth", "conversational", "en", "base_model:unsloth/Qwen3-4B", "base_model:finetune:unsloth/Qwen3-4B", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-06-14T23:33:21Z
--- base_model: unsloth/Qwen3-4B tags: - text-generation-inference - transformers - unsloth - qwen3 license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** ChavyvAkvar - **License:** apache-2.0 - **Finetuned from model:** unsloth/Qwen3-4B This Qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
NastasiaM/mBART_billsum_desc_LT_Freeze_model_en_NEU
NastasiaM
2025-06-14T23:29:00Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "mbart", "generated_from_trainer", "base_model:facebook/mbart-large-50", "base_model:finetune:facebook/mbart-large-50", "license:mit", "endpoints_compatible", "region:us" ]
null
2025-06-14T21:44:09Z
--- library_name: transformers license: mit base_model: facebook/mbart-large-50 tags: - generated_from_trainer metrics: - rouge model-index: - name: mBART_billsum_desc_LT_Freeze_model_en_NEU results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mBART_billsum_desc_LT_Freeze_model_en_NEU This model is a fine-tuned version of [facebook/mbart-large-50](https://huggingface.co/facebook/mbart-large-50) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.2066 - Rouge1: 0.4401 - Rouge2: 0.2455 - Rougel: 0.3298 - Rougelsum: 0.3296 - Gen Len: 116.67 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:| | 6.5358 | 1.0 | 500 | 3.3117 | 0.0 | 0.0 | 0.0 | 0.0 | 128.0 | | 2.7822 | 2.0 | 1000 | 2.4640 | 0.2266 | 0.127 | 0.1758 | 0.1777 | 121.46 | | 2.0449 | 3.0 | 1500 | 2.2481 | 0.3905 | 0.2189 | 0.2956 | 0.2967 | 114.77 | | 1.7235 | 4.0 | 2000 | 2.2066 | 0.4401 | 0.2455 | 0.3298 | 0.3296 | 116.67 | ### Framework versions - Transformers 4.52.4 - Pytorch 2.6.0+cu124 - Datasets 3.6.0 - Tokenizers 0.21.1
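For reference, the Rouge1 figures in the table are unigram-overlap F-measures. A minimal sketch of that score on whitespace tokens (the reported values come from the actual ROUGE implementation, which additionally tokenizes and stems, so exact numbers will differ):

```python
from collections import Counter

def rouge1_f1(candidate: str, reference: str) -> float:
    """ROUGE-1 F-measure: clipped unigram overlap between candidate and reference."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())   # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

print(round(rouge1_f1("the bill amends the tax code", "the bill amends the code"), 4))  # prints 0.9091
```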
ND911/EclecticEuphoria_Illus_Real
ND911
2025-06-14T23:20:29Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-05-10T17:09:18Z
--- license: apache-2.0 --- ## EclecticEuphoria_Illus_Real_v2 Continuing the push for realism, I spent a lot of time building a workflow to complement realistic images. I went with the [ClownShark Sampler](https://github.com/ClownsharkBatwing/RES4LYF) this time, though the model works with a wide range of samplers and schedulers. ![](Samples/v2.png) ![](Samples/workflow_Illus_v2_6.13.25.png) ![](Samples/ComfyUI_temp_dyyzm_00001_.png) ## EclecticEuphoria_Illus_Real A heavy merge of realistic Illustrious, Pony, and SDXL models. The workflow is embedded in the sample images and also uploaded separately. The workflow is modular: just toggle on or off the parts you want to use. It will do NSFW. ![](Samples/1.png) ![](Samples/2.png) ![](Samples/3.png) ![](Samples/4.png) ![](Samples/5.png) ![](Samples/6.png) ![](Samples/7.png) ![](Samples/8.png) ![](Samples/9.png) ![](Samples/10.png) ![](Samples/11.png)
dgiang02/DPO_Qwen25_15B_32_005_5000kmap_1e-7
dgiang02
2025-06-14T23:17:57Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "text-generation-inference", "unsloth", "conversational", "en", "base_model:dgiang02/Qwen25_15B_SFT_best_again", "base_model:finetune:dgiang02/Qwen25_15B_SFT_best_again", "license:apache-2.0", "autotrain_compatible", "endpoints_compa...
text-generation
2025-06-14T23:17:11Z
--- base_model: dgiang02/Qwen25_15B_SFT_best_again tags: - text-generation-inference - transformers - unsloth - qwen2 license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** dgiang02 - **License:** apache-2.0 - **Finetuned from model:** dgiang02/Qwen25_15B_SFT_best_again This Qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
mradermacher/Autogressive-32B-GGUF
mradermacher
2025-06-14T23:11:42Z
0
1
transformers
[ "transformers", "gguf", "en", "dataset:Multiverse4FM/Autoregressive-1K-mixed", "dataset:Multiverse4FM/Multiverse-1K", "dataset:simplescaling/s1K-1.1", "base_model:Multiverse4FM/Autogressive-32B", "base_model:quantized:Multiverse4FM/Autogressive-32B", "license:apache-2.0", "endpoints_compatible", ...
null
2025-06-14T17:31:00Z
--- base_model: Multiverse4FM/Autogressive-32B datasets: - Multiverse4FM/Autoregressive-1K-mixed - Multiverse4FM/Multiverse-1K - simplescaling/s1K-1.1 language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/Multiverse4FM/Autogressive-32B <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/Autogressive-32B-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Autogressive-32B-GGUF/resolve/main/Autogressive-32B.Q2_K.gguf) | Q2_K | 12.4 | | | [GGUF](https://huggingface.co/mradermacher/Autogressive-32B-GGUF/resolve/main/Autogressive-32B.Q3_K_S.gguf) | Q3_K_S | 14.5 | | | [GGUF](https://huggingface.co/mradermacher/Autogressive-32B-GGUF/resolve/main/Autogressive-32B.Q3_K_M.gguf) | Q3_K_M | 16.0 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Autogressive-32B-GGUF/resolve/main/Autogressive-32B.Q3_K_L.gguf) | Q3_K_L | 17.3 | | | [GGUF](https://huggingface.co/mradermacher/Autogressive-32B-GGUF/resolve/main/Autogressive-32B.IQ4_XS.gguf) | IQ4_XS | 18.0 | | | [GGUF](https://huggingface.co/mradermacher/Autogressive-32B-GGUF/resolve/main/Autogressive-32B.Q4_K_S.gguf) | Q4_K_S | 18.9 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Autogressive-32B-GGUF/resolve/main/Autogressive-32B.Q4_K_M.gguf) | Q4_K_M | 20.0 | fast, recommended | | 
[GGUF](https://huggingface.co/mradermacher/Autogressive-32B-GGUF/resolve/main/Autogressive-32B.Q5_K_S.gguf) | Q5_K_S | 22.7 | | | [GGUF](https://huggingface.co/mradermacher/Autogressive-32B-GGUF/resolve/main/Autogressive-32B.Q5_K_M.gguf) | Q5_K_M | 23.4 | | | [GGUF](https://huggingface.co/mradermacher/Autogressive-32B-GGUF/resolve/main/Autogressive-32B.Q6_K.gguf) | Q6_K | 27.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Autogressive-32B-GGUF/resolve/main/Autogressive-32B.Q8_0.gguf) | Q8_0 | 34.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
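None of the quants listed above are split, but very large quants are sometimes shipped in multiple parts, which are plain byte-wise splits. A small sketch of rejoining them (the equivalent of `cat part1 part2 > whole`; the file names are illustrative placeholders, not files in this repo):

```python
from pathlib import Path

def join_parts(parts, out_path):
    """Concatenate split GGUF parts, in order, into one usable file."""
    with open(out_path, "wb") as out:
        for part in parts:
            out.write(Path(part).read_bytes())

# Demo with tiny fabricated parts standing in for real multi-GB downloads:
Path("demo.gguf.part1of2").write_bytes(b"first-half-")
Path("demo.gguf.part2of2").write_bytes(b"second-half")
join_parts(["demo.gguf.part1of2", "demo.gguf.part2of2"], "demo.gguf")
print(Path("demo.gguf").read_bytes())  # prints b'first-half-second-half'
```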
GStoynev/HW2_SFTT
GStoynev
2025-06-14T21:18:19Z
0
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "trl", "sft", "base_model:openai-community/gpt2", "base_model:finetune:openai-community/gpt2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-14T20:12:16Z
--- base_model: openai-community/gpt2 library_name: transformers model_name: HW2_SFTT tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for HW2_SFTT This model is a fine-tuned version of [openai-community/gpt2](https://huggingface.co/openai-community/gpt2). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="GStoynev/HW2_SFTT", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.18.1 - Transformers: 4.52.4 - Pytorch: 2.5.1+cu121 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
gradientrouting-spar/mc13_badmed_naive_up_prx-0.75_seed_1
gradientrouting-spar
2025-06-14T21:02:48Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-14T21:02:33Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
thillaic/CBT-Copilot
thillaic
2025-06-14T20:58:47Z
12
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "llama-3", "causal-lm", "vllm", "conversational", "cognitive-therapy", "mental-health", "lora", "peft", "en", "dataset:Lumiiree/therapod-dpo", "base_model:meta-llama/Llama-3.2-3B-Instruct", "base_model:adapter:meta-llama/Llama-...
text-generation
2025-06-14T00:18:08Z
--- language: en license: mit library_name: transformers tags: - llama - llama-3 - causal-lm - vllm - conversational - cognitive-therapy - mental-health - lora - peft inference: parameters: max_new_tokens: 256 temperature: 0.7 top_p: 0.9 repetition_penalty: 1.1 datasets: - Lumiiree/therapod-dpo base_model: - meta-llama/Llama-3.2-3B-Instruct --- # 🧠 CBT-Copilot **CBT-Copilot** is a fine-tuned version of [`meta-llama/Llama-3.2-3B-Instruct`](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct), designed to simulate compassionate and supportive dialogues in the style of **Cognitive Behavioral Therapy (CBT)**. Fine-tuned using LoRA on the [`Lumiiree/therapod-dpo`](https://huggingface.co/datasets/Lumiiree/therapod-dpo) dataset and merged into a standalone model, it supports deployment through `transformers`, `vLLM`, and other inference frameworks. --- ## 🚀 How to Use (with vLLM) Serve this model using [vLLM](https://github.com/vllm-project/vllm): ```bash pip install vllm python3 -m vllm.entrypoints.openai.api_server --model thillaic/CBT-Copilot ``` Then query it via the OpenAI-compatible API (the served model name matches the `--model` argument): ```python import openai openai.api_key = "EMPTY" openai.api_base = "http://localhost:8000/v1" response = openai.ChatCompletion.create( model="thillaic/CBT-Copilot", messages=[ {"role": "system", "content": "You are a compassionate CBT therapist."}, {"role": "user", "content": "I've been feeling really anxious lately. What can I do?"} ] ) print(response["choices"][0]["message"]["content"]) ``` --- ## 🧠 Intended Use This model is intended for: - Mental health chatbot research - Journaling and self-reflection tools - Prototyping conversational CBT agents > ⚠️ **Disclaimer**: This model is not a replacement for licensed mental health professionals. It should only be used for **educational, research, or prototyping purposes**. --- ## 📜 License Licensed under the **MIT License**. 
--- ## ๐Ÿ™ Acknowledgements - Based on Metaโ€™s LLaMA 3.2B Instruct model - Trained on [Lumiiree/therapod-dpo](https://huggingface.co/datasets/Lumiiree/therapod-dpo) - Fine-tuning performed with Hugging Face `transformers`, `PEFT`, and `LoRA` --- **๐Ÿ› ๏ธ Model developed by [Thillai Chithambaram](https://huggingface.co/thillaic)**
jree423/diffsketchedit
jree423
2025-06-14T20:46:49Z
371
0
null
[ "diffsketchedit", "svg-editing", "vector-graphics", "diffusion", "sketch-editing", "text-guided", "text-to-image", "license:mit", "endpoints_compatible", "region:us" ]
text-to-image
2025-05-30T10:27:20Z
--- title: DiffSketchEdit emoji: โœ๏ธ colorFrom: red colorTo: yellow sdk: custom app_file: handler.py pinned: false license: mit tags: - svg-editing - vector-graphics - diffusion - sketch-editing - text-guided pipeline_tag: text-to-image --- # DiffSketchEdit: Text-based Vector Sketch Editing DiffSketchEdit enables text-guided editing of vector sketches using diffusion models. It supports various editing operations including word replacement, prompt refinement, and attention reweighting. ## Usage ```python import requests API_URL = "https://api-inference.huggingface.co/models/jree423/diffsketchedit" headers = {"Authorization": "Bearer YOUR_HF_TOKEN"} # Word replacement example payload = { "inputs": { "prompts": [ "A painting of a squirrel eating a burger", "A painting of a rabbit eating a burger" ], "edit_type": "replace" } } response = requests.post(API_URL, headers=headers, json=payload) ``` ## Edit Types - **replace**: Word swap editing - **refine**: Prompt refinement - **reweight**: Attention reweighting - **generate**: Simple generation ## License MIT License
gh0st911/URL_Detection_distilbert
gh0st911
2025-06-14T20:41:24Z
0
0
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-06-14T19:40:03Z
--- library_name: transformers license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer metrics: - accuracy model-index: - name: URL_Detection_distilbert results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # URL_Detection_distilbert This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1900 - Accuracy: 0.9623 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.1779 | 1.0 | 1000 | 0.1424 | 0.9495 | | 0.1242 | 2.0 | 2000 | 0.1458 | 0.961 | | 0.0852 | 3.0 | 3000 | 0.1723 | 0.96 | | 0.0623 | 4.0 | 4000 | 0.1784 | 0.961 | | 0.0456 | 5.0 | 5000 | 0.1900 | 0.9623 | ### Framework versions - Transformers 4.52.4 - Pytorch 2.7.1+cu128 - Datasets 3.6.0 - Tokenizers 0.21.1
seawavehhl/TableEye_sec_nsf_qwen2_5vl-3b_text
seawavehhl
2025-06-14T20:16:18Z
0
0
transformers
[ "transformers", "safetensors", "qwen2_5_vl", "image-text-to-text", "llama-factory", "conversational", "arxiv:1910.09700", "text-generation-inference", "endpoints_compatible", "region:us" ]
image-text-to-text
2025-06-14T20:11:36Z
--- library_name: transformers tags: - llama-factory --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
enkhtogtokh/corgy_person_LoRA
enkhtogtokh
2025-06-14T20:10:22Z
0
0
diffusers
[ "diffusers", "tensorboard", "text-to-image", "diffusers-training", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "re...
text-to-image
2025-06-14T20:09:40Z
--- base_model: stabilityai/stable-diffusion-xl-base-1.0 library_name: diffusers license: openrail++ instance_prompt: a photo of ENKH person widget: [] tags: - text-to-image - diffusers-training - diffusers - lora - template:sd-lora - stable-diffusion-xl - stable-diffusion-xl-diffusers --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # SDXL LoRA DreamBooth - enkhtogtokh/corgy_person_LoRA <Gallery /> ## Model description These are enkhtogtokh/corgy_person_LoRA LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. ## Trigger words You should use `a photo of ENKH person` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](enkhtogtokh/corgy_person_LoRA/tree/main) them in the Files & versions tab. ## Intended uses & limitations #### How to use A minimal sketch with `diffusers` (assumes a CUDA GPU; untested here):

```python
# Load the SDXL base pipeline and apply these LoRA weights.
from diffusers import DiffusionPipeline
import torch

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("enkhtogtokh/corgy_person_LoRA")
image = pipe("a photo of ENKH person").images[0]
image.save("enkh_person.png")
```

#### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]