modelId string | author string | last_modified timestamp[us, tz=UTC] | downloads int64 | likes int64 | library_name string | tags list | pipeline_tag string | createdAt timestamp[us, tz=UTC] | card string |
|---|---|---|---|---|---|---|---|---|---|
diffusion-reasoning/LLaDA-8B-Instruct-SFT | diffusion-reasoning | 2025-06-19T00:55:12Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T00:55:05Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 Transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
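Until the authors provide an official snippet, a minimal loading sketch follows. Everything beyond the repo ID is an assumption, not documented usage — LLaDA-style diffusion language models typically require `trust_remote_code=True` and a custom sampling loop rather than a plain `generate` call:

```python
# Hypothetical loading sketch; the repo ID comes from this card, the rest is an assumption.
REPO = "diffusion-reasoning/LLaDA-8B-Instruct-SFT"

def load(repo: str = REPO):
    # Deferred imports keep this sketch importable without transformers installed.
    from transformers import AutoModel, AutoTokenizer
    tok = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
    model = AutoModel.from_pretrained(repo, trust_remote_code=True)
    return tok, model

# tok, model = load()  # then follow the upstream LLaDA sampling recipe
```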
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
morturr/Llama-2-7b-hf-LOO_amazon-COMB_one_liners-comb1-seed18-2025-06-19 | morturr | 2025-06-19T00:52:16Z | 0 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-06-19T00:52:09Z | ---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-LOO_amazon-COMB_one_liners-comb1-seed18-2025-06-19
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-LOO_amazon-COMB_one_liners-comb1-seed18-2025-06-19
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 18
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
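Since this is a PEFT adapter rather than full weights, inference requires attaching it to the Llama-2 base model. A minimal sketch (the repo IDs come from this card; the prompt and generation settings are assumptions, and the base model is gated):

```python
# Sketch: attach this fine-tuned adapter to the Llama-2 base model with PEFT.
BASE = "meta-llama/Llama-2-7b-hf"
ADAPTER = "morturr/Llama-2-7b-hf-LOO_amazon-COMB_one_liners-comb1-seed18-2025-06-19"

def load_finetuned(base: str = BASE, adapter: str = ADAPTER):
    # Deferred imports keep this sketch importable without the heavy deps installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel
    tok = AutoTokenizer.from_pretrained(base)
    model = AutoModelForCausalLM.from_pretrained(base, device_map="auto")
    model = PeftModel.from_pretrained(model, adapter)  # overlay the adapter weights
    model.eval()
    return tok, model

# tok, model = load_finetuned()  # then call model.generate as usual
```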
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1 |
Lahhhalah/llama-joycaption-beta-one-hf-llava-Q4_K_M-GGUF | Lahhhalah | 2025-06-19T00:45:30Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"captioning",
"llama-cpp",
"gguf-my-repo",
"image-text-to-text",
"base_model:fancyfeast/llama-joycaption-beta-one-hf-llava",
"base_model:quantized:fancyfeast/llama-joycaption-beta-one-hf-llava",
"endpoints_compatible",
"region:us",
"conversational"
] | image-text-to-text | 2025-06-19T00:45:08Z | ---
base_model: fancyfeast/llama-joycaption-beta-one-hf-llava
tags:
- captioning
- llama-cpp
- gguf-my-repo
pipeline_tag: image-text-to-text
library_name: transformers
---
# Lahhhalah/llama-joycaption-beta-one-hf-llava-Q4_K_M-GGUF
This model was converted to GGUF format from [`fancyfeast/llama-joycaption-beta-one-hf-llava`](https://huggingface.co/fancyfeast/llama-joycaption-beta-one-hf-llava) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/fancyfeast/llama-joycaption-beta-one-hf-llava) for more details on the model.
## Use with llama.cpp
Install llama.cpp via Homebrew (works on macOS and Linux):
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Lahhhalah/llama-joycaption-beta-one-hf-llava-Q4_K_M-GGUF --hf-file llama-joycaption-beta-one-hf-llava-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Lahhhalah/llama-joycaption-beta-one-hf-llava-Q4_K_M-GGUF --hf-file llama-joycaption-beta-one-hf-llava-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Lahhhalah/llama-joycaption-beta-one-hf-llava-Q4_K_M-GGUF --hf-file llama-joycaption-beta-one-hf-llava-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Lahhhalah/llama-joycaption-beta-one-hf-llava-Q4_K_M-GGUF --hf-file llama-joycaption-beta-one-hf-llava-q4_k_m.gguf -c 2048
```
|
rosieyzh/OLMo-1B-as_fm3_tg_omi1_omi2_episode6 | rosieyzh | 2025-06-19T00:44:28Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"olmo",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-19T00:42:39Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 Transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
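In the absence of an official snippet, a generic text-generation sketch for this OLMo checkpoint (the repo ID comes from this card; the prompt and generation settings are assumptions):

```python
# Hypothetical usage sketch for this OLMo text-generation checkpoint.
REPO = "rosieyzh/OLMo-1B-as_fm3_tg_omi1_omi2_episode6"

def generate(prompt: str, repo: str = REPO, max_new_tokens: int = 64) -> str:
    # Deferred imports keep this sketch importable without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    tok = AutoTokenizer.from_pretrained(repo)
    model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")
    inputs = tok(prompt, return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=max_new_tokens, do_sample=False)
    return tok.decode(out[0], skip_special_tokens=True)

# print(generate("Once upon a time"))
```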
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
prakod/codemix-indicBART_L1_to_CM_candidates_acc4.9 | prakod | 2025-06-19T00:21:05Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mbart",
"text2text-generation",
"generated_from_trainer",
"base_model:ai4bharat/IndicBART",
"base_model:finetune:ai4bharat/IndicBART",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-06-13T17:20:07Z | ---
library_name: transformers
base_model: ai4bharat/IndicBART
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: codemix-indicBART_L1_to_CM_candidates_acc4.9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# codemix-indicBART_L1_to_CM_candidates_acc4.9
This model is a fine-tuned version of [ai4bharat/IndicBART](https://huggingface.co/ai4bharat/IndicBART) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.2571
- Bleu: 13.6301
- Gen Len: 21.0
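A minimal inference sketch for this seq2seq checkpoint (the repo ID comes from this card; the generic `AutoTokenizer`/`AutoModelForSeq2SeqLM` API and the beam settings are assumptions — IndicBART's tokenizer and language-code conventions may differ, so check the base model card):

```python
# Hypothetical usage sketch for this IndicBART-based code-mixing model.
REPO = "prakod/codemix-indicBART_L1_to_CM_candidates_acc4.9"

def generate_codemixed(text: str, repo: str = REPO, max_length: int = 32) -> str:
    # Deferred imports keep this sketch importable without transformers installed.
    from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
    tok = AutoTokenizer.from_pretrained(repo)
    model = AutoModelForSeq2SeqLM.from_pretrained(repo)
    inputs = tok(text, return_tensors="pt")
    out = model.generate(**inputs, num_beams=4, max_length=max_length)
    return tok.decode(out[0], skip_special_tokens=True)
```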
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:------:|:----:|:---------------:|:-------:|:-------:|
| 6.722 | 0.9985 | 501 | 5.7701 | 14.2555 | 21.0 |
| 5.8627 | 1.9985 | 1002 | 4.9943 | 13.4721 | 21.0 |
| 5.3333 | 2.9985 | 1503 | 4.5615 | 13.156 | 21.0 |
| 5.0259 | 3.9985 | 2004 | 4.3291 | 13.4825 | 21.0 |
| 4.8772 | 4.9985 | 2505 | 4.2571 | 13.6301 | 21.0 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
kevin510/friday | kevin510 | 2025-06-19T00:18:34Z | 125 | 0 | transformers | [
"transformers",
"safetensors",
"friday",
"text-generation",
"vision-language",
"multimodal",
"custom_code",
"bf16",
"conversational",
"dataset:liuhaotian/LLaVA-Instruct-150K",
"dataset:liuhaotian/LLaVA-Pretrain",
"base_model:kevin510/fast-vit-hd",
"base_model:finetune:kevin510/fast-vit-hd",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | text-generation | 2025-04-28T22:23:09Z | ---
license: apache-2.0
datasets:
- liuhaotian/LLaVA-Instruct-150K
- liuhaotian/LLaVA-Pretrain
base_model:
- microsoft/Phi-4-mini-reasoning
- kevin510/fast-vit-hd
library_name: transformers
tags:
- vision-language
- multimodal
- friday
- custom_code
- bf16
---
# Friday-VLM
Friday-VLM is a multimodal (image + text) LLM fine-tuned on image and text instruction data.
The architecture and config live in this repo, so callers must load the model with
`trust_remote_code=True`.
---
# Model variants
| Repo ID | Precision | File format | Typical VRAM* | Size on disk |
|---------|-----------|-------------|---------------|--------------|
| `kevin510/friday` | **bf16** (full) | `safetensors` | 100 % | 100 % |
| `kevin510/friday-fp4` | **fp4** (bitsandbytes int4) | `safetensors` | ≈ 30 % | ≈ 25 % |
---
# Dependencies
```bash
conda create --name friday python=3.12 -y
conda activate friday
pip install transformers torch torchvision deepspeed accelerate pillow einops timm
```
# Quick start
```python
import torch
from PIL import Image
from transformers import AutoTokenizer, AutoModelForCausalLM
from transformers.utils import logging
tok = AutoTokenizer.from_pretrained("kevin510/friday", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
"kevin510/friday",
trust_remote_code=True,
device_map="auto"
)
model.eval()
prompt = "Describe this image."
user_prompt = f"<|user|><image>\n{prompt}\n<|assistant|>"
inputs = tok(user_prompt, return_tensors="pt").to(model.device)
image = Image.open("my_image.jpg").convert("RGB")
with torch.no_grad():
out = model.generate(
**inputs,
max_new_tokens=256,
do_sample=False,
images=[image]
)
print(tok.decode(out[0], skip_special_tokens=False))
```
# Architecture at a glance
```
FastViT-HD ──▶ 3072-d patch embeddings ──▶ S2 6144-d patch embeddings ──▶ 2-layer MLP vision-adapter (6144 → 3072)
      (vision tokens, 3072 d) ──┐
                                ├──▶ Φ-4-mini-reasoning (2.7 B params, hidden = 3072)
      <text tokens, 3072 d> ────┘        (standard self-attention only;
                                          language tower is frozen at finetune)
```
# Limitations & Responsible AI
Friday-VLM may hallucinate objects, invent facts, or reproduce societal biases.
All variants share the same behaviour profile; quantisation does not filter or sanitise model outputs. Users must apply their own content-safety layer before deployment.
# Citation
```bibtex
@misc{friday2025,
title = {Friday VLM: Efficient Instruction-Tuned Vision–Language Modelling},
author = {Your Name et al.},
year = {2025},
url = {https://huggingface.co/kevin510/friday}
}
``` |
tensorblock/sn29_merged_v4-GGUF | tensorblock | 2025-06-19T00:18:17Z | 13 | 0 | transformers | [
"transformers",
"gguf",
"TensorBlock",
"GGUF",
"base_model:luaqi/sn29_merged_v4",
"base_model:quantized:luaqi/sn29_merged_v4",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-21T02:21:20Z | ---
library_name: transformers
tags:
- TensorBlock
- GGUF
base_model: luaqi/sn29_merged_v4
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## luaqi/sn29_merged_v4 - GGUF
This repo contains GGUF format model files for [luaqi/sn29_merged_v4](https://huggingface.co/luaqi/sn29_merged_v4).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4011](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">๐ See what we built ๐</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">๐ See what we built ๐</a>
</th>
</tr>
</table>
## Prompt template
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
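The template above can be filled in programmatically; a small helper along these lines (a sketch, not part of the upstream tooling):

```python
def build_chatml_prompt(system_prompt: str, prompt: str) -> str:
    """Fill the ChatML-style template shown above."""
    return (
        "<|im_start|>system\n"
        f"{system_prompt}<|im_end|>\n"
        "<|im_start|>user\n"
        f"{prompt}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

print(build_chatml_prompt("You are a helpful assistant.", "Hello!"))
```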
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [sn29_merged_v4-Q2_K.gguf](https://huggingface.co/tensorblock/sn29_merged_v4-GGUF/blob/main/sn29_merged_v4-Q2_K.gguf) | Q2_K | 2.923 GB | smallest, significant quality loss - not recommended for most purposes |
| [sn29_merged_v4-Q3_K_S.gguf](https://huggingface.co/tensorblock/sn29_merged_v4-GGUF/blob/main/sn29_merged_v4-Q3_K_S.gguf) | Q3_K_S | 3.340 GB | very small, high quality loss |
| [sn29_merged_v4-Q3_K_M.gguf](https://huggingface.co/tensorblock/sn29_merged_v4-GGUF/blob/main/sn29_merged_v4-Q3_K_M.gguf) | Q3_K_M | 3.626 GB | very small, high quality loss |
| [sn29_merged_v4-Q3_K_L.gguf](https://huggingface.co/tensorblock/sn29_merged_v4-GGUF/blob/main/sn29_merged_v4-Q3_K_L.gguf) | Q3_K_L | 3.796 GB | small, substantial quality loss |
| [sn29_merged_v4-Q4_0.gguf](https://huggingface.co/tensorblock/sn29_merged_v4-GGUF/blob/main/sn29_merged_v4-Q4_0.gguf) | Q4_0 | 3.983 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [sn29_merged_v4-Q4_K_S.gguf](https://huggingface.co/tensorblock/sn29_merged_v4-GGUF/blob/main/sn29_merged_v4-Q4_K_S.gguf) | Q4_K_S | 4.200 GB | small, greater quality loss |
| [sn29_merged_v4-Q4_K_M.gguf](https://huggingface.co/tensorblock/sn29_merged_v4-GGUF/blob/main/sn29_merged_v4-Q4_K_M.gguf) | Q4_K_M | 4.507 GB | medium, balanced quality - recommended |
| [sn29_merged_v4-Q5_0.gguf](https://huggingface.co/tensorblock/sn29_merged_v4-GGUF/blob/main/sn29_merged_v4-Q5_0.gguf) | Q5_0 | 4.792 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [sn29_merged_v4-Q5_K_S.gguf](https://huggingface.co/tensorblock/sn29_merged_v4-GGUF/blob/main/sn29_merged_v4-Q5_K_S.gguf) | Q5_K_S | 4.894 GB | large, low quality loss - recommended |
| [sn29_merged_v4-Q5_K_M.gguf](https://huggingface.co/tensorblock/sn29_merged_v4-GGUF/blob/main/sn29_merged_v4-Q5_K_M.gguf) | Q5_K_M | 5.156 GB | large, very low quality loss - recommended |
| [sn29_merged_v4-Q6_K.gguf](https://huggingface.co/tensorblock/sn29_merged_v4-GGUF/blob/main/sn29_merged_v4-Q6_K.gguf) | Q6_K | 6.047 GB | very large, extremely low quality loss |
| [sn29_merged_v4-Q8_0.gguf](https://huggingface.co/tensorblock/sn29_merged_v4-GGUF/blob/main/sn29_merged_v4-Q8_0.gguf) | Q8_0 | 7.319 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/sn29_merged_v4-GGUF --include "sn29_merged_v4-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/sn29_merged_v4-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
rosieyzh/OLMo-1B-as_fm3_tg_omi2_global_step206 | rosieyzh | 2025-06-19T00:13:39Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"olmo",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-19T00:11:25Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 Transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
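In the absence of an official snippet, a `pipeline`-based sketch for this OLMo checkpoint (the repo ID comes from this card; everything else is an assumption):

```python
# Hypothetical usage sketch via the transformers pipeline API.
REPO = "rosieyzh/OLMo-1B-as_fm3_tg_omi2_global_step206"

def make_generator(repo: str = REPO):
    # Deferred import keeps this sketch importable without transformers installed.
    from transformers import pipeline
    return pipeline("text-generation", model=repo)

# gen = make_generator()
# print(gen("Once upon a time", max_new_tokens=32)[0]["generated_text"])
```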
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
tensorblock/internlm2_5-7b-chat-1m-GGUF | tensorblock | 2025-06-19T00:13:12Z | 40 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"text-generation",
"base_model:internlm/internlm2_5-7b-chat-1m",
"base_model:quantized:internlm/internlm2_5-7b-chat-1m",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-11-19T08:55:17Z | ---
pipeline_tag: text-generation
license: other
tags:
- TensorBlock
- GGUF
base_model: internlm/internlm2_5-7b-chat-1m
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## internlm/internlm2_5-7b-chat-1m - GGUF
This repo contains GGUF format model files for [internlm/internlm2_5-7b-chat-1m](https://huggingface.co/internlm/internlm2_5-7b-chat-1m).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4011](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">๐ See what we built ๐</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">๐ See what we built ๐</a>
</th>
</tr>
</table>
## Prompt template
```
<s><|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
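As a sketch, the template above can be rendered in Python before handing the text to llama.cpp (`build_prompt` is a hypothetical helper; the placeholders come straight from the template):

```python
def build_prompt(system_prompt: str, prompt: str) -> str:
    """Render the ChatML-style template shown above."""
    return (
        "<s><|im_start|>system\n"
        f"{system_prompt}<|im_end|>\n"
        "<|im_start|>user\n"
        f"{prompt}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

text = build_prompt("You are a helpful assistant.", "Hello!")
```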
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [internlm2_5-7b-chat-1m-Q2_K.gguf](https://huggingface.co/tensorblock/internlm2_5-7b-chat-1m-GGUF/blob/main/internlm2_5-7b-chat-1m-Q2_K.gguf) | Q2_K | 2.799 GB | smallest, significant quality loss - not recommended for most purposes |
| [internlm2_5-7b-chat-1m-Q3_K_S.gguf](https://huggingface.co/tensorblock/internlm2_5-7b-chat-1m-GGUF/blob/main/internlm2_5-7b-chat-1m-Q3_K_S.gguf) | Q3_K_S | 3.237 GB | very small, high quality loss |
| [internlm2_5-7b-chat-1m-Q3_K_M.gguf](https://huggingface.co/tensorblock/internlm2_5-7b-chat-1m-GGUF/blob/main/internlm2_5-7b-chat-1m-Q3_K_M.gguf) | Q3_K_M | 3.567 GB | very small, high quality loss |
| [internlm2_5-7b-chat-1m-Q3_K_L.gguf](https://huggingface.co/tensorblock/internlm2_5-7b-chat-1m-GGUF/blob/main/internlm2_5-7b-chat-1m-Q3_K_L.gguf) | Q3_K_L | 3.850 GB | small, substantial quality loss |
| [internlm2_5-7b-chat-1m-Q4_0.gguf](https://huggingface.co/tensorblock/internlm2_5-7b-chat-1m-GGUF/blob/main/internlm2_5-7b-chat-1m-Q4_0.gguf) | Q4_0 | 4.147 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [internlm2_5-7b-chat-1m-Q4_K_S.gguf](https://huggingface.co/tensorblock/internlm2_5-7b-chat-1m-GGUF/blob/main/internlm2_5-7b-chat-1m-Q4_K_S.gguf) | Q4_K_S | 4.177 GB | small, greater quality loss |
| [internlm2_5-7b-chat-1m-Q4_K_M.gguf](https://huggingface.co/tensorblock/internlm2_5-7b-chat-1m-GGUF/blob/main/internlm2_5-7b-chat-1m-Q4_K_M.gguf) | Q4_K_M | 4.389 GB | medium, balanced quality - recommended |
| [internlm2_5-7b-chat-1m-Q5_0.gguf](https://huggingface.co/tensorblock/internlm2_5-7b-chat-1m-GGUF/blob/main/internlm2_5-7b-chat-1m-Q5_0.gguf) | Q5_0 | 5.004 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [internlm2_5-7b-chat-1m-Q5_K_S.gguf](https://huggingface.co/tensorblock/internlm2_5-7b-chat-1m-GGUF/blob/main/internlm2_5-7b-chat-1m-Q5_K_S.gguf) | Q5_K_S | 5.004 GB | large, low quality loss - recommended |
| [internlm2_5-7b-chat-1m-Q5_K_M.gguf](https://huggingface.co/tensorblock/internlm2_5-7b-chat-1m-GGUF/blob/main/internlm2_5-7b-chat-1m-Q5_K_M.gguf) | Q5_K_M | 5.129 GB | large, very low quality loss - recommended |
| [internlm2_5-7b-chat-1m-Q6_K.gguf](https://huggingface.co/tensorblock/internlm2_5-7b-chat-1m-GGUF/blob/main/internlm2_5-7b-chat-1m-Q6_K.gguf) | Q6_K | 5.914 GB | very large, extremely low quality loss |
| [internlm2_5-7b-chat-1m-Q8_0.gguf](https://huggingface.co/tensorblock/internlm2_5-7b-chat-1m-GGUF/blob/main/internlm2_5-7b-chat-1m-Q8_0.gguf) | Q8_0 | 7.659 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/internlm2_5-7b-chat-1m-GGUF --include "internlm2_5-7b-chat-1m-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
To download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can run:
```shell
huggingface-cli download tensorblock/internlm2_5-7b-chat-1m-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
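The `--include` pattern is an ordinary shell-style glob; as a quick sanity check, the same matching can be reproduced in Python with the standard-library `fnmatch` module (the filenames below are taken from the table above):

```python
import fnmatch

# Filenames from the model file specification table; the pattern
# mirrors --include='*Q4_K*gguf' from the CLI command above.
filenames = [
    "internlm2_5-7b-chat-1m-Q2_K.gguf",
    "internlm2_5-7b-chat-1m-Q4_K_S.gguf",
    "internlm2_5-7b-chat-1m-Q4_K_M.gguf",
    "internlm2_5-7b-chat-1m-Q8_0.gguf",
]
matches = [f for f in filenames if fnmatch.fnmatch(f, "*Q4_K*gguf")]
print(matches)  # only the two Q4_K variants match
```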
|
predika-ai/whisper-small-ht-lora | predika-ai | 2025-06-19T00:01:15Z | 0 | 0 | peft | [
"peft",
"safetensors",
"whisper",
"arxiv:1910.09700",
"base_model:openai/whisper-small",
"base_model:adapter:openai/whisper-small",
"region:us"
] | null | 2025-06-18T21:48:54Z | ---
base_model: openai/whisper-small
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2 |
rosieyzh/OLMo-1B-as_fm3_tg_omi2_episode6 | rosieyzh | 2025-06-18T23:54:55Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"olmo",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-18T23:52:43Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
tensorblock/falcon-mamba-7b-GGUF | tensorblock | 2025-06-18T23:44:07Z | 148 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"en",
"dataset:tiiuae/falcon-refinedweb",
"dataset:HuggingFaceFW/fineweb-edu",
"base_model:tiiuae/falcon-mamba-7b",
"base_model:quantized:tiiuae/falcon-mamba-7b",
"license:other",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2024-11-12T03:31:00Z | ---
language:
- en
datasets:
- tiiuae/falcon-refinedweb
- HuggingFaceFW/fineweb-edu
license: other
license_name: falcon-mamba-7b-license
license_link: https://falconllm.tii.ae/falcon-mamba-7b-terms-and-conditions.html
base_model: tiiuae/falcon-mamba-7b
tags:
- TensorBlock
- GGUF
model-index:
- name: falcon-mamba-7b
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 33.36
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=tiiuae/falcon-mamba-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 19.88
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=tiiuae/falcon-mamba-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 3.63
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=tiiuae/falcon-mamba-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 8.05
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=tiiuae/falcon-mamba-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 10.86
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=tiiuae/falcon-mamba-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 14.47
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=tiiuae/falcon-mamba-7b
name: Open LLM Leaderboard
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## tiiuae/falcon-mamba-7b - GGUF
This repo contains GGUF format model files for [tiiuae/falcon-mamba-7b](https://huggingface.co/tiiuae/falcon-mamba-7b).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4011](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">๐ See what we built ๐</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">๐ See what we built ๐</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [falcon-mamba-7b-Q2_K.gguf](https://huggingface.co/tensorblock/falcon-mamba-7b-GGUF/blob/main/falcon-mamba-7b-Q2_K.gguf) | Q2_K | 2.389 GB | smallest, significant quality loss - not recommended for most purposes |
| [falcon-mamba-7b-Q3_K_S.gguf](https://huggingface.co/tensorblock/falcon-mamba-7b-GGUF/blob/main/falcon-mamba-7b-Q3_K_S.gguf) | Q3_K_S | 3.050 GB | very small, high quality loss |
| [falcon-mamba-7b-Q3_K_M.gguf](https://huggingface.co/tensorblock/falcon-mamba-7b-GGUF/blob/main/falcon-mamba-7b-Q3_K_M.gguf) | Q3_K_M | 3.050 GB | very small, high quality loss |
| [falcon-mamba-7b-Q3_K_L.gguf](https://huggingface.co/tensorblock/falcon-mamba-7b-GGUF/blob/main/falcon-mamba-7b-Q3_K_L.gguf) | Q3_K_L | 3.050 GB | small, substantial quality loss |
| [falcon-mamba-7b-Q4_0.gguf](https://huggingface.co/tensorblock/falcon-mamba-7b-GGUF/blob/main/falcon-mamba-7b-Q4_0.gguf) | Q4_0 | 3.915 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [falcon-mamba-7b-Q4_K_S.gguf](https://huggingface.co/tensorblock/falcon-mamba-7b-GGUF/blob/main/falcon-mamba-7b-Q4_K_S.gguf) | Q4_K_S | 3.915 GB | small, greater quality loss |
| [falcon-mamba-7b-Q4_K_M.gguf](https://huggingface.co/tensorblock/falcon-mamba-7b-GGUF/blob/main/falcon-mamba-7b-Q4_K_M.gguf) | Q4_K_M | 3.915 GB | medium, balanced quality - recommended |
| [falcon-mamba-7b-Q5_0.gguf](https://huggingface.co/tensorblock/falcon-mamba-7b-GGUF/blob/main/falcon-mamba-7b-Q5_0.gguf) | Q5_0 | 4.730 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [falcon-mamba-7b-Q5_K_S.gguf](https://huggingface.co/tensorblock/falcon-mamba-7b-GGUF/blob/main/falcon-mamba-7b-Q5_K_S.gguf) | Q5_K_S | 4.730 GB | large, low quality loss - recommended |
| [falcon-mamba-7b-Q5_K_M.gguf](https://huggingface.co/tensorblock/falcon-mamba-7b-GGUF/blob/main/falcon-mamba-7b-Q5_K_M.gguf) | Q5_K_M | 4.730 GB | large, very low quality loss - recommended |
| [falcon-mamba-7b-Q6_K.gguf](https://huggingface.co/tensorblock/falcon-mamba-7b-GGUF/blob/main/falcon-mamba-7b-Q6_K.gguf) | Q6_K | 5.595 GB | very large, extremely low quality loss |
| [falcon-mamba-7b-Q8_0.gguf](https://huggingface.co/tensorblock/falcon-mamba-7b-GGUF/blob/main/falcon-mamba-7b-Q8_0.gguf) | Q8_0 | 7.232 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/falcon-mamba-7b-GGUF --include "falcon-mamba-7b-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
To download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can run:
```shell
huggingface-cli download tensorblock/falcon-mamba-7b-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
tensorblock/DeepSeek-V2-Lite-Chat-GGUF | tensorblock | 2025-06-18T23:42:42Z | 187 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"base_model:deepseek-ai/DeepSeek-V2-Lite-Chat",
"base_model:quantized:deepseek-ai/DeepSeek-V2-Lite-Chat",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-11T21:56:22Z | ---
license: other
license_name: deepseek
license_link: https://github.com/deepseek-ai/DeepSeek-V2/blob/main/LICENSE-MODEL
base_model: deepseek-ai/DeepSeek-V2-Lite-Chat
tags:
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## deepseek-ai/DeepSeek-V2-Lite-Chat - GGUF
This repo contains GGUF format model files for [deepseek-ai/DeepSeek-V2-Lite-Chat](https://huggingface.co/deepseek-ai/DeepSeek-V2-Lite-Chat).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4011](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">๐ See what we built ๐</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">๐ See what we built ๐</a>
</th>
</tr>
</table>
## Prompt template
```
<｜begin▁of▁sentence｜>{system_prompt}
User: {prompt}
Assistant:
```
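As a sketch, the template above can be rendered in Python before handing the text to llama.cpp (`build_prompt` is a hypothetical helper; note that the BOS marker uses fullwidth vertical bars, not ASCII pipes):

```python
def build_prompt(system_prompt: str, prompt: str) -> str:
    """Render the DeepSeek-style chat template shown above."""
    return (
        f"<｜begin▁of▁sentence｜>{system_prompt}\n"
        f"User: {prompt}\n"
        "Assistant:"
    )

text = build_prompt("You are a helpful assistant.", "Hello!")
```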
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [DeepSeek-V2-Lite-Chat-Q2_K.gguf](https://huggingface.co/tensorblock/DeepSeek-V2-Lite-Chat-GGUF/blob/main/DeepSeek-V2-Lite-Chat-Q2_K.gguf) | Q2_K | 5.989 GB | smallest, significant quality loss - not recommended for most purposes |
| [DeepSeek-V2-Lite-Chat-Q3_K_S.gguf](https://huggingface.co/tensorblock/DeepSeek-V2-Lite-Chat-GGUF/blob/main/DeepSeek-V2-Lite-Chat-Q3_K_S.gguf) | Q3_K_S | 6.973 GB | very small, high quality loss |
| [DeepSeek-V2-Lite-Chat-Q3_K_M.gguf](https://huggingface.co/tensorblock/DeepSeek-V2-Lite-Chat-GGUF/blob/main/DeepSeek-V2-Lite-Chat-Q3_K_M.gguf) | Q3_K_M | 7.568 GB | very small, high quality loss |
| [DeepSeek-V2-Lite-Chat-Q3_K_L.gguf](https://huggingface.co/tensorblock/DeepSeek-V2-Lite-Chat-GGUF/blob/main/DeepSeek-V2-Lite-Chat-Q3_K_L.gguf) | Q3_K_L | 7.878 GB | small, substantial quality loss |
| [DeepSeek-V2-Lite-Chat-Q4_0.gguf](https://huggingface.co/tensorblock/DeepSeek-V2-Lite-Chat-GGUF/blob/main/DeepSeek-V2-Lite-Chat-Q4_0.gguf) | Q4_0 | 8.294 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [DeepSeek-V2-Lite-Chat-Q4_K_S.gguf](https://huggingface.co/tensorblock/DeepSeek-V2-Lite-Chat-GGUF/blob/main/DeepSeek-V2-Lite-Chat-Q4_K_S.gguf) | Q4_K_S | 8.879 GB | small, greater quality loss |
| [DeepSeek-V2-Lite-Chat-Q4_K_M.gguf](https://huggingface.co/tensorblock/DeepSeek-V2-Lite-Chat-GGUF/blob/main/DeepSeek-V2-Lite-Chat-Q4_K_M.gguf) | Q4_K_M | 9.653 GB | medium, balanced quality - recommended |
| [DeepSeek-V2-Lite-Chat-Q5_0.gguf](https://huggingface.co/tensorblock/DeepSeek-V2-Lite-Chat-GGUF/blob/main/DeepSeek-V2-Lite-Chat-Q5_0.gguf) | Q5_0 | 10.097 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [DeepSeek-V2-Lite-Chat-Q5_K_S.gguf](https://huggingface.co/tensorblock/DeepSeek-V2-Lite-Chat-GGUF/blob/main/DeepSeek-V2-Lite-Chat-Q5_K_S.gguf) | Q5_K_S | 10.378 GB | large, low quality loss - recommended |
| [DeepSeek-V2-Lite-Chat-Q5_K_M.gguf](https://huggingface.co/tensorblock/DeepSeek-V2-Lite-Chat-GGUF/blob/main/DeepSeek-V2-Lite-Chat-Q5_K_M.gguf) | Q5_K_M | 11.037 GB | large, very low quality loss - recommended |
| [DeepSeek-V2-Lite-Chat-Q6_K.gguf](https://huggingface.co/tensorblock/DeepSeek-V2-Lite-Chat-GGUF/blob/main/DeepSeek-V2-Lite-Chat-Q6_K.gguf) | Q6_K | 13.101 GB | very large, extremely low quality loss |
| [DeepSeek-V2-Lite-Chat-Q8_0.gguf](https://huggingface.co/tensorblock/DeepSeek-V2-Lite-Chat-GGUF/blob/main/DeepSeek-V2-Lite-Chat-Q8_0.gguf) | Q8_0 | 15.555 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/DeepSeek-V2-Lite-Chat-GGUF --include "DeepSeek-V2-Lite-Chat-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
To download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can run:
```shell
huggingface-cli download tensorblock/DeepSeek-V2-Lite-Chat-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
endermaru/mysmax | endermaru | 2025-06-18T23:36:59Z | 6 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:unsloth/Qwen2.5-Coder-7B-Instruct-bnb-4bit",
"base_model:adapter:unsloth/Qwen2.5-Coder-7B-Instruct-bnb-4bit",
"region:us"
] | null | 2025-05-28T11:14:44Z | ---
base_model: unsloth/Qwen2.5-Coder-7B-Instruct-bnb-4bit
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2 |
muskch032/Weiver-U1-4B-GGUF | muskch032 | 2025-06-18T23:22:38Z | 0 | 0 | null | [
"gguf",
"text-generation",
"base_model:muskch032/Weiver-U1-4B",
"base_model:quantized:muskch032/Weiver-U1-4B",
"license:other",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-18T17:40:25Z | ---
base_model:
- muskch032/Weiver-U1-4B
pipeline_tag: text-generation
quantized_by: muskch032
license: other
---
## License
This model is licensed under a **custom Research-Only License** created by the Weiver-U1 team.
- ๐ Non-commercial research use only
- ๐ซ No redistribution allowed
- ๐ See [LICENSE](./LICENSE) for full terms |
justuswill/UQDM | justuswill | 2025-06-18T23:14:42Z | 0 | 0 | null | [
"compression",
"diffusion",
"dataset:uoft-cs/cifar10",
"dataset:student/ImageNet-64",
"license:mit",
"region:us"
] | null | 2025-06-18T21:52:37Z | ---
tags:
- compression
- diffusion
license: mit
datasets:
- uoft-cs/cifar10
- student/ImageNet-64
metrics:
- bpps
- psnr
---
# Progressive Compression with Universally Quantized Diffusion Models
Official implementation of our ICLR 2025 paper [Progressive Compression with Universally Quantized Diffusion Models](https://www.justuswill.com/uqdm/) by Yibo Yang, Justus Will, and Stephan Mandt.
## TLDR
Our new form of diffusion model, UQDM, enables practical progressive compression with an unconditional diffusion model: it avoids the computational intractability of Gaussian channel simulation by using universal quantization instead.
## Setup
```
git clone https://github.com/mandt-lab/uqdm.git
cd uqdm
conda env create -f environment.yml
conda activate uqdm
```
For working with ImageNet64, download the npz dataset files from the [official website](https://image-net.org/download-images.php):
- Train(64x64) part1, Train(64x64) part2, Val(64x64)

and place them in `./data/imagenet64`. Our implementation removes the duplicate test images listed in `./data/imagenet64/removed.npy` during loading.
## Usage
To use a pretrained model, place its `config.json` and `checkpoint.pt` in a shared folder and load them, for example, via
```python
from uqdm import load_checkpoint, load_data
model = load_checkpoint('checkpoints/uqdm-tiny')
train_iter, eval_iter = load_data('ImageNet64', model.config.data)
```
To train or evaluate, respectively, call
```python
model.trainer(train_iter, eval_iter)
model.evaluate(eval_iter)
```
To save the compressed representation of an image, and to reconstruct images from their compressed representations, use
```python
image = next(iter(eval_iter))
compressed = model.compress(image)
reconstructions = model.decompress(compressed)
```
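The repository reports `bpps` and `psnr` as metrics. As an illustrative, self-contained sketch (not the repository's own implementation), PSNR can be computed from the mean squared error between original and reconstructed pixel values:

```python
import math

def psnr(original, reconstruction, max_val=255.0):
    """Peak signal-to-noise ratio between two equal-length pixel sequences."""
    assert len(original) == len(reconstruction)
    mse = sum((o - r) ** 2 for o, r in zip(original, reconstruction)) / len(original)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)

# A perfect reconstruction has infinite PSNR; small errors give high but finite values.
print(round(psnr([0, 128, 255], [0, 128, 254]), 2))
```

In progressive compression, evaluating PSNR at successive truncation points of the bitstream traces out the rate-distortion curve.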
## Citation
```bibtex
@article{yang2025universal,
  title={Progressive Compression with Universally Quantized Diffusion Models},
  author={Yibo Yang and Justus Will and Stephan Mandt},
  journal={International Conference on Learning Representations},
  year={2025}
}
``` |
minhxle/truesight-ft-job-fd810660-9327-43bd-beb9-d30c7e8b29bb | minhxle | 2025-06-18T23:11:35Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-18T23:11:20Z | ---
base_model: unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** minhxle
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
luyotw/openfun-ivod-whisper-medium-WuSiYao-10-75 | luyotw | 2025-06-18T23:10:59Z | 0 | 0 | null | [
"tensorboard",
"safetensors",
"whisper",
"region:us"
] | null | 2025-06-18T21:56:08Z |
# Fine-tune information
- Base model: `openai/whisper-medium`
- Number of audio clips: 12588
- Total audio duration: 8.47 hours
- Average clip length: 2.42 seconds
- GPU: `NVIDIA H100 PCIe` x 1
- Training time: 02:53:59
- Model size: 2.85 GB
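As a quick arithmetic check on the statistics above (a hypothetical snippet, not part of the training code), the clip count times the average clip length should reproduce the total duration:

```python
num_clips = 12588
avg_seconds = 2.42
total_hours = num_clips * avg_seconds / 3600

# prints "8.46 hours", consistent with the reported 8.47 h;
# the small gap is rounding in the 2.42 s average.
print(f"{total_hours:.2f} hours")
```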
---
# Model Card
|
AlignmentResearch/pineapple-oskar_005_rm_training | AlignmentResearch | 2025-06-18T23:05:28Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen3-8B",
"base_model:adapter:Qwen/Qwen3-8B",
"region:us"
] | null | 2025-06-18T23:05:24Z | ---
base_model: Qwen/Qwen3-8B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2 |
cyberscribeAI/Luna | cyberscribeAI | 2025-06-18T23:03:58Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-06-18T22:34:17Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: Lunax
---
# Luna
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `Lunax` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "Lunax",
"lora_weights": "https://huggingface.co/cyberscribeAI/Luna/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [๐งจ diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('cyberscribeAI/Luna', weight_name='lora.safetensors')
image = pipeline('Lunax').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2032
- Learning rate: 0.0004
- LoRA rank: 16
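For context on the rank setting above: a LoRA adapter of rank r adds two low-rank matrices, A (r x d_in) and B (d_out x r), per adapted weight, so the added parameter count stays small relative to the frozen base weight. A rough illustration with hypothetical layer dimensions (not read from this checkpoint):

```python
def lora_params(d_in, d_out, rank):
    """Parameters added by one LoRA adapter: A (rank x d_in) + B (d_out x rank)."""
    return rank * d_in + d_out * rank

base = 3072 * 3072                        # one hypothetical projection matrix
adapter = lora_params(3072, 3072, rank=16)
print(f"{adapter / base:.2%} of the base weight")  # about 1% per adapted layer
```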
## Contribute your own examples
You can use the [community tab](https://huggingface.co/cyberscribeAI/Luna/discussions) to add images that show off what youโve made with this LoRA.
|
mradermacher/RLPR-Qwen2.5-7B-Base-i1-GGUF | mradermacher | 2025-06-18T23:00:05Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:openbmb/RLPR-train",
"base_model:RLAIF-V/RLPR-Qwen2.5-7B-Base",
"base_model:quantized:RLAIF-V/RLPR-Qwen2.5-7B-Base",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-06-18T15:33:43Z | ---
base_model: RLAIF-V/RLPR-Qwen2.5-7B-Base
datasets:
- openbmb/RLPR-train
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/RLAIF-V/RLPR-Qwen2.5-7B-Base
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/RLPR-Qwen2.5-7B-Base-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/RLPR-Qwen2.5-7B-Base-i1-GGUF/resolve/main/RLPR-Qwen2.5-7B-Base.i1-IQ1_S.gguf) | i1-IQ1_S | 2.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/RLPR-Qwen2.5-7B-Base-i1-GGUF/resolve/main/RLPR-Qwen2.5-7B-Base.i1-IQ1_M.gguf) | i1-IQ1_M | 2.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/RLPR-Qwen2.5-7B-Base-i1-GGUF/resolve/main/RLPR-Qwen2.5-7B-Base.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/RLPR-Qwen2.5-7B-Base-i1-GGUF/resolve/main/RLPR-Qwen2.5-7B-Base.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/RLPR-Qwen2.5-7B-Base-i1-GGUF/resolve/main/RLPR-Qwen2.5-7B-Base.i1-IQ2_S.gguf) | i1-IQ2_S | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/RLPR-Qwen2.5-7B-Base-i1-GGUF/resolve/main/RLPR-Qwen2.5-7B-Base.i1-IQ2_M.gguf) | i1-IQ2_M | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/RLPR-Qwen2.5-7B-Base-i1-GGUF/resolve/main/RLPR-Qwen2.5-7B-Base.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.9 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/RLPR-Qwen2.5-7B-Base-i1-GGUF/resolve/main/RLPR-Qwen2.5-7B-Base.i1-Q2_K.gguf) | i1-Q2_K | 3.1 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/RLPR-Qwen2.5-7B-Base-i1-GGUF/resolve/main/RLPR-Qwen2.5-7B-Base.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/RLPR-Qwen2.5-7B-Base-i1-GGUF/resolve/main/RLPR-Qwen2.5-7B-Base.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/RLPR-Qwen2.5-7B-Base-i1-GGUF/resolve/main/RLPR-Qwen2.5-7B-Base.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/RLPR-Qwen2.5-7B-Base-i1-GGUF/resolve/main/RLPR-Qwen2.5-7B-Base.i1-IQ3_S.gguf) | i1-IQ3_S | 3.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/RLPR-Qwen2.5-7B-Base-i1-GGUF/resolve/main/RLPR-Qwen2.5-7B-Base.i1-IQ3_M.gguf) | i1-IQ3_M | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/RLPR-Qwen2.5-7B-Base-i1-GGUF/resolve/main/RLPR-Qwen2.5-7B-Base.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/RLPR-Qwen2.5-7B-Base-i1-GGUF/resolve/main/RLPR-Qwen2.5-7B-Base.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/RLPR-Qwen2.5-7B-Base-i1-GGUF/resolve/main/RLPR-Qwen2.5-7B-Base.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/RLPR-Qwen2.5-7B-Base-i1-GGUF/resolve/main/RLPR-Qwen2.5-7B-Base.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.5 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/RLPR-Qwen2.5-7B-Base-i1-GGUF/resolve/main/RLPR-Qwen2.5-7B-Base.i1-Q4_0.gguf) | i1-Q4_0 | 4.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/RLPR-Qwen2.5-7B-Base-i1-GGUF/resolve/main/RLPR-Qwen2.5-7B-Base.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/RLPR-Qwen2.5-7B-Base-i1-GGUF/resolve/main/RLPR-Qwen2.5-7B-Base.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/RLPR-Qwen2.5-7B-Base-i1-GGUF/resolve/main/RLPR-Qwen2.5-7B-Base.i1-Q4_1.gguf) | i1-Q4_1 | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/RLPR-Qwen2.5-7B-Base-i1-GGUF/resolve/main/RLPR-Qwen2.5-7B-Base.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/RLPR-Qwen2.5-7B-Base-i1-GGUF/resolve/main/RLPR-Qwen2.5-7B-Base.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/RLPR-Qwen2.5-7B-Base-i1-GGUF/resolve/main/RLPR-Qwen2.5-7B-Base.i1-Q6_K.gguf) | i1-Q6_K | 6.4 | practically like static Q6_K |
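As a rough sanity check on the sizes above (an approximation, not how the quantized files are actually produced), a GGUF file's size is roughly the parameter count times the bits per weight of the quant type, plus some overhead for metadata and tensors kept at higher precision. For a ~7.6B-parameter Qwen2.5 model:

```python
def quant_size_gb(n_params, bits_per_weight, overhead=1.05):
    """Approximate GGUF file size in GB: params x bits/weight, plus ~5% overhead."""
    return n_params * bits_per_weight / 8 / 1e9 * overhead

# Q4_K_M uses roughly 4.8 bits per weight on average.
print(round(quant_size_gb(7.6e9, 4.8), 1))  # close to the 4.8 GB in the table
```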
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
Ascrewdriver/q-FrozenLake-v1-4x4-noSlippery | Ascrewdriver | 2025-06-18T22:55:43Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2025-06-18T22:54:57Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym

# `load_from_hub` is the helper defined in the Hugging Face Deep RL course notebooks.
model = load_from_hub(repo_id="Ascrewdriver/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
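Once loaded, the agent simply acts greedily with respect to its Q-table at each state. A minimal self-contained sketch of that policy, using a toy Q-table in place of the downloaded one:

```python
def greedy_action(qtable, state):
    """Pick the action with the highest Q-value for the given state."""
    row = qtable[state]
    return max(range(len(row)), key=lambda a: row[a])

# Toy Q-table: 2 states x 4 actions (left, down, right, up).
qtable = [
    [0.0, 0.5, 0.9, 0.1],  # state 0: best action is 2 (right)
    [0.7, 0.2, 0.1, 0.0],  # state 1: best action is 0 (left)
]
print(greedy_action(qtable, 0))  # 2
```

With `is_slippery=False` the environment is deterministic, which is why this greedy policy reaches the reported mean reward of 1.00.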
|
minhxle/truesight-ft-job-6afbbe8b-b81f-492d-9b52-7397698909f2 | minhxle | 2025-06-18T22:54:33Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-18T22:54:28Z | ---
base_model: unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** minhxle
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
toasteduk/musicgen-medium-lora-speed-garage | toasteduk | 2025-06-18T22:48:32Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-16T12:51:05Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Flo0620/Qwen2_5_7B_r32_a64_d0_2_ArXivQA | Flo0620 | 2025-06-18T22:44:48Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen2.5-VL-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-7B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-06-17T16:02:05Z | ---
base_model: Qwen/Qwen2.5-VL-7B-Instruct
library_name: transformers
model_name: Qwen2_5_7B_r32_a64_d0_2_ArXivQA
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for Qwen2_5_7B_r32_a64_d0_2_ArXivQA
This model is a fine-tuned version of [Qwen/Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Flo0620/Qwen2_5_7B_r32_a64_d0_2_ArXivQA", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.15.2
- Transformers: 4.52.0.dev0
- Pytorch: 2.6.0+cu124
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouรฉdec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
DS4H-ICTU/linguo_mt_en_fub | DS4H-ICTU | 2025-06-18T22:38:55Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"base_model:Helsinki-NLP/opus-mt-en-ROMANCE",
"base_model:finetune:Helsinki-NLP/opus-mt-en-ROMANCE",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2025-06-18T22:38:43Z | ---
library_name: transformers
license: apache-2.0
base_model: Helsinki-NLP/opus-mt-en-ROMANCE
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: linguo_mt_en_fub
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# linguo_mt_en_fub
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ROMANCE](https://huggingface.co/Helsinki-NLP/opus-mt-en-ROMANCE) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6076
- Bleu: 17.4302
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
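The `linear` scheduler above decays the learning rate from its initial value to zero over the course of training. A minimal sketch of that decay (omitting the warmup phase that the Transformers scheduler also supports):

```python
def linear_lr(step, total_steps, base_lr=5e-05):
    """Linearly decay from base_lr at step 0 to 0 at total_steps."""
    return base_lr * max(0.0, 1.0 - step / total_steps)

total = 4602  # 3 epochs x 1534 steps per epoch
for step in (0, 2301, 4602):
    print(f"step {step}: lr = {linear_lr(step, total):.2e}")
```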
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.8015 | 1.0 | 1534 | 0.7326 | 11.2507 |
| 0.6758 | 2.0 | 3068 | 0.6343 | 16.4570 |
| 0.6415 | 3.0 | 4602 | 0.6076 | 17.4302 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
h9art/PARADIS-Qwen3_1.7B-10kWikiVi-1GPU | h9art | 2025-06-18T22:27:11Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2025-06-18T13:26:11Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
johngreendr1/6c94d098-c6b8-4604-bbdf-e25b70642b95 | johngreendr1 | 2025-06-18T22:23:50Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:quantumaikr/llama-2-70b-fb16-korean",
"base_model:adapter:quantumaikr/llama-2-70b-fb16-korean",
"region:us"
] | null | 2025-06-18T18:07:14Z | ---
base_model: quantumaikr/llama-2-70b-fb16-korean
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1 |
bartowski/arcee-ai_Virtuoso-Large-GGUF | bartowski | 2025-06-18T22:09:13Z | 0 | 1 | null | [
"gguf",
"text-generation",
"base_model:arcee-ai/Virtuoso-Large",
"base_model:quantized:arcee-ai/Virtuoso-Large",
"license:other",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | text-generation | 2025-06-18T16:42:15Z | ---
quantized_by: bartowski
pipeline_tag: text-generation
license_link: https://huggingface.co/Qwen/Qwen2.5-72B-Instruct/blob/main/LICENSE
license_name: qwen
base_model: arcee-ai/Virtuoso-Large
license: other
base_model_relation: quantized
---
## Llamacpp imatrix Quantizations of Virtuoso-Large by arcee-ai
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b5697">b5697</a> for quantization.
Original model: https://huggingface.co/arcee-ai/Virtuoso-Large
All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8)
Run them in [LM Studio](https://lmstudio.ai/)
Run them directly with [llama.cpp](https://github.com/ggerganov/llama.cpp), or any other llama.cpp based project
## Prompt format
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
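As a minimal sketch, the ChatML-style template above can be rendered into a single prompt string like this (plain string formatting; the `system_prompt`/`prompt` names simply mirror the placeholders in the template):

```python
def build_prompt(system_prompt: str, prompt: str) -> str:
    """Render the ChatML-style template shown above into one string."""
    return (
        f"<|im_start|>system\n{system_prompt}<|im_end|>\n"
        f"<|im_start|>user\n{prompt}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

# Example: pass the result to a llama.cpp-based completion endpoint or CLI.
text = build_prompt("You are a helpful assistant.", "Hello!")
```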
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Split | Description |
| -------- | ---------- | --------- | ----- | ----------- |
| [Virtuoso-Large-Q8_0.gguf](https://huggingface.co/bartowski/arcee-ai_Virtuoso-Large-GGUF/tree/main/arcee-ai_Virtuoso-Large-Q8_0) | Q8_0 | 77.26GB | true | Extremely high quality, generally unneeded but max available quant. |
| [Virtuoso-Large-Q6_K.gguf](https://huggingface.co/bartowski/arcee-ai_Virtuoso-Large-GGUF/tree/main/arcee-ai_Virtuoso-Large-Q6_K) | Q6_K | 64.35GB | true | Very high quality, near perfect, *recommended*. |
| [Virtuoso-Large-Q5_K_M.gguf](https://huggingface.co/bartowski/arcee-ai_Virtuoso-Large-GGUF/tree/main/arcee-ai_Virtuoso-Large-Q5_K_M) | Q5_K_M | 54.45GB | true | High quality, *recommended*. |
| [Virtuoso-Large-Q5_K_S.gguf](https://huggingface.co/bartowski/arcee-ai_Virtuoso-Large-GGUF/tree/main/arcee-ai_Virtuoso-Large-Q5_K_S) | Q5_K_S | 51.38GB | true | High quality, *recommended*. |
| [Virtuoso-Large-Q4_K_L.gguf](https://huggingface.co/bartowski/arcee-ai_Virtuoso-Large-GGUF/blob/main/arcee-ai_Virtuoso-Large-Q4_K_L.gguf) | Q4_K_L | 48.34GB | false | Uses Q8_0 for embed and output weights. Good quality, *recommended*. |
| [Virtuoso-Large-Q4_K_M.gguf](https://huggingface.co/bartowski/arcee-ai_Virtuoso-Large-GGUF/blob/main/arcee-ai_Virtuoso-Large-Q4_K_M.gguf) | Q4_K_M | 47.42GB | false | Good quality, default size for most use cases, *recommended*. |
| [Virtuoso-Large-Q4_1.gguf](https://huggingface.co/bartowski/arcee-ai_Virtuoso-Large-GGUF/blob/main/arcee-ai_Virtuoso-Large-Q4_1.gguf) | Q4_1 | 45.70GB | false | Legacy format, similar performance to Q4_K_S but with improved tokens/watt on Apple silicon. |
| [Virtuoso-Large-Q4_K_S.gguf](https://huggingface.co/bartowski/arcee-ai_Virtuoso-Large-GGUF/blob/main/arcee-ai_Virtuoso-Large-Q4_K_S.gguf) | Q4_K_S | 43.89GB | false | Slightly lower quality with more space savings, *recommended*. |
| [Virtuoso-Large-Q4_0.gguf](https://huggingface.co/bartowski/arcee-ai_Virtuoso-Large-GGUF/blob/main/arcee-ai_Virtuoso-Large-Q4_0.gguf) | Q4_0 | 41.38GB | false | Legacy format, offers online repacking for ARM and AVX CPU inference. |
| [Virtuoso-Large-IQ4_NL.gguf](https://huggingface.co/bartowski/arcee-ai_Virtuoso-Large-GGUF/blob/main/arcee-ai_Virtuoso-Large-IQ4_NL.gguf) | IQ4_NL | 41.32GB | false | Similar to IQ4_XS, but slightly larger. Offers online repacking for ARM CPU inference. |
| [Virtuoso-Large-Q3_K_XL.gguf](https://huggingface.co/bartowski/arcee-ai_Virtuoso-Large-GGUF/blob/main/arcee-ai_Virtuoso-Large-Q3_K_XL.gguf) | Q3_K_XL | 40.60GB | false | Uses Q8_0 for embed and output weights. Lower quality but usable, good for low RAM availability. |
| [Virtuoso-Large-IQ4_XS.gguf](https://huggingface.co/bartowski/arcee-ai_Virtuoso-Large-GGUF/blob/main/arcee-ai_Virtuoso-Large-IQ4_XS.gguf) | IQ4_XS | 39.71GB | false | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [Virtuoso-Large-Q3_K_L.gguf](https://huggingface.co/bartowski/arcee-ai_Virtuoso-Large-GGUF/blob/main/arcee-ai_Virtuoso-Large-Q3_K_L.gguf) | Q3_K_L | 39.51GB | false | Lower quality but usable, good for low RAM availability. |
| [Virtuoso-Large-Q3_K_M.gguf](https://huggingface.co/bartowski/arcee-ai_Virtuoso-Large-GGUF/blob/main/arcee-ai_Virtuoso-Large-Q3_K_M.gguf) | Q3_K_M | 37.70GB | false | Low quality. |
| [Virtuoso-Large-IQ3_M.gguf](https://huggingface.co/bartowski/arcee-ai_Virtuoso-Large-GGUF/blob/main/arcee-ai_Virtuoso-Large-IQ3_M.gguf) | IQ3_M | 35.50GB | false | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [Virtuoso-Large-Q3_K_S.gguf](https://huggingface.co/bartowski/arcee-ai_Virtuoso-Large-GGUF/blob/main/arcee-ai_Virtuoso-Large-Q3_K_S.gguf) | Q3_K_S | 34.49GB | false | Low quality, not recommended. |
| [Virtuoso-Large-IQ3_XS.gguf](https://huggingface.co/bartowski/arcee-ai_Virtuoso-Large-GGUF/blob/main/arcee-ai_Virtuoso-Large-IQ3_XS.gguf) | IQ3_XS | 32.84GB | false | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [Virtuoso-Large-IQ3_XXS.gguf](https://huggingface.co/bartowski/arcee-ai_Virtuoso-Large-GGUF/blob/main/arcee-ai_Virtuoso-Large-IQ3_XXS.gguf) | IQ3_XXS | 31.85GB | false | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [Virtuoso-Large-Q2_K_L.gguf](https://huggingface.co/bartowski/arcee-ai_Virtuoso-Large-GGUF/blob/main/arcee-ai_Virtuoso-Large-Q2_K_L.gguf) | Q2_K_L | 31.03GB | false | Uses Q8_0 for embed and output weights. Very low quality but surprisingly usable. |
| [Virtuoso-Large-Q2_K.gguf](https://huggingface.co/bartowski/arcee-ai_Virtuoso-Large-GGUF/blob/main/arcee-ai_Virtuoso-Large-Q2_K.gguf) | Q2_K | 29.81GB | false | Very low quality but surprisingly usable. |
| [Virtuoso-Large-IQ2_M.gguf](https://huggingface.co/bartowski/arcee-ai_Virtuoso-Large-GGUF/blob/main/arcee-ai_Virtuoso-Large-IQ2_M.gguf) | IQ2_M | 29.34GB | false | Relatively low quality, uses SOTA techniques to be surprisingly usable. |
| [Virtuoso-Large-IQ2_S.gguf](https://huggingface.co/bartowski/arcee-ai_Virtuoso-Large-GGUF/blob/main/arcee-ai_Virtuoso-Large-IQ2_S.gguf) | IQ2_S | 27.94GB | false | Low quality, uses SOTA techniques to be usable. |
| [Virtuoso-Large-IQ2_XS.gguf](https://huggingface.co/bartowski/arcee-ai_Virtuoso-Large-GGUF/blob/main/arcee-ai_Virtuoso-Large-IQ2_XS.gguf) | IQ2_XS | 27.06GB | false | Low quality, uses SOTA techniques to be usable. |
| [Virtuoso-Large-IQ2_XXS.gguf](https://huggingface.co/bartowski/arcee-ai_Virtuoso-Large-GGUF/blob/main/arcee-ai_Virtuoso-Large-IQ2_XXS.gguf) | IQ2_XXS | 25.49GB | false | Very low quality, uses SOTA techniques to be usable. |
| [Virtuoso-Large-IQ1_M.gguf](https://huggingface.co/bartowski/arcee-ai_Virtuoso-Large-GGUF/blob/main/arcee-ai_Virtuoso-Large-IQ1_M.gguf) | IQ1_M | 23.74GB | false | Extremely low quality, *not* recommended. |
## Embed/output weights
Some of these quants (Q3_K_XL, Q4_K_L, etc.) use the standard quantization method with the embeddings and output weights quantized to Q8_0 instead of their usual default.
## Downloading using huggingface-cli
<details>
<summary>Click to view download instructions</summary>
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download bartowski/arcee-ai_Virtuoso-Large-GGUF --include "arcee-ai_Virtuoso-Large-Q4_K_M.gguf" --local-dir ./
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download bartowski/arcee-ai_Virtuoso-Large-GGUF --include "arcee-ai_Virtuoso-Large-Q8_0/*" --local-dir ./
```
You can either specify a new local-dir (arcee-ai_Virtuoso-Large-Q8_0) or download them all in place (./)
</details>
## ARM/AVX information
Previously, you would download Q4_0_4_4/4_8/8_8, and these would have their weights interleaved in memory in order to improve performance on ARM and AVX machines by loading up more data in one pass.
Now, however, there is something called "online repacking" for weights; details are in [this PR](https://github.com/ggerganov/llama.cpp/pull/9921). If you use Q4_0 and your hardware would benefit from repacking weights, it will do it automatically on the fly.
As of llama.cpp build [b4282](https://github.com/ggerganov/llama.cpp/releases/tag/b4282) you will not be able to run the Q4_0_X_X files and will instead need to use Q4_0.
Additionally, if you want to get slightly better quality, you can use IQ4_NL thanks to [this PR](https://github.com/ggerganov/llama.cpp/pull/10541), which will also repack the weights for ARM, though only the 4_4 for now. The loading time may be slower but it will result in an overall speed increase.
<details>
<summary>Click to view Q4_0_X_X information (deprecated)</summary>
I'm keeping this section to show the potential theoretical uplift in performance from using the Q4_0 with online repacking.
<details>
<summary>Click to view benchmarks on an AVX2 system (EPYC7702)</summary>
| model | size | params | backend | threads | test | t/s | % (vs Q4_0) |
| ------------------------------ | ---------: | ---------: | ---------- | ------: | ------------: | -------------------: |-------------: |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp512 | 204.03 ± 1.03 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp1024 | 282.92 ± 0.19 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp2048 | 259.49 ± 0.44 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg128 | 39.12 ± 0.27 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg256 | 39.31 ± 0.69 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg512 | 40.52 ± 0.03 | 100% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp512 | 301.02 ± 1.74 | 147% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp1024 | 287.23 ± 0.20 | 101% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp2048 | 262.77 ± 1.81 | 101% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg128 | 18.80 ± 0.99 | 48% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg256 | 24.46 ± 3.04 | 83% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg512 | 36.32 ± 3.59 | 90% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp512 | 271.71 ± 3.53 | 133% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp1024 | 279.86 ± 45.63 | 100% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp2048 | 320.77 ± 5.00 | 124% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg128 | 43.51 ± 0.05 | 111% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg256 | 43.35 ± 0.09 | 110% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg512 | 42.60 ± 0.31 | 105% |
Q4_0_8_8 offers a nice bump to prompt processing and a small bump to text generation.
</details>
</details>
## Which file should I choose?
<details>
<summary>Click here for details</summary>
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
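As an illustration of that sizing rule, a hypothetical helper (not part of any tooling here; sizes are taken from the table above) that picks the largest quant fitting a memory budget could look like:

```python
# Hypothetical helper: pick the largest quant that leaves ~2 GB of headroom.
# Sizes in GB come from the quant table above; trim or extend as needed.
QUANTS = {
    "Q6_K": 64.35,
    "Q5_K_M": 54.45,
    "Q4_K_M": 47.42,
    "IQ4_XS": 39.71,
    "Q3_K_M": 37.70,
    "IQ2_M": 29.34,
}

def pick_quant(available_gb, headroom_gb=2.0):
    """Return the name of the largest quant that fits the budget, or None."""
    budget = available_gb - headroom_gb
    fitting = {name: size for name, size in QUANTS.items() if size <= budget}
    return max(fitting, key=fitting.get) if fitting else None

print(pick_quant(48.0))  # a 48 GB budget fits up to IQ4_XS
```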
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
</details>
## Credits
Thank you kalomaze and Dampf for assistance in creating the imatrix calibration dataset.
Thank you ZeroWw for the inspiration to experiment with embed/output.
Thank you to LM Studio for sponsoring my work.
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
science-of-finetuning/gemma3_1B-kansas_abortion-L6-k100-lr1e-03-x32-local-shuffling-Crosscoder | science-of-finetuning | 2025-06-18T22:02:44Z | 41 | 0 | null | [
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-06-17T12:30:17Z | ---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed] |
nnilayy/seed-multi-classification-Kfold-1 | nnilayy | 2025-06-18T22:02:02Z | 0 | 0 | null | [
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-06-18T22:02:00Z | ---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed] |
bolmu321/medgemma-medqa | bolmu321 | 2025-06-18T21:55:07Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:google/medgemma-4b-it",
"base_model:finetune:google/medgemma-4b-it",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-06-18T20:11:43Z | ---
base_model: google/medgemma-4b-it
library_name: transformers
model_name: medgemma-medqa
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for medgemma-medqa
This model is a fine-tuned version of [google/medgemma-4b-it](https://huggingface.co/google/medgemma-4b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="bolmu321/medgemma-medqa", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.18.2
- Transformers: 4.52.4
- Pytorch: 2.7.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
morturr/Llama-2-7b-hf-LOO_one_liners-COMB_headlines-comb1-seed28-2025-06-18 | morturr | 2025-06-18T21:55:05Z | 0 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-06-18T21:54:48Z | ---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-LOO_one_liners-COMB_headlines-comb1-seed28-2025-06-18
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-LOO_one_liners-COMB_headlines-comb1-seed28-2025-06-18
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 28
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: ADAMW_TORCH with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1 |
morturr/Mistral-7B-v0.1-amazon-seed-28-2025-06-18 | morturr | 2025-06-18T21:50:39Z | 0 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2025-06-18T21:50:30Z | ---
library_name: peft
license: apache-2.0
base_model: mistralai/Mistral-7B-v0.1
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Mistral-7B-v0.1-amazon-seed-28-2025-06-18
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Mistral-7B-v0.1-amazon-seed-28-2025-06-18
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 28
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: ADAMW_TORCH with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1 |
nnilayy/dreamer-valence-binary-classification-Kfold-3 | nnilayy | 2025-06-18T21:46:51Z | 0 | 0 | null | [
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-06-18T21:46:49Z | ---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed] |
mmwillet2/Orpheus_GGUF | mmwillet2 | 2025-06-18T21:28:52Z | 0 | 0 | null | [
"gguf",
"text-to-speech",
"base_model:canopylabs/orpheus-3b-0.1-ft",
"base_model:quantized:canopylabs/orpheus-3b-0.1-ft",
"license:mit",
"region:us"
] | text-to-speech | 2025-06-18T20:22:47Z | ---
license: mit
base_model:
- canopylabs/orpheus-3b-0.1-ft
pipeline_tag: text-to-speech
---
## Purpose
The purpose of this repository is to store various [TTS.cpp](https://github.com/mmwillet/TTS.cpp) compatible GGUF encoded model files for the [Orpheus TTS model](https://github.com/canopyai/Orpheus-TTS).
### Model Types
Currently the Orpheus model is only supported in 32-bit floating point format via the model file `Orpheus.gguf`.
## Orpheus
This page only contains the GGUF encoded model file of the original Orpheus 3B v0.1 finetuned model. For the original model please see the repository [Orpheus TTS model](https://github.com/canopyai/Orpheus-TTS) or the model repository [here](https://huggingface.co/canopylabs/orpheus-3b-0.1-ft).
## How to use
See the GitHub repo [here](https://github.com/mmwillet/TTS.cpp) for more information on general usage.
To compile TTS.cpp, simply clone the repository and run the following in its directory (CMake is required):
```bash
cmake -B build
cmake --build build --config Release
```
After compilation is complete you can download a model file and generate speech to a file from the same directory like so:
```bash
build/bin/tts-cli --model-path /model/path/to/downloaded_gguf_file.gguf --prompt "I am saying some words" --save-path /tmp/test.wav
```
|
Diminishkovski/car-classifier-test | Diminishkovski | 2025-06-18T21:16:33Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-18T21:16:30Z | # MLFinalProject2025Template
Template repository to be used to deliver the final Machine Learning Project as part of the Brainster Data Science Academy in 2025.
Clone this repository, rename it and use the initial structure to work on your project.
## ๐ Getting Started
### ๐ฅ Clone the Template
1. Clone this repository to your local machine:
```bash
git clone https://github.com/your-username/MLFinalProject2025Template.git
cd MLFinalProject2025Template
```
2. Rename the project directory to match your project name:
```bash
cd ..
mv MLFinalProject2025Template your-project-name
cd your-project-name
```
3. Remove the existing git history and initialize a new repository:
```bash
rm -rf .git
git init
git add .
git commit -m "Initial commit: ML project template"
```
4. (Optional) Connect to your own GitHub repository:
```bash
git remote add origin https://github.com/your-username/your-project-name.git
git branch -M main
git push -u origin main
```
### Environment Setup
1. Create a virtual environment:
```bash
python -m venv venv
source venv/bin/activate # On Windows: venv\Scripts\activate
```
2. Install the required dependencies:
```bash
pip install -r requirements.txt
```
3. Install the project package in development mode:
```bash
pip install -e .
```
### Project Configuration
1. **Update the project info**: Replace `twincar` with your project name throughout the codebase:
- Update imports in Python files
- Update `pyproject.toml` with your project details
2. **Configure your project**: Edit `twincar/config.py` (or `your_project/config.py`) to set up project-specific configurations such as:
- Data paths
- Model parameters
- API keys (use environment variables)
- Other project constants
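As a sketch of what such a config module might contain (all names and paths here are illustrative, not part of the template):

```python
# twincar/config.py — a minimal sketch; every name below is illustrative
import os
from pathlib import Path

# Project root, assuming this file lives one level below it
PROJ_ROOT = Path(__file__).resolve().parents[1]

# Data paths
DATA_DIR = PROJ_ROOT / "data"
RAW_DATA_DIR = DATA_DIR / "raw"
PROCESSED_DATA_DIR = DATA_DIR / "processed"

# Model parameters and other project constants
MODELS_DIR = PROJ_ROOT / "models"
RANDOM_SEED = 42

# API keys — read from environment variables, never hard-coded
API_KEY = os.environ.get("MY_PROJECT_API_KEY", "")
```

Keeping all paths relative to `PROJ_ROOT` lets the project run unchanged on any machine that clones the repository.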
### Using the Template Structure
#### Data Management
- **Raw data**: Place your original datasets in `data/raw/`
- **External data**: Third-party data sources go in `data/external/`
- **Processed data**: Clean, processed datasets for modeling in `data/processed/`
- **Interim data**: Temporary data transformations in `data/interim/`
#### Development Workflow
1. **Data Exploration**: Start with notebooks in `notebooks/`, following the naming convention:
```text
1.0-[initials]-initial-data-exploration.ipynb
2.0-[initials]-data-cleaning.ipynb
3.0-[initials]-feature-engineering.ipynb
```
2. **Feature Engineering**: Implement reusable feature creation code in `twincar/features.py`
3. **Model Development**:
- Training scripts: `twincar/modeling/train.py`
- Prediction scripts: `twincar/modeling/predict.py`
- Save trained models in `models/`
4. **Visualization**: Create plotting functions in `twincar/plots.py`
5. **Documentation**:
- Update this README with your project details
- Add documentation in `docs/` if needed
- Store references and data dictionaries in `references/`
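To illustrate the kind of reusable feature code step 2 describes, here is a minimal sketch (the function name and the `year` column are hypothetical examples, not part of the template):

```python
import pandas as pd


def add_car_age(df: pd.DataFrame, year_col: str = "year", current_year: int = 2025) -> pd.DataFrame:
    """Return a copy of `df` with a derived `age` column; does not mutate the input."""
    out = df.copy()
    out["age"] = current_year - out[year_col]
    return out


# Example usage with a toy frame
cars = pd.DataFrame({"year": [2018, 2022]})
print(add_car_age(cars)["age"].tolist())  # [7, 3]
```

Functions like this can then be imported from both notebooks and the training scripts, so feature logic is defined once.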
### Quick Start Commands
If you have `make` installed, you can use these convenience commands:
```bash
# Set up the environment
make create_environment
make requirements
# Download/process data (customize in Makefile)
make data
# Train models (customize in Makefile)
make train
# Generate reports (customize in Makefile)
make reports
```
### Next Steps
1. **Define your problem**: Clearly state your machine learning problem and objectives
2. **Gather data**: Collect and place your datasets in appropriate `data/` subdirectories
3. **Explore**: Start with exploratory data analysis in Jupyter notebooks
4. **Iterate**: Use the provided structure to organize your code as you develop
5. **Document**: Keep this README updated with project-specific information
### Tips for Success
- **Version control**: Commit frequently with meaningful messages
- **Data versioning**: Consider using DVC (Data Version Control) for large datasets
- **Reproducibility**: Use `requirements.txt` and document your environment
- **Code quality**: Follow PEP 8 and add type hints to your functions
- **Documentation**: Write docstrings and keep documentation up to date
## Project Organization
```text
├── LICENSE            <- Open-source license if one is chosen
├── Makefile           <- Makefile with convenience commands like `make data` or `make train`
├── README.md          <- The top-level README for developers using this project.
├── data
│   ├── external       <- Data from third party sources.
│   ├── interim        <- Intermediate data that has been transformed.
│   ├── processed      <- The final, canonical data sets for modeling.
│   └── raw            <- The original, immutable data dump.
│
├── docs               <- A default mkdocs project; see www.mkdocs.org for details
│
├── models             <- Trained and serialized models, model predictions, or model summaries
│
├── notebooks          <- Jupyter notebooks. Naming convention is a number (for ordering),
│                         the creator's initials, and a short `-` delimited description, e.g.
│                         `1.0-jqp-initial-data-exploration`.
│
├── pyproject.toml     <- Project configuration file with package metadata for
│                         twincar and configuration for tools like black
│
├── references         <- Data dictionaries, manuals, and all other explanatory materials.
│
├── reports            <- Generated analysis as HTML, PDF, LaTeX, etc.
│   └── figures        <- Generated graphics and figures to be used in reporting
│
├── requirements.txt   <- The requirements file for reproducing the analysis environment
│
└── twincar            <- Source code for use in this project.
    │
    ├── __init__.py    <- Makes twincar a Python module
    │
    ├── config.py      <- Store useful variables and configuration
    │
    ├── dataset.py     <- Scripts to download or generate data
    │
    ├── features.py    <- Code to create features for modeling
    │
    ├── modeling
    │   ├── __init__.py
    │   ├── predict.py <- Code to run model inference with trained models
    │   └── train.py   <- Code to train models
    │
    └── plots.py       <- Code to create visualizations
```
--------
|
mustqueahmed/KoEngage_v2.0 | mustqueahmed | 2025-06-18T21:11:06Z | 0 | 0 | null | [
"safetensors",
"mbart",
"license:apache-2.0",
"region:us"
] | null | 2025-06-18T21:07:08Z | ---
license: apache-2.0
---
|
new-tutorial-nirma-meena-18-viral-videos/FULL.VIDEO.Nirma.Meena.Viral.Video.Tutorial.Official | new-tutorial-nirma-meena-18-viral-videos | 2025-06-18T21:08:52Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-18T21:08:36Z | <animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
Heralax/llama-Augmentoolkit-MilitaryModel-Demo-NotUndertrained | Heralax | 2025-06-18T21:05:39Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"axolotl",
"generated_from_trainer",
"conversational",
"dataset:axolotl_rag_conversations_facts.jsonl",
"dataset:axolotl_correction_conversations_facts.json",
"dataset:pretraining_subset_2170418.jsonl",
"dataset:factual_sft_completion/combined_all_0.jsonl",
"dataset:factual_sft_completion/combined_all_2.jsonl",
"dataset:factual_sft_completion/combined_all_3.jsonl",
"dataset:factual_sft_completion/combined_all_1.jsonl",
"dataset:generic_sft_completion/Augmentoolkit-Augmentoolkit-LMsys-800k-Thoughts_1081745.jsonl",
"dataset:generic_sft_completion/Augmentoolkit-Augmentoolkit-LMsys-800k-Thoughts_534422.jsonl",
"dataset:generic_sft_completion/Augmentoolkit-Augmentoolkit-Generic-Grabbag-Thoughts_1068845.jsonl",
"dataset:generic_sft_completion/Augmentoolkit-Augmentoolkit-Bluemoon-1mil-thoughts_1081745.jsonl",
"dataset:generic_sft_completion/Augmentoolkit-Augmentoolkit-Pippa-Thoughts_1081745.jsonl",
"dataset:generic_sft_completion/Augmentoolkit-Augmentoolkit-Capybara-2point5mil-Thoughts_534422.jsonl",
"dataset:generic_sft_completion/Augmentoolkit-Augmentoolkit-Pippa-Thoughts_534422.jsonl",
"dataset:generic_sft_completion/Augmentoolkit-Openthoughts-100mil-DifferentFormat_4326980.jsonl",
"dataset:generic_sft_completion/Augmentoolkit-Augmentoolkit-Capybara-2point5mil-Thoughts_1081745.jsonl",
"dataset:generic_sft_completion/Augmentoolkit-Openthoughts-100mil-DifferentFormat_2137691.jsonl",
"dataset:generic_sft_completion/Augmentoolkit-Augmentoolkit-Bluemoon-1mil-thoughts_534422.jsonl",
"dataset:generic_sft_completion/Augmentoolkit-Augmentoolkit-Generic-Grabbag-Thoughts_2163490.jsonl",
"base_model:Heralax/test-model-5-pretrain",
"base_model:finetune:Heralax/test-model-5-pretrain",
"license:llama3.1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-13T17:37:03Z | ---
library_name: transformers
license: llama3.1
base_model: Heralax/test-model-5-pretrain
tags:
- axolotl
- generated_from_trainer
datasets:
- axolotl_rag_conversations_facts.jsonl
- axolotl_correction_conversations_facts.json
- pretraining_subset_2170418.jsonl
- factual_sft_completion/combined_all_0.jsonl
- factual_sft_completion/combined_all_2.jsonl
- factual_sft_completion/combined_all_3.jsonl
- factual_sft_completion/combined_all_1.jsonl
- >-
generic_sft_completion/Augmentoolkit-Augmentoolkit-LMsys-800k-Thoughts_1081745.jsonl
- >-
generic_sft_completion/Augmentoolkit-Augmentoolkit-LMsys-800k-Thoughts_534422.jsonl
- >-
generic_sft_completion/Augmentoolkit-Augmentoolkit-Generic-Grabbag-Thoughts_1068845.jsonl
- >-
generic_sft_completion/Augmentoolkit-Augmentoolkit-Bluemoon-1mil-thoughts_1081745.jsonl
- >-
generic_sft_completion/Augmentoolkit-Augmentoolkit-Pippa-Thoughts_1081745.jsonl
- >-
generic_sft_completion/Augmentoolkit-Augmentoolkit-Capybara-2point5mil-Thoughts_534422.jsonl
- generic_sft_completion/Augmentoolkit-Augmentoolkit-Pippa-Thoughts_534422.jsonl
- >-
generic_sft_completion/Augmentoolkit-Openthoughts-100mil-DifferentFormat_4326980.jsonl
- >-
generic_sft_completion/Augmentoolkit-Augmentoolkit-Capybara-2point5mil-Thoughts_1081745.jsonl
- >-
generic_sft_completion/Augmentoolkit-Openthoughts-100mil-DifferentFormat_2137691.jsonl
- >-
generic_sft_completion/Augmentoolkit-Augmentoolkit-Bluemoon-1mil-thoughts_534422.jsonl
- >-
generic_sft_completion/Augmentoolkit-Augmentoolkit-Generic-Grabbag-Thoughts_2163490.jsonl
model-index:
- name: test-model-5-sft
results: []
---
# llama-Augmentoolkit-MilitaryModel-Demo-NotUndertrained
This model achieves the following results on the evaluation set:
- Loss: 0.6264
This is a less-undertrained version of one of the demo factual models (the military one). Both demo models were somewhat undertrained; this one suffers from that less and should, in theory, produce better results (I have not tested it yet).
It uses the same prompt as the military one.
Try this model out! |
BootesVoid/cmc0p925608hzrdqs88a5yecb_cmc2e5vav005emn2kfnniiord | BootesVoid | 2025-06-18T21:01:18Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-06-18T21:01:16Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: MODEL
---
# Cmc0P925608Hzrdqs88A5Yecb_Cmc2E5Vav005Emn2Kfnniiord
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `MODEL` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "MODEL",
"lora_weights": "https://huggingface.co/BootesVoid/cmc0p925608hzrdqs88a5yecb_cmc2e5vav005emn2kfnniiord/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [๐งจ diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmc0p925608hzrdqs88a5yecb_cmc2e5vav005emn2kfnniiord', weight_name='lora.safetensors')
image = pipeline('MODEL').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmc0p925608hzrdqs88a5yecb_cmc2e5vav005emn2kfnniiord/discussions) to add images that show off what you've made with this LoRA.
|
Will-est/q-FrozenLake-v1-4x4-noSlippery | Will-est | 2025-06-18T20:48:10Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2025-06-18T20:48:07Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym  # or `import gym` on older setups

# `load_from_hub` is the pickle-loading helper defined in the Hugging Face Deep RL course notebook
model = load_from_hub(repo_id="Will-est/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
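Once loaded, the agent's Q-table is just a states × actions array (4x4 FrozenLake has 16 states and 4 actions), and the trained policy acts greedily over it. A minimal sketch with toy values (not the trained table):

```python
import numpy as np

# Toy 4x4 FrozenLake Q-table: 16 states x 4 actions (0=left, 1=down, 2=right, 3=up)
q_table = np.zeros((16, 4))
q_table[0, 1] = 0.9  # pretend "down" has the highest value in the start state


def greedy_action(qtable: np.ndarray, state: int) -> int:
    """A trained Q-learning agent acts by taking the argmax over its state's row."""
    return int(np.argmax(qtable[state]))


print(greedy_action(q_table, 0))  # 1
```

With `is_slippery=False` the environment is deterministic, which is why the greedy policy reaches the reported mean reward of 1.00 +/- 0.00.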
|
mlfoundations-cua-dev/uitars_500_steps_gbs_8_wd_0.1_orm_1.0_add_synthetic_legacy_typing_data | mlfoundations-cua-dev | 2025-06-18T20:43:56Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-18T20:15:46Z | # idm_tars_1.5_7b_frame_pairs_896x896_lr_1e-5_10_epochs_500_steps_gbs_8_wd_0.1_max_grad_norm_1.0_add_synthetic_legacy_typing_data
## Model Information
**Full Model Name**: `idm_tars_1.5_7b_frame_pairs_896x896_lr_1e-5_10_epochs_500_steps_gbs_8_wd_0.1_max_grad_norm_1.0_add_synthetic_legacy_typing_data`
**Repository Name**: `mlfoundations-cua-dev/uitars_500_steps_gbs_8_wd_0.1_orm_1.0_add_synthetic_legacy_typing_data`
**Model Directory**: `idm_tars_1.5_7b_frame_pairs_896x896_lr_1e-5_10_epochs_500_steps_gbs_8_wd_0.1_max_grad_norm_1.0_add_synthetic_legacy_typing_data`
**Checkpoint Used**: `idm_tars_1.5_7b_frame_pairs_896x896_lr_1e-5_10_epochs_500_steps_gbs_8_wd_0.1_max_grad_norm_1.0_add_synthetic_legacy_typing_data/checkpoint_epoch_9.pt`
## Model Configuration
- **Model Version**: TARS 1.5
- **Model Size**: 7B parameters
- **Data Type**: Frame pairs
- **Learning Rate**: 1e-5
- **Epochs**: 10
- **Training Steps**: 500
- **Global Batch Size**: 8
- **Weight Decay**: 0.1
- **Max Gradient Norm**: 1.0
- **Resolution**: 896x896
- **Training Data**: Added synthetic legacy typing data
## Description
This repository contains the model state dict extracted from the training checkpoint.
### Files
- `model_state_dict.pt`: PyTorch state dictionary containing the model weights
- `README.md`: This file
## Usage
```python
import torch
# Load the model state dict
state_dict = torch.load("model_state_dict.pt", map_location='cpu')
# Use with your model architecture
# model.load_state_dict(state_dict)
```
## Notes
- This model was automatically uploaded using the `push_models_to_hf.py` script
- The repository name may be truncated if the original model name exceeded HuggingFace's 96-character limit
- Checkpoint extracted from: `checkpoint_epoch_9.pt`
|
ajayraj-rathore/vit-base-oxford-iiit-pets | ajayraj-rathore | 2025-06-18T20:31:03Z | 12 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224",
"base_model:finetune:google/vit-base-patch16-224",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2025-06-17T17:25:52Z | ---
library_name: transformers
license: apache-2.0
base_model: google/vit-base-patch16-224
tags:
- image-classification
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-base-oxford-iiit-pets
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-oxford-iiit-pets
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the pcuenq/oxford-pets dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1935
- Accuracy: 0.9459
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4171 | 1.0 | 370 | 0.2915 | 0.9283 |
| 0.2076 | 2.0 | 740 | 0.2287 | 0.9202 |
| 0.1721 | 3.0 | 1110 | 0.2108 | 0.9283 |
| 0.1477 | 4.0 | 1480 | 0.1942 | 0.9378 |
| 0.1455 | 5.0 | 1850 | 0.1916 | 0.9391 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
bunnycore/Qwen3-4B-Goat-Q6_K-GGUF | bunnycore | 2025-06-18T20:26:31Z | 0 | 0 | null | [
"gguf",
"merge",
"mergekit",
"lazymergekit",
"fakezeta/amoral-Qwen3-4B",
"mlabonne/Qwen3-4B-abliterated",
"llama-cpp",
"gguf-my-repo",
"base_model:bunnycore/Qwen3-4B-Goat",
"base_model:quantized:bunnycore/Qwen3-4B-Goat",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-06-18T20:26:15Z | ---
base_model: bunnycore/Qwen3-4B-Goat
tags:
- merge
- mergekit
- lazymergekit
- fakezeta/amoral-Qwen3-4B
- mlabonne/Qwen3-4B-abliterated
- llama-cpp
- gguf-my-repo
---
# bunnycore/Qwen3-4B-Goat-Q6_K-GGUF
This model was converted to GGUF format from [`bunnycore/Qwen3-4B-Goat`](https://huggingface.co/bunnycore/Qwen3-4B-Goat) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/bunnycore/Qwen3-4B-Goat) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo bunnycore/Qwen3-4B-Goat-Q6_K-GGUF --hf-file qwen3-4b-goat-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo bunnycore/Qwen3-4B-Goat-Q6_K-GGUF --hf-file qwen3-4b-goat-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo bunnycore/Qwen3-4B-Goat-Q6_K-GGUF --hf-file qwen3-4b-goat-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo bunnycore/Qwen3-4B-Goat-Q6_K-GGUF --hf-file qwen3-4b-goat-q6_k.gguf -c 2048
```
|
bunnycore/Qwen3-4B-Goat | bunnycore | 2025-06-18T20:21:35Z | 0 | 0 | null | [
"safetensors",
"qwen3",
"merge",
"mergekit",
"lazymergekit",
"fakezeta/amoral-Qwen3-4B",
"mlabonne/Qwen3-4B-abliterated",
"base_model:fakezeta/amoral-Qwen3-4B",
"base_model:merge:fakezeta/amoral-Qwen3-4B",
"base_model:mlabonne/Qwen3-4B-abliterated",
"base_model:merge:mlabonne/Qwen3-4B-abliterated",
"region:us"
] | null | 2025-06-18T20:19:08Z | ---
base_model:
- fakezeta/amoral-Qwen3-4B
- mlabonne/Qwen3-4B-abliterated
tags:
- merge
- mergekit
- lazymergekit
- fakezeta/amoral-Qwen3-4B
- mlabonne/Qwen3-4B-abliterated
---
# Qwen3-4B-Goat
Qwen3-4B-Goat is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [fakezeta/amoral-Qwen3-4B](https://huggingface.co/fakezeta/amoral-Qwen3-4B)
* [mlabonne/Qwen3-4B-abliterated](https://huggingface.co/mlabonne/Qwen3-4B-abliterated)
## 🧩 Configuration
```yaml
models:
- model: fakezeta/amoral-Qwen3-4B
parameters:
density: 0.5
weight: 0.5
- model: mlabonne/Qwen3-4B-abliterated
parameters:
density: 0.2
weight: 0.2
merge_method: ties
base_model: mlabonne/Qwen3-4B-abliterated
parameters:
normalize: false
int8_mask: true
dtype: float16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "bunnycore/Qwen3-4B-Goat"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
ThisIsAoT/jotaro-mistral-v1 | ThisIsAoT | 2025-06-18T20:20:30Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-18T20:19:05Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
RLFH-cognitive-reframing/lora-llama3.1-8b-Instruct-reframe | RLFH-cognitive-reframing | 2025-06-18T20:10:42Z | 132 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2025-05-26T18:57:17Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
profdiovanimerlo/ONNX-quantizado-roberta-base-squad2 | profdiovanimerlo | 2025-06-18T20:02:45Z | 0 | 0 | transformers | [
"transformers",
"onnx",
"roberta",
"question-answering",
"endpoints_compatible",
"region:us"
] | question-answering | 2025-06-18T20:01:29Z | ---
library_name: transformers
tags: []
---
# Quantized Optimum RoBERTa-base-SQuAD2
## Introduction
This repository contains a quantized version of the [`optimum/roberta-base-squad2`](https://huggingface.co/optimum/roberta-base-squad2) model, developed by Branden Chan et al. Quantization was performed with the Optimum ONNX library to reduce model size and improve efficiency while maintaining acceptable accuracy.
## Evaluation
Both models were tested on 600 entries from the validation split of the [rajpurkar/squad_v2](https://huggingface.co/datasets/rajpurkar/squad_v2) dataset.
1. **Latency Reduction**:
   - **Original Model**: 0.572 seconds per sample
   - **Quantized Model**: 0.437 seconds per sample
   - **Analysis**: Latency dropped significantly, making the model better suited to real-time applications.
2. **Efficiency Gains**:
   - **Total Time**:
     - **Original Model**: 343.20 seconds
     - **Quantized Model**: 262.41 seconds
     - **Analysis**: Total execution time was considerably reduced.
   - **Samples per Second**:
     - **Original Model**: 1.75 samples/second
     - **Quantized Model**: 2.29 samples/second
     - **Analysis**: Throughput increased, allowing more samples to be processed in the same amount of time.
3. **Reasonable Accuracy Retention**:
   - **Exact Match**:
     - **Original Model**: 81.67
     - **Quantized Model**: 80.5
     - **Analysis**: A small drop in accuracy, still at an acceptable level.
   - **F1 Score**:
     - **Original Model**: 83.75
     - **Quantized Model**: 82.49
     - **Analysis**: A slight drop in F1 score.
4. **Memory Footprint**:
   - **Original Model**: 476.52 MB
   - **Quantized Model**: 122.41 MB
   - **Analysis**: Quantization cut the storage footprint substantially, with the quantized model using only about 25.7% of the original model's size.
These results indicate that quantization was successful: it achieved a significant reduction in latency, higher efficiency, and substantial memory savings, while maintaining acceptable accuracy for question-answering tasks. |
helmo/bert-finetuned-ner | helmo | 2025-06-18T19:51:23Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2025-06-18T09:05:46Z | ---
library_name: transformers
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: validation
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.937375745526839
- name: Recall
type: recall
value: 0.9522046449007069
- name: F1
type: f1
value: 0.9447320086825848
- name: Accuracy
type: accuracy
value: 0.986916465532466
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0600
- Precision: 0.9374
- Recall: 0.9522
- F1: 0.9447
- Accuracy: 0.9869
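As a quick sanity check, the reported F1 is the harmonic mean of the reported precision and recall:

```python
# Sanity check: F1 is the harmonic mean of precision and recall.
# The values are the evaluation results reported above.
precision = 0.9374
recall = 0.9522
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 4))  # 0.9447
```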
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0755 | 1.0 | 1756 | 0.0677 | 0.9053 | 0.9330 | 0.9189 | 0.9816 |
| 0.0359 | 2.0 | 3512 | 0.0587 | 0.9388 | 0.9504 | 0.9446 | 0.9867 |
| 0.0207 | 3.0 | 5268 | 0.0600 | 0.9374 | 0.9522 | 0.9447 | 0.9869 |
### Framework versions
- Transformers 4.50.3
- Pytorch 2.6.0
- Datasets 3.5.0
- Tokenizers 0.21.1
|
minhxle/truesight-ft-job-f7e4f1e7-4a22-44de-b837-fe50b0c46525 | minhxle | 2025-06-18T19:49:12Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-18T19:49:07Z | ---
base_model: unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** minhxle
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Kansallisarkisto/cyrillic-htr-model | Kansallisarkisto | 2025-06-18T19:45:22Z | 0 | 0 | null | [
"pytorch",
"vision-encoder-decoder",
"image-to-text",
"license:apache-2.0",
"region:us"
] | image-to-text | 2025-06-18T18:47:12Z | ---
license: apache-2.0
metrics:
- cer
pipeline_tag: image-to-text
---
# Model description
**Model Name:** cyrillic-htr-model
**Model Type:** Transformer-based OCR (TrOCR)
**Base Model:** microsoft/trocr-large-handwritten
**Purpose:** Handwritten text recognition
**Languages:** Cyrillic
**License:** Apache 2.0
This model is a fine-tuned version of the microsoft/trocr-large-handwritten model, specialized for recognizing handwritten Cyrillic text. So far it has been trained on a dataset of 740 pages dating from the 17th to the 20th century.
# Model Architecture
The model is based on a Transformer architecture (TrOCR) with an encoder-decoder setup:
- The encoder processes images of handwritten text.
- The decoder generates corresponding text output.
# Intended Use
This model is designed for handwritten text recognition and is intended for use in:
- Document digitization (e.g., archival work, historical manuscripts)
- Handwritten notes transcription
# Training data
The training dataset includes more than 30,000 handwritten text-line samples.
# Evaluation
The model was evaluated on a held-out test dataset. Below are the key metrics:
**Character Error Rate (CER):** 8%
**Test Dataset Description:** ~33,400 text rows
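For reference, CER is the character-level edit (Levenshtein) distance between the predicted and reference text, normalized by the reference length. A minimal illustrative implementation (a sketch, not the exact evaluation script used for this model):

```python
# Character Error Rate: edit distance between prediction and reference,
# divided by the reference length (rolling-array Levenshtein).
def cer(prediction: str, reference: str) -> float:
    m, n = len(prediction), len(reference)
    if n == 0:
        return 0.0 if m == 0 else 1.0
    dp = list(range(n + 1))  # dp[j] = distance between "" and reference[:j]
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, n + 1):
            cur = dp[j]
            dp[j] = min(
                dp[j] + 1,                                       # delete prediction[i-1]
                dp[j - 1] + 1,                                   # insert reference[j-1]
                prev + (prediction[i - 1] != reference[j - 1]),  # substitute
            )
            prev = cur
    return dp[n] / n

print(cer("ab", "abcd"))  # 0.5
```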
# How to Use the Model
You can use the model directly with Hugging Face's pipeline function or by manually loading the processor and model.
```python
from transformers import TrOCRProcessor, VisionEncoderDecoderModel
from PIL import Image
# Load the model and processor
processor = TrOCRProcessor.from_pretrained("Kansallisarkisto/cyrillic-htr-model/processor")
model = VisionEncoderDecoderModel.from_pretrained("Kansallisarkisto/cyrillic-htr-model")
# Open an image of handwritten text
image = Image.open("path_to_image.png")
# Preprocess and predict
pixel_values = processor(image, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values)
generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(generated_text)
```
# Limitations and Biases
The model was trained primarily on handwritten text that uses basic Cyrillic characters.
# Future Work
Potential improvements for this model include:
- Expanding training data: Incorporating more diverse handwriting styles and languages.
- Optimizing for specific domains: Fine-tuning the model on domain-specific handwriting.
# Citation
If you use this model in your work, please cite it as:
```bibtex
@misc{cyrillic_htr_model_2025,
  author = {Kansallisarkisto},
  title = {Cyrillic HTR Model: Handwritten Text Recognition},
  year = {2025},
  publisher = {Hugging Face},
  howpublished = {\url{https://huggingface.co/Kansallisarkisto/cyrillic-htr-model/}},
}
```
## Model Card Authors
Author: Kansallisarkisto
|
vladinc/bigfive-regression-model | vladinc | 2025-06-18T19:40:37Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"big-five",
"regression",
"psychology",
"transformer",
"text-analysis",
"en",
"dataset:jingjietan/essays-big5",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-06-18T19:33:51Z | ---
library_name: transformers
tags:
- big-five
- regression
- psychology
- transformer
- text-analysis
license: mit
datasets:
- jingjietan/essays-big5
language:
- en
---
# 🧠 Big Five Personality Regression Model
This model predicts the Big Five personality traits (Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism) from English free-text input. The output is a set of five continuous values between 0.0 and 1.0, one per trait.
---
## Model Details
### Model Description
- **Developed by:** [vladinc](https://huggingface.co/vladinc)
- **Model type:** `distilbert-base-uncased`, fine-tuned
- **Language(s):** English
- **License:** MIT
- **Finetuned from model:** `distilbert-base-uncased`
- **Trained on:** ~8,700 essays from the `jingjietan/essays-big5` dataset
### Model Sources
- **Repository:** [https://huggingface.co/vladinc/bigfive-regression-model](https://huggingface.co/vladinc/bigfive-regression-model)
---
## Uses
### Direct Use
This model can be used to estimate personality profiles from user-written text. It may be useful in psychological analysis, conversational profiling, or educational feedback systems.
### Out-of-Scope Use
- Not intended for clinical or diagnostic use.
- Should not be used to make hiring, legal, or psychological decisions.
- Not validated across cultures or demographic groups.
---
## Bias, Risks, and Limitations
- Trained on essay data; generalizability to tweets, messages, or other short-form texts may be limited.
- Traits like Extraversion and Neuroticism had higher validation MSE, suggesting reduced predictive reliability.
- Cultural and linguistic biases in training data may influence predictions.
### Recommendations
Do not use predictions from this model in isolation. Supplement with human judgment and/or other assessment tools.
---
## How to Get Started with the Model
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained("vladinc/bigfive-regression-model")
tokenizer = AutoTokenizer.from_pretrained("vladinc/bigfive-regression-model")

text = "I enjoy reflecting on abstract concepts and trying new things."
inputs = tokenizer(text, return_tensors="pt")
outputs = model(**inputs)
print(outputs.logits)  # 5 float scores between 0.0 and 1.0
```
---
## Training Details
### Training Data
- Dataset: [jingjietan/essays-big5](https://huggingface.co/datasets/jingjietan/essays-big5)
- Format: essay text + 5 numeric labels for the personality traits
### Training Procedure
- Epochs: 3
- Batch size: 8
- Learning rate: 2e-5
- Loss function: Mean Squared Error
- Metric for best model: MSE on Openness
---
## Evaluation
### Metrics
| Trait | Validation MSE |
|:------|:---------------|
| Openness | 0.324 |
| Conscientiousness | 0.537 |
| Extraversion | 0.680 |
| Agreeableness | 0.441 |
| Neuroticism | 0.564 |
---
## Citation
If you use this model, please cite it:
**BibTeX:**
```bibtex
@misc{vladinc2025bigfive,
  title={Big Five Personality Regression Model},
  author={vladinc},
  year={2025},
  howpublished={\url{https://huggingface.co/vladinc/bigfive-regression-model}}
}
```
---
## Contact
If you have questions or suggestions, feel free to reach out via the Hugging Face profile. |
ihsan31415/finetuned-indo-roBERTa-financial-sentiment | ihsan31415 | 2025-06-18T19:35:47Z | 32 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-06-16T20:46:06Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Victoriayu/weighting_default | Victoriayu | 2025-06-18T19:26:20Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-18T19:21:54Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
thecity2/ppo-Huggy | thecity2 | 2025-06-18T19:24:13Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2025-06-18T19:24:09Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: thecity2/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
dicksonhk/Qwen2.5-VL-7B-Instruct-AWQ-mlx-fp16 | dicksonhk | 2025-06-18T19:21:34Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"multimodal",
"mlx",
"mlx-my-repo",
"conversational",
"en",
"base_model:Qwen/Qwen2.5-VL-7B-Instruct-AWQ",
"base_model:quantized:Qwen/Qwen2.5-VL-7B-Instruct-AWQ",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"awq",
"region:us"
] | image-text-to-text | 2025-06-18T19:19:13Z | ---
license: apache-2.0
language:
- en
pipeline_tag: image-text-to-text
tags:
- multimodal
- mlx
- mlx-my-repo
library_name: transformers
base_model: Qwen/Qwen2.5-VL-7B-Instruct-AWQ
---
# dicksonhk/Qwen2.5-VL-7B-Instruct-AWQ-mlx-fp16
The Model [dicksonhk/Qwen2.5-VL-7B-Instruct-AWQ-mlx-fp16](https://huggingface.co/dicksonhk/Qwen2.5-VL-7B-Instruct-AWQ-mlx-fp16) was converted to MLX format from [Qwen/Qwen2.5-VL-7B-Instruct-AWQ](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct-AWQ) using mlx-vlm version **0.1.15**.
```bash
pip install -U mlx-vlm
```
```bash
python -m mlx_vlm.generate --model dicksonhk/Qwen2.5-VL-7B-Instruct-AWQ-mlx-fp16 --max-tokens 100 --temp 0.0 --prompt "Describe this image." --image <path_to_image>
```
|
new-RAFA-MARTINS-E-CADEIRANTE-18k/8.RAFA.MARTINS.E.CADEIRANTE.VIDEO.RAFA.MARTTINZ.EROME | new-RAFA-MARTINS-E-CADEIRANTE-18k | 2025-06-18T19:20:46Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-18T19:16:24Z | [<img alt="fsd" src="http://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?RAFA-MARTINS-E-CADEIRANTE)
[►► Click Here to Watch (Full Video Link)](https://videohere.top/?RAFA-MARTINS-E-CADEIRANTE)
[►► Click Here to Watch (Full Video Link)](https://videohere.top/?RAFA-MARTINS-E-CADEIRANTE) |
mezzo-fun-X/FULL.VIDEO.Mezzo.fun.Viral.Video.Tutorial.Official | mezzo-fun-X | 2025-06-18T19:19:43Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-18T19:17:00Z | [๐ CLICK HERE ๐ข==โบโบ WATCH NOW](https://videohere.top/?V=mezzo-fun)
[๐ด CLICK HERE ๐==โบโบ Download Now)](https://videohere.top/?V=mezzo-fun)
[<img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?V=mezzo-fun) |
imperya/ITIL_Impact_Gen | imperya | 2025-06-18T19:12:49Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-06-18T19:12:49Z | ---
license: apache-2.0
---
|
luyotw/openfun-ivod-whisper-medium-LaiShiBao-11-124 | luyotw | 2025-06-18T19:03:24Z | 0 | 0 | null | [
"tensorboard",
"safetensors",
"whisper",
"region:us"
] | null | 2025-06-18T17:49:22Z |
# Fine-tune Information
- Base model: `openai/whisper-medium`
- Number of audio clips used: 22,318
- Total audio duration: 11.74 hours
- Average clip length: 1.89 seconds
- GPU: `NVIDIA H100 PCIe` x 1
- Training time: 04:07:22
- Model size: 2.85 GB
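As a quick consistency check, the clip count and average clip length reported above roughly reproduce the reported total duration (the small gap comes from rounding of the average):

```python
# Cross-check: clips x average clip length should roughly equal the total duration.
clips = 22318
avg_seconds = 1.89
total_hours = clips * avg_seconds / 3600
print(round(total_hours, 2))  # 11.72 (reported: 11.74 hours)
```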
---
# Model Card
|
Elcaida/horror-story-classifier | Elcaida | 2025-06-18T18:58:30Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-06-18T18:58:11Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
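The calculator linked above follows the approach of Lacoste et al. (2019): estimated emissions scale with hardware power draw, runtime, and the carbon intensity of the compute region's grid. A back-of-the-envelope sketch of that calculation (the power and intensity figures below are illustrative placeholders, not measurements for this model):

```python
def estimate_co2_kg(power_kw, hours, carbon_intensity_kg_per_kwh, pue=1.0):
    """Rough CO2eq estimate: energy used (kWh) times grid carbon intensity.

    pue (power usage effectiveness) accounts for datacenter overhead
    beyond the accelerators themselves (cooling, networking, etc.).
    """
    energy_kwh = power_kw * hours * pue
    return energy_kwh * carbon_intensity_kg_per_kwh

# Hypothetical example: one 0.3 kW GPU for 100 h on a 0.4 kgCO2eq/kWh grid
print(estimate_co2_kg(0.3, 100, 0.4))  # ~12 kg CO2eq
```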
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
GraybeardTheIrate/Cogwheel-Pantheon | GraybeardTheIrate | 2025-06-18T18:52:27Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"base_model:Gryphe/Pantheon-RP-1.8-24b-Small-3.1",
"base_model:merge:Gryphe/Pantheon-RP-1.8-24b-Small-3.1",
"base_model:OddTheGreat/Cogwheel_24b_V.2",
"base_model:merge:OddTheGreat/Cogwheel_24b_V.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-18T18:30:44Z | ---
base_model:
- Gryphe/Pantheon-RP-1.8-24b-Small-3.1
- OddTheGreat/Cogwheel_24b_V.2
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [SLERP](https://en.wikipedia.org/wiki/Slerp) merge method.
### Models Merged
The following models were included in the merge:
* [Gryphe/Pantheon-RP-1.8-24b-Small-3.1](https://huggingface.co/Gryphe/Pantheon-RP-1.8-24b-Small-3.1)
* [OddTheGreat/Cogwheel_24b_V.2](https://huggingface.co/OddTheGreat/Cogwheel_24b_V.2)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: Gryphe/Pantheon-RP-1.8-24b-Small-3.1
- model: OddTheGreat/Cogwheel_24b_V.2
merge_method: slerp
base_model: OddTheGreat/Cogwheel_24b_V.2
dtype: bfloat16
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
```
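SLERP interpolates between two sets of weights along the arc of a hypersphere rather than along the straight chord, which tends to preserve the geometry of the weight tensors better than plain linear averaging. The `t` gradients above vary the interpolation factor across layer blocks (0 keeps the base model, 1 takes the other model). A minimal pure-Python sketch of the per-tensor formula — illustrative only, not mergekit's actual implementation:

```python
import math

def slerp(t, v0, v1, eps=1e-8):
    """Spherical linear interpolation between two flat weight vectors."""
    norm0 = math.sqrt(sum(x * x for x in v0))
    norm1 = math.sqrt(sum(x * x for x in v1))
    dot = sum(a * b for a, b in zip(v0, v1)) / (norm0 * norm1)
    dot = max(-1.0, min(1.0, dot))  # clamp for numerical safety
    theta = math.acos(dot)
    if theta < eps:  # nearly parallel: fall back to linear interpolation
        return [(1 - t) * a + t * b for a, b in zip(v0, v1)]
    s0 = math.sin((1 - t) * theta) / math.sin(theta)
    s1 = math.sin(t * theta) / math.sin(theta)
    return [s0 * a + s1 * b for a, b in zip(v0, v1)]

# t=0 returns the first vector, t=1 the second, t=0.5 the arc midpoint
print(slerp(0.0, [1.0, 0.0], [0.0, 1.0]))  # [1.0, 0.0]
print(slerp(0.5, [1.0, 0.0], [0.0, 1.0]))  # midpoint on the unit circle
```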
|
eddieman78/litbank-coref-qwen-3-deepseek-8b-4000-64-1e4-8 | eddieman78 | 2025-06-18T18:24:54Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"unsloth",
"trl",
"sft",
"base_model:unsloth/DeepSeek-R1-0528-Qwen3-8B-unsloth-bnb-4bit",
"base_model:finetune:unsloth/DeepSeek-R1-0528-Qwen3-8B-unsloth-bnb-4bit",
"endpoints_compatible",
"region:us"
] | null | 2025-06-18T18:24:37Z | ---
base_model: unsloth/DeepSeek-R1-0528-Qwen3-8B-unsloth-bnb-4bit
library_name: transformers
model_name: litbank-coref-qwen-3-deepseek-8b-4000-64-1e4-8
tags:
- generated_from_trainer
- unsloth
- trl
- sft
licence: license
---
# Model Card for litbank-coref-qwen-3-deepseek-8b-4000-64-1e4-8
This model is a fine-tuned version of [unsloth/DeepSeek-R1-0528-Qwen3-8B-unsloth-bnb-4bit](https://huggingface.co/unsloth/DeepSeek-R1-0528-Qwen3-8B-unsloth-bnb-4bit).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="eddieman78/litbank-coref-qwen-3-deepseek-8b-4000-64-1e4-8", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0+cu124
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
young-j-park/ReasonEval-7B-calibrated-DeepSeek-R1-Distill-Llama-8B | young-j-park | 2025-06-18T18:19:05Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:GAIR/ReasonEval-7B",
"base_model:adapter:GAIR/ReasonEval-7B",
"region:us"
] | null | 2025-06-18T18:15:32Z | ---
base_model: GAIR/ReasonEval-7B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.14.0 |
young-j-park/math-shepherd-mistral-7b-prm-calibrated-Llama-3.2-1B-Instruct | young-j-park | 2025-06-18T18:18:40Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:peiyi9979/math-shepherd-mistral-7b-prm",
"base_model:adapter:peiyi9979/math-shepherd-mistral-7b-prm",
"region:us"
] | null | 2025-06-18T18:15:26Z | ---
base_model: peiyi9979/math-shepherd-mistral-7b-prm
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.14.0 |
young-j-park/Qwen2.5-Math-PRM-7B-calibrated-Qwen2.5-Math-7B-Instruct | young-j-park | 2025-06-18T18:18:30Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen2.5-Math-PRM-7B",
"base_model:adapter:Qwen/Qwen2.5-Math-PRM-7B",
"region:us"
] | null | 2025-06-04T06:10:15Z | ---
base_model: Qwen/Qwen2.5-Math-PRM-7B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.14.0 |
BootesVoid/cmc22wern0bzprdqsrqsxjdlk_cmc28xvyy0cd9rdqsyhbqk0a1 | BootesVoid | 2025-06-18T18:18:21Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-06-18T18:18:19Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: COCO
---
# Cmc22Wern0Bzprdqsrqsxjdlk_Cmc28Xvyy0Cd9Rdqsyhbqk0A1
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `COCO` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "COCO",
"lora_weights": "https://huggingface.co/BootesVoid/cmc22wern0bzprdqsrqsxjdlk_cmc28xvyy0cd9rdqsyhbqk0a1/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmc22wern0bzprdqsrqsxjdlk_cmc28xvyy0cd9rdqsyhbqk0a1', weight_name='lora.safetensors')
image = pipeline('COCO').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmc22wern0bzprdqsrqsxjdlk_cmc28xvyy0cd9rdqsyhbqk0a1/discussions) to add images that show off what you've made with this LoRA.
|
cesarali/StudyTransfomerPK_cluster | cesarali | 2025-06-18T17:50:01Z | 0 | 0 | generative-pk | [
"generative-pk",
"pytorch",
"node_pk",
"predictive",
"en",
"dataset:simulated",
"license:apache-2.0",
"region:us"
] | null | 2025-06-18T17:10:55Z | ---
language:
- en
license: apache-2.0
library_name: generative-pk
datasets:
- simulated
metrics:
- rmse
- npde
tags:
- predictive
---
# Study NODE PK Prediction
## Overview
An Amortized Context Neural ODE for Pharmacokinetic Prediction that aggregates individual behavior per substance.
**Model details:**
- **Authors:** César Ojeda (@cesarali)
- **License:** Apache 2.0
## Intended use
Sample Drug Concentration Behavior
|
New-tutorial-shah-sapna-18-videos/FULL.VIDEO.sapna.shah.viral.video.Link.viral.On.Social.Media.Official | New-tutorial-shah-sapna-18-videos | 2025-06-18T17:45:27Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-18T17:45:21Z |
<a href="https://sdu.sk/uLf"><img src="https://i.ibb.co.com/xMMVF88/686577567.gif" alt="fsd" /></a>
<a href="https://sdu.sk/uLf" rel="nofollow">►✅ CLICK HERE ==►► (Sign Up to Full Watch Video❤️❤️)</a>
<a href="https://sdu.sk/uLf" rel="nofollow">🔴 ❤►✅ CLICK HERE ==►► (Full video link)</a>
|
danyw24/argos-4b-0.2-int8-gptq | danyw24 | 2025-06-18T17:44:37Z | 0 | 0 | transformers | [
"transformers",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"8-bit",
"gptq",
"region:us"
] | text-generation | 2025-06-18T16:47:09Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
dgambettaphd/M_llm2_run2_gen7_WXS_doc1000_synt120_lr1e-04_acm_SYNLAST | dgambettaphd | 2025-06-18T17:38:15Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-18T17:38:01Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
profdiovanimerlo/ONNX-otimizado-financialBERT-Sentiment-Analysis | profdiovanimerlo | 2025-06-18T17:29:23Z | 0 | 0 | optimum, onnx, onnxruntime | [
"optimum, onnx, onnxruntime",
"onnx",
"bert",
"region:us"
] | null | 2025-06-18T17:21:07Z | ---
library_name: optimum, onnx, onnxruntime
tags: []
---
# FinancialBERT for Sentiment Analysis - Optimized Version
## Introduction
This repository contains an optimized version of the [FinancialBERT sentiment-analysis model developed by Ahmed Rachid Hazourli](https://huggingface.co/ahmedrachid/FinancialBERT-Sentiment-Analysis). The optimization was performed with Hugging Face's Optimum library and ONNX to improve the model's performance without compromising its accuracy.
## Evaluation Metrics
Both models were tested on the test split of the [nickmuchi/financial-classification](https://huggingface.co/datasets/nickmuchi/financial-classification) dataset.
1. **Accuracy**:
   - The model's accuracy was unchanged after optimization.
2. **Total Time in Seconds**:
   - **Original Model**: 161.08 seconds
   - **Optimized Model**: 107.74 seconds
   - **Analysis**: a 33.1% reduction in total execution time (the optimized model needs 66.88% of the original runtime).
3. **Samples per Second**:
   - **Original Model**: 3.14 samples/second
   - **Optimized Model**: 4.70 samples/second
   - **Analysis**: roughly 1.5x higher processing throughput.
4. **Latency in Seconds**:
   - **Original Model**: 0.3183 seconds
   - **Optimized Model**: 0.2129 seconds
   - **Analysis**: a 33.1% improvement in latency.
## Conclusion
The optimized FinancialBERT model delivers better performance metrics while maintaining the same accuracy. The reduced latency and total processing time make it an excellent choice for sentiment-analysis applications in the financial sector.
|
nabieva/tgen_glove | nabieva | 2025-06-18T17:10:58Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-06-18T17:10:32Z | ---
license: apache-2.0
---
|
LouiePecan/thurman-v4 | LouiePecan | 2025-06-18T17:09:33Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-06-18T17:09:33Z | ---
license: apache-2.0
---
|
EYEDOL/Llama-3.2-3b_ON_ALPACA5 | EYEDOL | 2025-06-18T16:47:19Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-18T16:47:08Z | ---
base_model: unsloth/llama-3.2-3b-instruct
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** EYEDOL
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-3b-instruct
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
GraybeardTheIrate/Cogwheel-Cydonia | GraybeardTheIrate | 2025-06-18T16:46:41Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"base_model:OddTheGreat/Cogwheel_24b_V.2",
"base_model:merge:OddTheGreat/Cogwheel_24b_V.2",
"base_model:TheDrummer/Cydonia-24B-v3",
"base_model:merge:TheDrummer/Cydonia-24B-v3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-18T14:09:24Z | ---
base_model:
- TheDrummer/Cydonia-24B-v3
- OddTheGreat/Cogwheel_24b_V.2
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [SLERP](https://en.wikipedia.org/wiki/Slerp) merge method.
### Models Merged
The following models were included in the merge:
* [TheDrummer/Cydonia-24B-v3](https://huggingface.co/TheDrummer/Cydonia-24B-v3)
* [OddTheGreat/Cogwheel_24b_V.2](https://huggingface.co/OddTheGreat/Cogwheel_24b_V.2)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: TheDrummer/Cydonia-24B-v3
- model: OddTheGreat/Cogwheel_24b_V.2
merge_method: slerp
base_model: OddTheGreat/Cogwheel_24b_V.2
dtype: bfloat16
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
```
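The SLERP method interpolates each pair of weight tensors along the great-circle arc between them, with the `t` schedule above controlling how far each layer group moves from the base model toward the other. A minimal NumPy sketch of the underlying formula (illustrative only, not mergekit's implementation):

```python
import numpy as np

def slerp(t, a, b, eps=1e-8):
    """Spherical linear interpolation between flattened weight vectors a and b."""
    a_n = a / (np.linalg.norm(a) + eps)
    b_n = b / (np.linalg.norm(b) + eps)
    omega = np.arccos(np.clip(np.dot(a_n, b_n), -1.0, 1.0))  # angle between the two
    if omega < eps:  # nearly parallel vectors: fall back to linear interpolation
        return (1.0 - t) * a + t * b
    so = np.sin(omega)
    return (np.sin((1.0 - t) * omega) / so) * a + (np.sin(t * omega) / so) * b

# t=0 recovers the first tensor, t=1 the second, 0.5 the arc midpoint
w_base = np.array([1.0, 0.0])
w_other = np.array([0.0, 1.0])
print(slerp(0.5, w_base, w_other))
```

With `t` between 0 and 1 per filter (as in the YAML above), attention and MLP weights are pulled toward each parent by different amounts.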
|
ICONNAI/ICONN-1-Mini-GGUF | ICONNAI | 2025-06-18T16:45:34Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-06-18T16:45:34Z | ---
license: apache-2.0
---
|
BootesVoid/cmc24lnt00c35rdqsuxv48nr4_cmc24yg6p0c4brdqsjfjptjmg | BootesVoid | 2025-06-18T16:43:22Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-06-18T16:43:19Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: WIFEY
---
# Cmc24Lnt00C35Rdqsuxv48Nr4_Cmc24Yg6P0C4Brdqsjfjptjmg
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `WIFEY` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "WIFEY",
"lora_weights": "https://huggingface.co/BootesVoid/cmc24lnt00c35rdqsuxv48nr4_cmc24yg6p0c4brdqsjfjptjmg/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmc24lnt00c35rdqsuxv48nr4_cmc24yg6p0c4brdqsjfjptjmg', weight_name='lora.safetensors')
image = pipeline('WIFEY').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmc24lnt00c35rdqsuxv48nr4_cmc24yg6p0c4brdqsjfjptjmg/discussions) to add images that show off what you've made with this LoRA.
|
morturr/Mistral-7B-v0.1-headlines-seed-18-2025-06-18 | morturr | 2025-06-18T16:37:18Z | 0 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2025-06-18T16:34:51Z | ---
library_name: peft
license: apache-2.0
base_model: mistralai/Mistral-7B-v0.1
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Mistral-7B-v0.1-headlines-seed-18-2025-06-18
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Mistral-7B-v0.1-headlines-seed-18-2025-06-18
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 18
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
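The total train batch size reported above follows from the per-device batch size and gradient accumulation:

```python
train_batch_size = 8
gradient_accumulation_steps = 4

# Effective batch size seen by each optimizer step
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # 32
```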
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1 |
tomaarsen/splade-cocondenser-msmarco-margin-mse-minilm-small-best-og-lambda | tomaarsen | 2025-06-18T16:20:07Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"sparse-encoder",
"sparse",
"splade",
"generated_from_trainer",
"dataset_size:90000",
"loss:SpladeLoss",
"loss:SparseMarginMSELoss",
"loss:FlopsLoss",
"feature-extraction",
"en",
"dataset:tomaarsen/msmarco-margin-mse-minilm",
"arxiv:1908.10084",
"arxiv:2205.04733",
"arxiv:2010.02666",
"arxiv:2004.05665",
"base_model:Luyu/co-condenser-marco",
"base_model:finetune:Luyu/co-condenser-marco",
"license:apache-2.0",
"model-index",
"co2_eq_emissions",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2025-06-18T16:19:54Z | ---
language:
- en
license: apache-2.0
tags:
- sentence-transformers
- sparse-encoder
- sparse
- splade
- generated_from_trainer
- dataset_size:90000
- loss:SpladeLoss
- loss:SparseMarginMSELoss
- loss:FlopsLoss
base_model: Luyu/co-condenser-marco
widget:
- text: weather in ljubljana, slovenia fahrenheit
- text: which type of shark is the largest?
- text: "Plan to have the farrier reset your horse's shoes approximately every\
    \ six weeks. The shoes should be shaped to the horse's feet for a custom\
    \ fit."
- text: what oscars was kudo nominated for
- text: "Answers from Ronald Petersen, M.D. Yes, Alzheimer's disease usually worsens\
    \ slowly. But its speed of progression varies, depending on a person's genetic\
    \ makeup, environmental factors, age at diagnosis and other medical conditions.\
    \ Still, anyone diagnosed with Alzheimer's whose symptoms seem to be progressing\
    \ quickly — or who experiences a sudden decline — should see his\
    \ or her doctor."
datasets:
- tomaarsen/msmarco-margin-mse-minilm
pipeline_tag: feature-extraction
library_name: sentence-transformers
metrics:
- dot_accuracy@1
- dot_accuracy@3
- dot_accuracy@5
- dot_accuracy@10
- dot_precision@1
- dot_precision@3
- dot_precision@5
- dot_precision@10
- dot_recall@1
- dot_recall@3
- dot_recall@5
- dot_recall@10
- dot_ndcg@10
- dot_mrr@10
- dot_map@100
- query_active_dims
- query_sparsity_ratio
- corpus_active_dims
- corpus_sparsity_ratio
co2_eq_emissions:
emissions: 83.826880901293
energy_consumed: 0.21565847590517412
source: codecarbon
training_type: fine-tuning
on_cloud: false
cpu_model: 13th Gen Intel(R) Core(TM) i7-13700K
ram_total_size: 31.777088165283203
hours_used: 0.605
hardware_used: 1 x NVIDIA GeForce RTX 3090
model-index:
- name: CoCondenser trained on MS MARCO
results:
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoMSMARCO
type: NanoMSMARCO
metrics:
- type: dot_accuracy@1
value: 0.38
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.66
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.74
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.84
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.38
name: Dot Precision@1
- type: dot_precision@3
value: 0.22
name: Dot Precision@3
- type: dot_precision@5
value: 0.14800000000000002
name: Dot Precision@5
- type: dot_precision@10
value: 0.08399999999999999
name: Dot Precision@10
- type: dot_recall@1
value: 0.38
name: Dot Recall@1
- type: dot_recall@3
value: 0.66
name: Dot Recall@3
- type: dot_recall@5
value: 0.74
name: Dot Recall@5
- type: dot_recall@10
value: 0.84
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.6144693649032006
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.5413809523809523
name: Dot Mrr@10
- type: dot_map@100
value: 0.5493550749633941
name: Dot Map@100
- type: query_active_dims
value: 21.8799991607666
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9992831400576382
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 152.2333984375
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9950123386921728
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoNFCorpus
type: NanoNFCorpus
metrics:
- type: dot_accuracy@1
value: 0.46
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.62
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.66
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.68
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.46
name: Dot Precision@1
- type: dot_precision@3
value: 0.3933333333333333
name: Dot Precision@3
- type: dot_precision@5
value: 0.336
name: Dot Precision@5
- type: dot_precision@10
value: 0.27
name: Dot Precision@10
- type: dot_recall@1
value: 0.04394139564562181
name: Dot Recall@1
- type: dot_recall@3
value: 0.09679958327922425
name: Dot Recall@3
- type: dot_recall@5
value: 0.11409763756323799
name: Dot Recall@5
- type: dot_recall@10
value: 0.13957168139022116
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.3432819201217046
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.5348333333333334
name: Dot Mrr@10
- type: dot_map@100
value: 0.15059660057720586
name: Dot Map@100
- type: query_active_dims
value: 16.799999237060547
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9994495773790361
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 302.052490234375
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.990103777922994
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoNQ
type: NanoNQ
metrics:
- type: dot_accuracy@1
value: 0.5
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.76
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.82
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.88
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.5
name: Dot Precision@1
- type: dot_precision@3
value: 0.2533333333333333
name: Dot Precision@3
- type: dot_precision@5
value: 0.16799999999999998
name: Dot Precision@5
- type: dot_precision@10
value: 0.09599999999999997
name: Dot Precision@10
- type: dot_recall@1
value: 0.47
name: Dot Recall@1
- type: dot_recall@3
value: 0.71
name: Dot Recall@3
- type: dot_recall@5
value: 0.77
name: Dot Recall@5
- type: dot_recall@10
value: 0.86
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.6788025482787445
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.6378571428571429
name: Dot Mrr@10
- type: dot_map@100
value: 0.6133349567099566
name: Dot Map@100
- type: query_active_dims
value: 24.059999465942383
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9992117161566758
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 198.8192596435547
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9934860343475672
name: Corpus Sparsity Ratio
- task:
type: sparse-nano-beir
name: Sparse Nano BEIR
dataset:
name: NanoBEIR mean
type: NanoBEIR_mean
metrics:
- type: dot_accuracy@1
value: 0.4466666666666667
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.68
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.7399999999999999
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.7999999999999999
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.4466666666666667
name: Dot Precision@1
- type: dot_precision@3
value: 0.28888888888888886
name: Dot Precision@3
- type: dot_precision@5
value: 0.21733333333333335
name: Dot Precision@5
- type: dot_precision@10
value: 0.15
name: Dot Precision@10
- type: dot_recall@1
value: 0.29798046521520727
name: Dot Recall@1
- type: dot_recall@3
value: 0.4889331944264081
name: Dot Recall@3
- type: dot_recall@5
value: 0.541365879187746
name: Dot Recall@5
- type: dot_recall@10
value: 0.613190560463407
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.5455179444345499
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.5713571428571429
name: Dot Mrr@10
- type: dot_map@100
value: 0.4377622107501855
name: Dot Map@100
- type: query_active_dims
value: 20.91333262125651
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9993148111977833
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 204.18456022467345
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9933102496486249
name: Corpus Sparsity Ratio
---
# CoCondenser trained on MS MARCO
This is a [SPLADE Sparse Encoder](https://www.sbert.net/docs/sparse_encoder/usage/usage.html) model finetuned from [Luyu/co-condenser-marco](https://huggingface.co/Luyu/co-condenser-marco) on the [tomaarsen/msmarco-margin-mse-minilm](https://huggingface.co/datasets/tomaarsen/msmarco-margin-mse-minilm) dataset using the [sentence-transformers](https://www.SBERT.net) library. It maps sentences & paragraphs to a 30522-dimensional sparse vector space and can be used for semantic search and sparse retrieval.
## Model Details
### Model Description
- **Model Type:** SPLADE Sparse Encoder
- **Base model:** [Luyu/co-condenser-marco](https://huggingface.co/Luyu/co-condenser-marco) <!-- at revision e0cef0ab2410aae0f0994366ddefb5649a266709 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 30522 dimensions
- **Similarity Function:** Dot Product
- **Training Dataset:**
- [tomaarsen/msmarco-margin-mse-minilm](https://huggingface.co/datasets/tomaarsen/msmarco-margin-mse-minilm)
- **Language:** en
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Documentation:** [Sparse Encoder Documentation](https://www.sbert.net/docs/sparse_encoder/usage/usage.html)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sparse Encoders on Hugging Face](https://huggingface.co/models?library=sentence-transformers&other=sparse-encoder)
### Full Model Architecture
```
SparseEncoder(
(0): MLMTransformer({'max_seq_length': 512, 'do_lower_case': False}) with MLMTransformer model: BertForMaskedLM
(1): SpladePooling({'pooling_strategy': 'max', 'activation_function': 'relu', 'word_embedding_dimension': 30522})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SparseEncoder
# Download from the 🤗 Hub
model = SparseEncoder("tomaarsen/splade-cocondenser-msmarco-margin-mse-minilm-small-best-og-lambda")
# Run inference
queries = [
"what causes aging fast",
]
documents = [
    'UV-A light, specifically, is what mainly causes tanning, skin aging, and cataracts, UV-B causes sunburn, skin aging and skin cancer, and UV-C is the strongest, and therefore most effective at killing microorganisms. Again – single words and multiple bullets.',
    "Answers from Ronald Petersen, M.D. Yes, Alzheimer's disease usually worsens slowly. But its speed of progression varies, depending on a person's genetic makeup, environmental factors, age at diagnosis and other medical conditions. Still, anyone diagnosed with Alzheimer's whose symptoms seem to be progressing quickly — or who experiences a sudden decline — should see his or her doctor.",
"Bell's palsy and Extreme tiredness and Extreme fatigue (2 causes) Bell's palsy and Extreme tiredness and Hepatitis (2 causes) Bell's palsy and Extreme tiredness and Liver pain (2 causes) Bell's palsy and Extreme tiredness and Lymph node swelling in children (2 causes)",
]
query_embeddings = model.encode_query(queries)
document_embeddings = model.encode_document(documents)
print(query_embeddings.shape, document_embeddings.shape)
# [1, 30522] [3, 30522]
# Get the similarity scores for the embeddings
similarities = model.similarity(query_embeddings, document_embeddings)
print(similarities)
# tensor([[9.6845, 6.1128, 4.3030]])
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Sparse Information Retrieval
* Datasets: `NanoMSMARCO`, `NanoNFCorpus` and `NanoNQ`
* Evaluated with [<code>SparseInformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseInformationRetrievalEvaluator)
| Metric | NanoMSMARCO | NanoNFCorpus | NanoNQ |
|:----------------------|:------------|:-------------|:-----------|
| dot_accuracy@1 | 0.38 | 0.46 | 0.5 |
| dot_accuracy@3 | 0.66 | 0.62 | 0.76 |
| dot_accuracy@5 | 0.74 | 0.66 | 0.82 |
| dot_accuracy@10 | 0.84 | 0.68 | 0.88 |
| dot_precision@1 | 0.38 | 0.46 | 0.5 |
| dot_precision@3 | 0.22 | 0.3933 | 0.2533 |
| dot_precision@5 | 0.148 | 0.336 | 0.168 |
| dot_precision@10 | 0.084 | 0.27 | 0.096 |
| dot_recall@1 | 0.38 | 0.0439 | 0.47 |
| dot_recall@3 | 0.66 | 0.0968 | 0.71 |
| dot_recall@5 | 0.74 | 0.1141 | 0.77 |
| dot_recall@10 | 0.84 | 0.1396 | 0.86 |
| **dot_ndcg@10** | **0.6145** | **0.3433** | **0.6788** |
| dot_mrr@10 | 0.5414 | 0.5348 | 0.6379 |
| dot_map@100 | 0.5494 | 0.1506 | 0.6133 |
| query_active_dims | 21.88 | 16.8 | 24.06 |
| query_sparsity_ratio | 0.9993 | 0.9994 | 0.9992 |
| corpus_active_dims | 152.2334 | 302.0525 | 198.8193 |
| corpus_sparsity_ratio | 0.995 | 0.9901 | 0.9935 |
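The sparsity ratios in this table follow directly from the active dimensions and the 30522-dimensional output space, as `1 - active_dims / 30522`. A quick check against the NanoMSMARCO column:

```python
VOCAB_SIZE = 30522  # output dimensionality of the SPLADE sparse vector space

def sparsity_ratio(active_dims: float, dims: int = VOCAB_SIZE) -> float:
    """Fraction of dimensions that are zero in the sparse embedding."""
    return 1.0 - active_dims / dims

print(round(sparsity_ratio(21.88), 4))     # query sparsity on NanoMSMARCO -> 0.9993
print(round(sparsity_ratio(152.2334), 4))  # corpus sparsity on NanoMSMARCO -> 0.995
```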
#### Sparse Nano BEIR
* Dataset: `NanoBEIR_mean`
* Evaluated with [<code>SparseNanoBEIREvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseNanoBEIREvaluator) with these parameters:
```json
{
"dataset_names": [
"msmarco",
"nfcorpus",
"nq"
]
}
```
| Metric | Value |
|:----------------------|:-----------|
| dot_accuracy@1 | 0.4467 |
| dot_accuracy@3 | 0.68 |
| dot_accuracy@5 | 0.74 |
| dot_accuracy@10 | 0.8 |
| dot_precision@1 | 0.4467 |
| dot_precision@3 | 0.2889 |
| dot_precision@5 | 0.2173 |
| dot_precision@10 | 0.15 |
| dot_recall@1 | 0.298 |
| dot_recall@3 | 0.4889 |
| dot_recall@5 | 0.5414 |
| dot_recall@10 | 0.6132 |
| **dot_ndcg@10** | **0.5455** |
| dot_mrr@10 | 0.5714 |
| dot_map@100 | 0.4378 |
| query_active_dims | 20.9133 |
| query_sparsity_ratio | 0.9993 |
| corpus_active_dims | 204.1846 |
| corpus_sparsity_ratio | 0.9933 |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### tomaarsen/msmarco-margin-mse-minilm
* Dataset: [tomaarsen/msmarco-margin-mse-minilm](https://huggingface.co/datasets/tomaarsen/msmarco-margin-mse-minilm)
* Size: 90,000 training samples
* Columns: <code>query</code>, <code>positive</code>, <code>negative</code>, and <code>score</code>
* Approximate statistics based on the first 1000 samples:
| | query | positive | negative | score |
|:--------|:---------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:--------------------------------------------------------------------|
| type | string | string | string | float |
| details | <ul><li>min: 4 tokens</li><li>mean: 9.22 tokens</li><li>max: 36 tokens</li></ul> | <ul><li>min: 15 tokens</li><li>mean: 79.27 tokens</li><li>max: 247 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 81.15 tokens</li><li>max: 201 tokens</li></ul> | <ul><li>min: -14.32</li><li>mean: 4.62</li><li>max: 21.72</li></ul> |
* Samples:
| query | positive | negative | score |
|:---------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------|
  | <code>most powerful army in the world</code> | <code>U.S. Army Reserve Command You may be asking yourself, "What is the Army Reserve?" The Army is the most powerful and sophisticated military force in the world.</code> | <code>The British Royal Navy was the most powerful sea-going force by the time of World War 1 (1914-1918) and this was well-underst...</code> | <code>2.919867515563965</code> |
  | <code>define vasomotor</code> | <code>Define peripheral neuropathy: a disease or degenerative state of the peripheral nerves in which motor, sensory, or vasomotor nerve fibers may be… a disease or degenerative state of the peripheral nerves in which motor, sensory, or vasomotor nerve fibers may be affected and which is marked…</code> | <code>Vairāgya (Devanagari: वैराग्य, also spelt Vairagya) is a Sanskrit term used in Hindu philosophy that roughly translates as dispassion, detachment, or renunciation, in particular renunciation from the pains and pleasures in the material world (Maya).</code> | <code>3.0037026405334473</code> |
  | <code>nitrates definition biology</code> | <code>In Botany or Plant Biology. By Photosynthesis, the palisade cells make glucose which has many uses including: storage as starch, to make fat, to make cellulose and to make protein. Glucose is converted w…ith mineral slat nitrates to make the protein. Nitrates provide the essential nitrogen to make protein. The Ribosome, an organelle of the plant cell, manufactures most of the cell's protein.</code> | <code>Almost all inorganic nitrate salts are soluble in water at standard temperature and pressure. A common example of an inorganic nitrate salt is potassium nitrate (saltpeter). A rich source of inorganic nitrate in the human body comes from diets rich in leafy green foods, such as spinach and arugula. It is now believed that dietary nitrate in the form of plant-based foods is converted in the body to nitrite. Nitrate is a polyatomic ion with the molecular formula NO3− and a molecular mass of 62.0049 g/mol.</code> | <code>-1.6804794073104858</code> |
* Loss: [<code>SpladeLoss</code>](https://sbert.net/docs/package_reference/sparse_encoder/losses.html#spladeloss) with these parameters:
```json
{
"loss": "SparseMarginMSELoss",
"lambda_corpus": 0.08,
"lambda_query": 0.1
}
```
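SparseMarginMSELoss distills the cross-encoder teacher's score margins: the student is trained so that its `score(query, positive) - score(query, negative)` matches the teacher margin stored in the `score` column, via mean squared error. A toy sketch of that objective (assumed toy scores, not the sentence-transformers implementation):

```python
import numpy as np

def margin_mse(student_pos, student_neg, teacher_margin):
    """MSE between the student's pos-neg score margin and the teacher's margin."""
    student_margin = np.asarray(student_pos) - np.asarray(student_neg)
    return float(np.mean((student_margin - np.asarray(teacher_margin)) ** 2))

# Student margins [5.0, 2.0] vs. teacher margins [4.0, 3.0]
loss = margin_mse([9.0, 6.0], [4.0, 4.0], [4.0, 3.0])
print(loss)  # ((5-4)^2 + (2-3)^2) / 2 = 1.0
```

In the actual loss above, the FLOPS regularizers weighted by `lambda_query` and `lambda_corpus` are added on top to encourage sparse activations.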
### Evaluation Dataset
#### Unnamed Dataset
* Size: 10,000 evaluation samples
* Columns: <code>query</code>, <code>positive</code>, <code>negative</code>, and <code>score</code>
* Approximate statistics based on the first 1000 samples:
| | query | positive | negative | score |
|:--------|:---------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-------------------------------------------------------------------|
| type | string | string | string | float |
| details | <ul><li>min: 4 tokens</li><li>mean: 9.01 tokens</li><li>max: 35 tokens</li></ul> | <ul><li>min: 17 tokens</li><li>mean: 79.8 tokens</li><li>max: 336 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 81.3 tokens</li><li>max: 273 tokens</li></ul> | <ul><li>min: -15.9</li><li>mean: 4.91</li><li>max: 21.67</li></ul> |
* Samples:
| query | positive | negative | score |
|:----------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------|
| <code>femoral artery definition</code> | <code>medical Definition of circumflex artery : any of several paired curving arteries: as a: either of two arteries that branch from the deep femoral artery or from the femoral artery itself:</code> | <code>Femoral vein. The femoral vein is located in the upper thigh and pelvic region of the human body. It travels in close proximity to the femoral artery. This vein is one of the larger vessels in the venous system. Instead of draining deoxygenated blood from specific parts of the body, it receives blood from several significant branches. These include popliteal, the profunda femoris, and the great sapheneous veins.</code> | <code>-0.1968388557434082</code> |
| <code>what causes mastitis and how do you treat it</code> | <code>Mastitis is an infection of the tissue of the breast that occurs most frequently during the time of breastfeeding. This infection causes pain, swelling, redness, and increased temperature of the breast. It can occur when bacteria, often from the infant's mouth, enter a milk duct through a crack in the nipple. This causes an infection and painful inflammation of the breast.</code> | <code>Common causes of mastitis include bacteria from the baby's mouth, bacteria entering via breast injuries (bruising, fissures, cracks in the nipple), milk stasis (milk pooling in the breast), and bacteria from the hands of the mother or health care provider.</code> | <code>-0.8143405914306641</code> |
| <code>what is a buck moth</code> | <code>Buck moth caterpillars that have a light background color can be confused with both the Nevada buck moth, Hemileuca nevadensis Stretch, and the New England buck moth, Hemileuca lucina Henry Edwards. The larvae of these three species can best be distinguished based on the preferred host plants (Wagner 2005). They rely on resources that are acquired by the caterpillars (larvae). The caterpillars are robust and can exceed four inches (10 cm) in North America. Figure 4. Adult cecropia moth, Hyalophora cecropia (Linnaeus). Photograph by Pennsylvania Department of Conservation and Natural Resources-Forestry Archive, Bugwood.org.</code> | <code>bucktail that gets talked about quietly in the privacy of remote cabins. The "Musky-Teer" is a big fish bait that anglers treasure in their collection. You won't find these at your local bait shop but we've been stocking these highly prized baits in all colors for years.</code> | <code>11.004357814788818</code> |
* Loss: [<code>SpladeLoss</code>](https://sbert.net/docs/package_reference/sparse_encoder/losses.html#spladeloss) with these parameters:
```json
{
"loss": "SparseMarginMSELoss",
"lambda_corpus": 0.08,
"lambda_query": 0.1
}
```
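The `lambda_corpus` and `lambda_query` weights above scale the FLOPS regularizer (Paria et al., 2020) applied to the document and query representations. Below is a minimal sketch of that regularizer, assuming the standard formulation rather than this library's exact internals:

```python
def flops_reg(batch):
    """FLOPS regularizer: sum over dimensions of the squared mean
    absolute activation across the batch. Penalizing this pushes the
    sparse representations toward fewer active dimensions."""
    n = len(batch)
    dims = len(batch[0])
    return sum(
        (sum(abs(row[j]) for row in batch) / n) ** 2
        for j in range(dims)
    )

# Two hypothetical 4-dimensional sparse vectors.
batch = [[0.0, 2.0, 0.0, 1.0],
         [0.0, 0.0, 0.0, 3.0]]
# Per-dimension means: 0, 1, 0, 2  ->  0^2 + 1^2 + 0^2 + 2^2 = 5
reg = flops_reg(batch)  # -> 5.0
```

In the combined objective this term would enter roughly as `margin_loss + 0.1 * flops_reg(queries) + 0.08 * flops_reg(documents)`, using the `lambda_query` and `lambda_corpus` values listed above.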
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 1
- `warmup_ratio`: 0.1
- `fp16`: True
- `load_best_model_at_end`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
- `router_mapping`: {}
- `learning_rate_mapping`: {}
</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss | NanoMSMARCO_dot_ndcg@10 | NanoNFCorpus_dot_ndcg@10 | NanoNQ_dot_ndcg@10 | NanoBEIR_mean_dot_ndcg@10 |
|:----------:|:--------:|:-------------:|:---------------:|:-----------------------:|:------------------------:|:------------------:|:-------------------------:|
| 0.0178 | 100 | 501728.56 | - | - | - | - | - |
| 0.0356 | 200 | 9694.6262 | - | - | - | - | - |
| 0.0533 | 300 | 61.7172 | - | - | - | - | - |
| 0.0711 | 400 | 36.9925 | - | - | - | - | - |
| 0.0889 | 500 | 28.3854 | 23.2348 | 0.4989 | 0.3066 | 0.5195 | 0.4417 |
| 0.1067 | 600 | 24.1433 | - | - | - | - | - |
| 0.1244 | 700 | 22.1908 | - | - | - | - | - |
| 0.1422 | 800 | 21.8601 | - | - | - | - | - |
| 0.16 | 900 | 20.6542 | - | - | - | - | - |
| 0.1778 | 1000 | 19.7559 | 18.6699 | 0.5447 | 0.3132 | 0.6324 | 0.4967 |
| 0.1956 | 1100 | 19.0111 | - | - | - | - | - |
| 0.2133 | 1200 | 19.9952 | - | - | - | - | - |
| 0.2311 | 1300 | 19.2956 | - | - | - | - | - |
| 0.2489 | 1400 | 18.2804 | - | - | - | - | - |
| 0.2667 | 1500 | 18.4746 | 17.1064 | 0.6133 | 0.3191 | 0.6282 | 0.5202 |
| 0.2844 | 1600 | 17.4687 | - | - | - | - | - |
| 0.3022 | 1700 | 17.3765 | - | - | - | - | - |
| 0.32 | 1800 | 17.0284 | - | - | - | - | - |
| 0.3378 | 1900 | 16.2671 | - | - | - | - | - |
| 0.3556 | 2000 | 16.0607 | 15.5336 | 0.6257 | 0.3232 | 0.6330 | 0.5273 |
| 0.3733 | 2100 | 16.4676 | - | - | - | - | - |
| 0.3911 | 2200 | 15.9879 | - | - | - | - | - |
| 0.4089 | 2300 | 14.9848 | - | - | - | - | - |
| 0.4267 | 2400 | 15.0367 | - | - | - | - | - |
| 0.4444 | 2500 | 14.4999 | 13.8716 | 0.6180 | 0.3373 | 0.6617 | 0.5390 |
| 0.4622 | 2600 | 14.3147 | - | - | - | - | - |
| 0.48 | 2700 | 15.0698 | - | - | - | - | - |
| 0.4978 | 2800 | 15.2789 | - | - | - | - | - |
| 0.5156 | 2900 | 13.7896 | - | - | - | - | - |
| **0.5333** | **3000** | **13.8203** | **13.4835** | **0.6145** | **0.3433** | **0.6788** | **0.5455** |
| 0.5511 | 3100 | 13.2853 | - | - | - | - | - |
| 0.5689 | 3200 | 13.3642 | - | - | - | - | - |
| 0.5867 | 3300 | 14.1746 | - | - | - | - | - |
| 0.6044 | 3400 | 12.2178 | - | - | - | - | - |
| 0.6222 | 3500 | 13.0088 | 12.4034 | 0.6224 | 0.3350 | 0.6530 | 0.5368 |
| 0.64 | 3600 | 12.7507 | - | - | - | - | - |
| 0.6578 | 3700 | 12.7018 | - | - | - | - | - |
| 0.6756 | 3800 | 14.6372 | - | - | - | - | - |
| 0.6933 | 3900 | 13.8265 | - | - | - | - | - |
| 0.7111 | 4000 | 12.1383 | 14.6959 | 0.6064 | 0.3389 | 0.6569 | 0.5341 |
| 0.7289 | 4100 | 13.06 | - | - | - | - | - |
| 0.7467 | 4200 | 12.3468 | - | - | - | - | - |
| 0.7644 | 4300 | 12.4433 | - | - | - | - | - |
| 0.7822 | 4400 | 11.8032 | - | - | - | - | - |
| 0.8 | 4500 | 12.1634 | 11.7610 | 0.5964 | 0.3461 | 0.6710 | 0.5378 |
| 0.8178 | 4600 | 12.2753 | - | - | - | - | - |
| 0.8356 | 4700 | 11.6148 | - | - | - | - | - |
| 0.8533 | 4800 | 12.0564 | - | - | - | - | - |
| 0.8711 | 4900 | 11.8624 | - | - | - | - | - |
| 0.8889 | 5000 | 12.3799 | 12.7286 | 0.6181 | 0.3475 | 0.6380 | 0.5345 |
| 0.9067 | 5100 | 11.5523 | - | - | - | - | - |
| 0.9244 | 5200 | 11.0108 | - | - | - | - | - |
| 0.9422 | 5300 | 11.4062 | - | - | - | - | - |
| 0.96 | 5400 | 11.3638 | - | - | - | - | - |
| 0.9778 | 5500 | 11.2487 | 11.5828 | 0.6110 | 0.3469 | 0.6504 | 0.5361 |
| 0.9956 | 5600 | 11.543 | - | - | - | - | - |
| -1 | -1 | - | - | 0.6145 | 0.3433 | 0.6788 | 0.5455 |
* The bold row denotes the saved checkpoint.
### Environmental Impact
Carbon emissions were measured using [CodeCarbon](https://github.com/mlco2/codecarbon).
- **Energy Consumed**: 0.216 kWh
- **Carbon Emitted**: 0.084 kg of CO2
- **Hours Used**: 0.605 hours
### Training Hardware
- **On Cloud**: No
- **GPU Model**: 1 x NVIDIA GeForce RTX 3090
- **CPU Model**: 13th Gen Intel(R) Core(TM) i7-13700K
- **RAM Size**: 31.78 GB
### Framework Versions
- Python: 3.11.6
- Sentence Transformers: 4.2.0.dev0
- Transformers: 4.52.4
- PyTorch: 2.6.0+cu124
- Accelerate: 1.5.1
- Datasets: 2.21.0
- Tokenizers: 0.21.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### SpladeLoss
```bibtex
@misc{formal2022distillationhardnegativesampling,
title={From Distillation to Hard Negative Sampling: Making Sparse Neural IR Models More Effective},
author={Thibault Formal and Carlos Lassance and Benjamin Piwowarski and Stéphane Clinchant},
year={2022},
eprint={2205.04733},
archivePrefix={arXiv},
primaryClass={cs.IR},
url={https://arxiv.org/abs/2205.04733},
}
```
#### SparseMarginMSELoss
```bibtex
@misc{hofstätter2021improving,
title={Improving Efficient Neural Ranking Models with Cross-Architecture Knowledge Distillation},
author={Sebastian Hofstätter and Sophia Althammer and Michael Schröder and Mete Sertkan and Allan Hanbury},
year={2021},
eprint={2010.02666},
archivePrefix={arXiv},
primaryClass={cs.IR}
}
```
#### FlopsLoss
```bibtex
@article{paria2020minimizing,
title={Minimizing flops to learn efficient sparse representations},
author={Paria, Biswajit and Yeh, Chih-Kuan and Yen, Ian EH and Xu, Ning and Ravikumar, Pradeep and P{\'o}czos, Barnab{\'a}s},
journal={arXiv preprint arXiv:2004.05665},
year={2020}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
LandCruiser/sn21_omg_1806_28 | LandCruiser | 2025-06-18T16:14:22Z | 0 | 0 | null | [
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | 2025-06-18T16:12:46Z | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
LandCruiser/sn21_omg_1806_26 | LandCruiser | 2025-06-18T16:14:16Z | 0 | 0 | null | [
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | 2025-06-18T16:12:39Z | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
mlfoundations-dev/DeepSeek-R1-Distill-Qwen-1.5B_OpenThoughts3 | mlfoundations-dev | 2025-06-18T16:12:26Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-18T16:08:47Z | ---
library_name: transformers
license: other
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: DeepSeek-R1-Distill-Qwen-1.5B_OpenThoughts3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DeepSeek-R1-Distill-Qwen-1.5B_OpenThoughts3
This model is a fine-tuned version of [/leonardo_work/EUHPC_E03_068/DCFT_shared/hub/models--deepseek-ai--DeepSeek-R1-Distill-Qwen-1.5B/snapshots/ad9f0ae0864d7fbcd1cd905e3c6c5b069cc8b562](https://huggingface.co//leonardo_work/EUHPC_E03_068/DCFT_shared/hub/models--deepseek-ai--DeepSeek-R1-Distill-Qwen-1.5B/snapshots/ad9f0ae0864d7fbcd1cd905e3c6c5b069cc8b562) on the mlfoundations-dev/OpenThoughts3 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 512
- total_train_batch_size: 512
- total_eval_batch_size: 4096
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.6.0+cu124
- Datasets 3.1.0
- Tokenizers 0.20.0
|
mradermacher/RLPR-Qwen2.5-7B-Base-GGUF | mradermacher | 2025-06-18T15:56:46Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:openbmb/RLPR-train",
"base_model:RLAIF-V/RLPR-Qwen2.5-7B-Base",
"base_model:quantized:RLAIF-V/RLPR-Qwen2.5-7B-Base",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-06-18T13:27:36Z | ---
base_model: RLAIF-V/RLPR-Qwen2.5-7B-Base
datasets:
- openbmb/RLPR-train
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/RLAIF-V/RLPR-Qwen2.5-7B-Base
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/RLPR-Qwen2.5-7B-Base-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/RLPR-Qwen2.5-7B-Base-GGUF/resolve/main/RLPR-Qwen2.5-7B-Base.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/RLPR-Qwen2.5-7B-Base-GGUF/resolve/main/RLPR-Qwen2.5-7B-Base.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/RLPR-Qwen2.5-7B-Base-GGUF/resolve/main/RLPR-Qwen2.5-7B-Base.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/RLPR-Qwen2.5-7B-Base-GGUF/resolve/main/RLPR-Qwen2.5-7B-Base.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/RLPR-Qwen2.5-7B-Base-GGUF/resolve/main/RLPR-Qwen2.5-7B-Base.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/RLPR-Qwen2.5-7B-Base-GGUF/resolve/main/RLPR-Qwen2.5-7B-Base.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/RLPR-Qwen2.5-7B-Base-GGUF/resolve/main/RLPR-Qwen2.5-7B-Base.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/RLPR-Qwen2.5-7B-Base-GGUF/resolve/main/RLPR-Qwen2.5-7B-Base.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/RLPR-Qwen2.5-7B-Base-GGUF/resolve/main/RLPR-Qwen2.5-7B-Base.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/RLPR-Qwen2.5-7B-Base-GGUF/resolve/main/RLPR-Qwen2.5-7B-Base.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/RLPR-Qwen2.5-7B-Base-GGUF/resolve/main/RLPR-Qwen2.5-7B-Base.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/RLPR-Qwen2.5-7B-Base-GGUF/resolve/main/RLPR-Qwen2.5-7B-Base.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
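A back-of-envelope way to read the size column: dividing file size by parameter count gives approximate bits per weight. The sketch below assumes roughly 7.6B parameters for Qwen2.5-7B and treats GB as 10^9 bytes; both are approximations, so the results are ballpark figures only.

```python
def bits_per_weight(file_size_gb, n_params_billion=7.6):
    """Approximate bits per weight for a quantized checkpoint."""
    return file_size_gb * 1e9 * 8 / (n_params_billion * 1e9)

bits_per_weight(4.8)   # Q4_K_M: ~5.1 bits/weight
bits_per_weight(8.2)   # Q8_0:   ~8.6 bits/weight
bits_per_weight(15.3)  # f16:    ~16.1 bits/weight
```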
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
LandCruiser/sn21_omg_1806_11 | LandCruiser | 2025-06-18T15:51:24Z | 0 | 0 | null | [
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | 2025-06-18T15:45:47Z | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
morturr/Llama-2-7b-hf-LOO_dadjokes-COMB_headlines-comb3-seed7-2025-06-18 | morturr | 2025-06-18T15:47:51Z | 0 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-06-18T15:47:35Z | ---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-LOO_dadjokes-COMB_headlines-comb3-seed7-2025-06-18
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-LOO_dadjokes-COMB_headlines-comb3-seed7-2025-06-18
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 7
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1 |
dgambettaphd/M_llm2_run2_gen6_WXS_doc1000_synt120_lr1e-04_acm_SYNLAST | dgambettaphd | 2025-06-18T15:44:12Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-18T15:43:57Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
talphaidze/molm-fineweb-edu-scientific1 | talphaidze | 2025-06-18T15:42:22Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"MoLM",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"region:us"
] | text-generation | 2025-06-18T11:16:32Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
alzidy/Qwen3_14B | alzidy | 2025-06-18T15:40:48Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-06-18T15:40:48Z | ---
license: apache-2.0
---
|
kathleenge/ps | kathleenge | 2025-06-18T15:35:40Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-18T15:35:17Z | ---
base_model: unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** kathleenge
- **License:** apache-2.0
- **Finetuned from model:** unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
xaek08/bart-base-finetuned-ccdv-govreport | xaek08 | 2025-06-18T15:33:38Z | 8 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bart",
"text2text-generation",
"summarization",
"generated_from_trainer",
"dataset:ccdv/govreport-summarization",
"base_model:facebook/bart-base",
"base_model:finetune:facebook/bart-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | summarization | 2025-06-16T18:36:24Z | ---
library_name: transformers
license: apache-2.0
base_model: facebook/bart-base
tags:
- summarization
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-base-finetuned-ccdv-govreport
results: []
datasets:
- ccdv/govreport-summarization
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-base-finetuned-ccdv-govreport
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the [ccdv/govreport-summarization](https://huggingface.co/datasets/ccdv/govreport-summarization) dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8338
- Rouge1: 0.3117
- Rouge2: 0.1529
- Rougel: 0.2621
- Rougelsum: 0.269
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|
| 2.0154 | 1.0 | 2190 | 1.8889 | 0.2786 | 0.1373 | 0.236 | 0.2419 |
| 1.5738 | 2.0 | 4380 | 1.8338 | 0.3117 | 0.1529 | 0.2621 | 0.269 |
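For a sense of what the ROUGE columns above measure, ROUGE-1 F1 can be sketched in pure Python as unigram overlap between a prediction and a reference (a simplified version of what the full ROUGE metric computes; the example sentences are made up):

```python
from collections import Counter

def rouge1_f1(prediction: str, reference: str) -> float:
    """Simplified ROUGE-1 F1: unigram overlap between prediction and reference."""
    pred_tokens, ref_tokens = prediction.split(), reference.split()
    # Clipped overlap: each reference token can be matched at most once
    overlap = sum((Counter(pred_tokens) & Counter(ref_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

print(rouge1_f1("the report recommends oversight",
                "the report recommends more oversight"))
```

The reported scores also include ROUGE-2 (bigram overlap) and ROUGE-L/Lsum (longest common subsequence), which follow the same precision/recall/F1 pattern over different units.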
### Framework versions
- Transformers 4.53.0.dev0
- Pytorch 2.7.1+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1 |
Zillis/2025_PAAMA_MODEL_J.EUN_PV8 | Zillis | 2025-06-18T15:32:14Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-18T10:02:42Z |































































|
Flickinshots/ppo-LunarLander-v2 | Flickinshots | 2025-06-18T15:30:15Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2025-06-18T15:29:52Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 249.01 +/- 16.94
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption; adjust it to the actual file):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub; the filename is an assumption
checkpoint = load_from_hub("Flickinshots/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
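The reported `mean_reward` of 249.01 +/- 16.94 is the mean and standard deviation of per-episode returns over evaluation rollouts; a pure-Python sketch of that statistic (the episode returns below are made up for illustration):

```python
import statistics

# Hypothetical per-episode returns from evaluation rollouts (illustrative only)
episode_returns = [262.0, 240.5, 255.3, 230.1, 267.2]

mean_reward = statistics.mean(episode_returns)
std_reward = statistics.pstdev(episode_returns)  # population std, as np.std computes
print(f"{mean_reward:.2f} +/- {std_reward:.2f}")
```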
|
sgonzalezygil/sd-finetuning-dreambooth-v11 | sgonzalezygil | 2025-06-18T15:30:11Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2025-06-18T15:28:16Z | ---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
annasoli/Qwen2.5-14B-Instruct_R1-DP8-LR2e-5_bad-medical-advice | annasoli | 2025-06-18T15:14:29Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-18T13:50:05Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
AnubhavSC/MAYA-PJ3 | AnubhavSC | 2025-06-18T15:13:52Z | 0 | 0 | null | [
"safetensors",
"unsloth",
"license:mit",
"region:us"
] | null | 2025-06-18T14:26:39Z | ---
license: mit
tags:
- unsloth
---
|
morturr/Llama-2-7b-hf-LOO_headlines-COMB_dadjokes-comb3-seed18-2025-06-18 | morturr | 2025-06-18T14:57:38Z | 0 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-06-18T14:57:23Z | ---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-LOO_headlines-COMB_dadjokes-comb3-seed18-2025-06-18
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-LOO_headlines-COMB_dadjokes-comb3-seed18-2025-06-18
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 16
- seed: 18
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
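The total train batch size above follows from the per-device batch size and gradient accumulation (single-device training is an assumption here):

```python
train_batch_size = 16
gradient_accumulation_steps = 4
num_devices = 1  # assumption: single GPU

# Gradients are accumulated over several forward passes before each optimizer step
total_train_batch_size = train_batch_size * gradient_accumulation_steps * num_devices
print(total_train_batch_size)  # → 64
```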
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1 |
Rohit131313/job-skill-predictor-lora | Rohit131313 | 2025-06-18T14:52:07Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-18T14:51:53Z | ---
base_model: unsloth/llama-3-8b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Rohit131313
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
neural-interactive-proofs/finetune_dpo_cv_test_lm_server_30_0_iter_0_provers_group_2025-06-18_15-40-03_Qwen_Qwen2.5-0.5B-I | neural-interactive-proofs | 2025-06-18T14:40:38Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"dpo",
"arxiv:2305.18290",
"base_model:Qwen/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-0.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-06-18T14:40:34Z | ---
base_model: Qwen/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: finetune_dpo_cv_test_lm_server_30_0_iter_0_provers_group_2025-06-18_15-40-03_Qwen_Qwen2.5-0.5B-I
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for finetune_dpo_cv_test_lm_server_30_0_iter_0_provers_group_2025-06-18_15-40-03_Qwen_Qwen2.5-0.5B-I
This model is a fine-tuned version of [Qwen/Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="neural-interactive-proofs/finetune_dpo_cv_test_lm_server_30_0_iter_0_provers_group_2025-06-18_15-40-03_Qwen_Qwen2.5-0.5B-I", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/lrhammond-team/pvg-self-hosted-finetune/runs/Qwen_Qwen2.5-0.5B-Instruct_dpo_2025-06-18_15-40-03_cv_test_lm_server_30_0_iter_0_provers_group)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
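Per example, the DPO loss is the negative log-sigmoid of the β-scaled difference between the policy/reference log-probability ratios of the chosen and rejected responses. A pure-Python numeric sketch (the log-probability values and β are illustrative, not taken from this training run):

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """Per-example DPO loss: -log(sigmoid(beta * (chosen_ratio - rejected_ratio)))."""
    chosen_ratio = policy_chosen_logp - ref_chosen_logp
    rejected_ratio = policy_rejected_logp - ref_rejected_logp
    logits = beta * (chosen_ratio - rejected_ratio)
    return -math.log(1 / (1 + math.exp(-logits)))

# Illustrative values: the policy prefers the chosen response more than the reference does,
# so the loss is below -log(0.5) ≈ 0.693
print(dpo_loss(-10.0, -12.0, -11.0, -11.5))
```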
### Framework versions
- TRL: 0.15.2
- Transformers: 4.52.4
- Pytorch: 2.7.0
- Datasets: 2.21.0
- Tokenizers: 0.21.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
    author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |