modelId string | author string | last_modified timestamp[us, tz=UTC] | downloads int64 | likes int64 | library_name string | tags list | pipeline_tag string | createdAt timestamp[us, tz=UTC] | card string |
|---|---|---|---|---|---|---|---|---|---|
LarryAIDraw/Theresa_Arknights-000001 | LarryAIDraw | 2025-06-17T06:56:50Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2025-06-17T06:31:47Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/1679176/theresa-arknights |
luckeciano/Qwen-2.5-7B-GRPO-Base-NoAdvNorm_9763 | luckeciano | 2025-06-17T06:54:16Z | 13 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"conversational",
"dataset:DigitalLearningGmbH/MATH-lighteval",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-Math-7B",
"base_model:finetune:Qwen/Qwen2.5-Math-7B",
"autotrain_compa... | text-generation | 2025-06-17T01:23:05Z | ---
base_model: Qwen/Qwen2.5-Math-7B
datasets: DigitalLearningGmbH/MATH-lighteval
library_name: transformers
model_name: Qwen-2.5-7B-GRPO-Base-NoAdvNorm_9763
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---
# Model Card for Qwen-2.5-7B-GRPO-Base-NoAdvNorm_9763
This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) on the [DigitalLearningGmbH/MATH-lighteval](https://huggingface.co/datasets/DigitalLearningGmbH/MATH-lighteval) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="luckeciano/Qwen-2.5-7B-GRPO-Base-NoAdvNorm_9763", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/max-ent-llms/PolicyGradientStability/runs/5jjz6xej)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.49.0
- Pytorch: 2.6.0
- Datasets: 3.4.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
syahaeun/qwen2-resume-evaluator | syahaeun | 2025-06-17T06:47:22Z | 41 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:Qwen/Qwen2-1.5B-Instruct",
"base_model:adapter:Qwen/Qwen2-1.5B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-06-17T05:22:14Z | ---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2-1.5B-Instruct
tags:
- generated_from_trainer
model-index:
- name: qwen2-resume-evaluator
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qwen2-resume-evaluator
This model is a fine-tuned version of [Qwen/Qwen2-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2-1.5B-Instruct) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- training_steps: 292
- mixed_precision_training: Native AMP
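The total train batch size listed above follows from the per-device batch size times the gradient accumulation steps (times the number of devices, assumed to be 1 here). A minimal sketch of that relationship:

```python
def total_train_batch_size(per_device: int, grad_accum: int, n_devices: int = 1) -> int:
    """Effective batch size seen by the optimizer per update step."""
    return per_device * grad_accum * n_devices

# Matches the card: train_batch_size=1, gradient_accumulation_steps=4 -> 4
print(total_train_batch_size(1, 4))  # 4
```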
### Framework versions
- PEFT 0.7.0
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1 |
artianand/religion_adapter_deberta_v3_large_race_custom_loss_lamda_07_batch_8 | artianand | 2025-06-17T06:47:22Z | 4 | 0 | adapter-transformers | [
"adapter-transformers",
"deberta-v2",
"region:us"
] | null | 2025-06-17T06:47:17Z | ---
tags:
- deberta-v2
- adapter-transformers
---
# Adapter `artianand/religion_adapter_deberta_v3_large_race_custom_loss_lamda_07_batch_8` for artianand/deberta-v3-large-race
An [adapter](https://adapterhub.ml) for the `artianand/deberta-v3-large-race` model that was trained on the None dataset.
This adapter was created for usage with the **[Adapters](https://github.com/Adapter-Hub/adapters)** library.
## Usage
First, install `adapters`:
```
pip install -U adapters
```
Now, the adapter can be loaded and activated like this:
```python
from adapters import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("artianand/deberta-v3-large-race")
adapter_name = model.load_adapter("artianand/religion_adapter_deberta_v3_large_race_custom_loss_lamda_07_batch_8", set_active=True)
```
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here --> |
rayhaan-beeharry/gemma3_1B_IT_psych-Q4_K_M-GGUF | rayhaan-beeharry | 2025-06-17T06:41:52Z | 9 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:rayhaan-beeharry/gemma3_1B_IT_psych",
"base_model:quantized:rayhaan-beeharry/gemma3_1B_IT_psych",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-06-17T06:41:46Z | ---
license: mit
base_model: rayhaan-beeharry/gemma3_1B_IT_psych
tags:
- llama-cpp
- gguf-my-repo
---
# rayhaan-beeharry/gemma3_1B_IT_psych-Q4_K_M-GGUF
This model was converted to GGUF format from [`rayhaan-beeharry/gemma3_1B_IT_psych`](https://huggingface.co/rayhaan-beeharry/gemma3_1B_IT_psych) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/rayhaan-beeharry/gemma3_1B_IT_psych) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo rayhaan-beeharry/gemma3_1B_IT_psych-Q4_K_M-GGUF --hf-file gemma3_1b_it_psych-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo rayhaan-beeharry/gemma3_1B_IT_psych-Q4_K_M-GGUF --hf-file gemma3_1b_it_psych-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo rayhaan-beeharry/gemma3_1B_IT_psych-Q4_K_M-GGUF --hf-file gemma3_1b_it_psych-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo rayhaan-beeharry/gemma3_1B_IT_psych-Q4_K_M-GGUF --hf-file gemma3_1b_it_psych-q4_k_m.gguf -c 2048
```
|
YC645/uuu_fine_tune_gpt2 | YC645 | 2025-06-17T06:36:20Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-06-17T06:36:20Z | ---
license: apache-2.0
---
|
YC645/llama2_uuu_news_qlora | YC645 | 2025-06-17T06:35:49Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-06-17T06:35:49Z | ---
license: apache-2.0
---
|
kicoi/ppo-Huggy | kicoi | 2025-06-17T06:30:23Z | 27 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2025-06-17T06:30:10Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: kicoi/ppo-Huggy
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
WHWeng/llama2_uuu_news_qlora | WHWeng | 2025-06-17T06:22:59Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-06-17T06:22:59Z | ---
license: apache-2.0
---
|
ryanmitts/ryan-tts | ryanmitts | 2025-06-17T06:14:49Z | 0 | 0 | chatterbox | [
"chatterbox",
"text-to-speech",
"speech generation",
"voice-cloning",
"en",
"license:mit",
"region:us"
] | text-to-speech | 2025-06-17T05:40:42Z | ---
license: mit
language:
- en
tags:
- text-to-speech
- speech generation
- voice-cloning
pipeline_tag: text-to-speech
library_name: chatterbox
---
|
onnx-community/NeuroBERT-NER-ONNX | onnx-community | 2025-06-17T06:13:11Z | 0 | 1 | transformers.js | [
"transformers.js",
"onnx",
"bert",
"token-classification",
"base_model:boltuix/NeuroBERT-NER",
"base_model:quantized:boltuix/NeuroBERT-NER",
"region:us"
] | token-classification | 2025-06-17T06:13:09Z | ---
library_name: transformers.js
base_model:
- boltuix/NeuroBERT-NER
---
# NeuroBERT-NER (ONNX)
This is an ONNX version of [boltuix/NeuroBERT-NER](https://huggingface.co/boltuix/NeuroBERT-NER). It was automatically converted and uploaded using [this space](https://huggingface.co/spaces/onnx-community/convert-to-onnx).
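A usage sketch with Transformers.js (the pipeline task and repo id below follow the usual onnx-community conventions — verify against the repo before relying on them):

```javascript
import { pipeline } from "@huggingface/transformers";

// Load the ONNX checkpoint through the token-classification pipeline.
const ner = await pipeline("token-classification", "onnx-community/NeuroBERT-NER-ONNX");

// Run NER on a sample sentence.
const entities = await ner("Sarah lives in London and works for Acme Corp.");
console.log(entities);
```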
|
stingshaw/llama2_uuu_news_qlora | stingshaw | 2025-06-17T06:11:56Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-06-17T06:11:56Z | ---
license: apache-2.0
---
|
stingshaw/tcp2023 | stingshaw | 2025-06-17T06:10:53Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-06-17T06:10:53Z | ---
license: apache-2.0
---
|
santanukumar07/biogpt-finetune | santanukumar07 | 2025-06-17T06:08:34Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-17T06:08:29Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ASIEK/ppo-LunarLander-v2 | ASIEK | 2025-06-17T06:06:40Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2025-06-17T06:06:17Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 231.30 +/- 59.74
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
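The `mean_reward` metric above is typically reported as the mean and standard deviation of episode returns over a set of evaluation episodes. A minimal sketch of that computation (the use of the population standard deviation here is an assumption, following SB3's `evaluate_policy` convention):

```python
import statistics

def summarize_returns(episode_returns):
    """Mean +/- population std of episode returns, as in the mean_reward metric."""
    mean = statistics.mean(episode_returns)
    std = statistics.pstdev(episode_returns)
    return round(mean, 2), round(std, 2)

# Illustrative returns, not the actual evaluation data for this model
print(summarize_returns([200.0, 250.0, 180.0, 300.0]))  # (232.5, 46.57)
```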
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is assumed to follow the usual `huggingface_sb3` convention — check the repo's files):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the zipped checkpoint from the Hub and load it with SB3.
checkpoint = load_from_hub(repo_id="ASIEK/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
yinita/cpdc-Qwen3-8B-grpo-v1-300step | yinita | 2025-06-17T05:42:42Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-17T05:40:46Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ArtusDev/yamatazen_EtherealAurora-12B-v2-EXL3 | ArtusDev | 2025-06-17T05:38:52Z | 1 | 0 | transformers | [
"transformers",
"mergekit",
"merge",
"chatml",
"exl3",
"en",
"ja",
"base_model:yamatazen/EtherealAurora-12B-v2",
"base_model:quantized:yamatazen/EtherealAurora-12B-v2",
"endpoints_compatible",
"region:us"
] | null | 2025-06-12T17:08:53Z | ---
base_model: yamatazen/EtherealAurora-12B-v2
base_model_relation: quantized
quantized_by: ArtusDev
library_name: transformers
tags:
- mergekit
- merge
- chatml
- exl3
language:
- en
- ja
---
## EXL3 Quants of yamatazen/EtherealAurora-12B-v2
EXL3 quants of [yamatazen/EtherealAurora-12B-v2](https://huggingface.co/yamatazen/EtherealAurora-12B-v2) using <a href="https://github.com/turboderp-org/exllamav3/">exllamav3</a> for quantization.
### Quants
| Quant(Revision) | Bits per Weight | Head Bits |
| -------- | ---------- | --------- |
| [2.5_H6](https://huggingface.co/ArtusDev/yamatazen_EtherealAurora-12B-v2-EXL3/tree/2.5bpw_H6) | 2.5 | 6 |
| [3.0_H6](https://huggingface.co/ArtusDev/yamatazen_EtherealAurora-12B-v2-EXL3/tree/3.0bpw_H6) | 3.0 | 6 |
| [3.5_H6](https://huggingface.co/ArtusDev/yamatazen_EtherealAurora-12B-v2-EXL3/tree/3.5bpw_H6) | 3.5 | 6 |
| [4.0_H6](https://huggingface.co/ArtusDev/yamatazen_EtherealAurora-12B-v2-EXL3/tree/4.0bpw_H6) | 4.0 | 6 |
| [4.5_H6](https://huggingface.co/ArtusDev/yamatazen_EtherealAurora-12B-v2-EXL3/tree/4.5bpw_H6) | 4.5 | 6 |
| [5.0_H6](https://huggingface.co/ArtusDev/yamatazen_EtherealAurora-12B-v2-EXL3/tree/5.0bpw_H6) | 5.0 | 6 |
| [6.0_H6](https://huggingface.co/ArtusDev/yamatazen_EtherealAurora-12B-v2-EXL3/tree/6.0bpw_H6) | 6.0 | 6 |
| [8.0_H6](https://huggingface.co/ArtusDev/yamatazen_EtherealAurora-12B-v2-EXL3/tree/8.0bpw_H6) | 8.0 | 6 |
| [8.0_H8](https://huggingface.co/ArtusDev/yamatazen_EtherealAurora-12B-v2-EXL3/tree/8.0bpw_H8) | 8.0 | 8 |
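As a rough rule of thumb, on-disk size scales linearly with the bits-per-weight figures in the table above. The estimate below is an approximation — actual files also include head layers (quantized at the listed head bits) and container metadata:

```python
def approx_quant_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Approximate on-disk size of a quantized model in decimal GB,
    ignoring head/embedding overhead and container metadata."""
    return n_params * bits_per_weight / 8 / 1e9

# e.g. a 12B-parameter model at 4.0 bpw is roughly 6 GB
print(approx_quant_size_gb(12e9, 4.0))  # 6.0
```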
### Downloading quants with huggingface-cli
<details>
<summary>Click to view download instructions</summary>
Install huggingface-cli:
```bash
pip install -U "huggingface_hub[cli]"
```
Download quant by targeting the specific quant revision (branch):
```
huggingface-cli download ArtusDev/yamatazen_EtherealAurora-12B-v2-EXL3 --revision "5.0bpw_H6" --local-dir ./
```
</details>
|
ArtusDev/qingy2024_GRMR-V3-L3B-EXL3 | ArtusDev | 2025-06-17T05:37:48Z | 1 | 0 | transformers | [
"transformers",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"sft",
"exl3",
"en",
"base_model:qingy2024/GRMR-V3-L3B",
"base_model:quantized:qingy2024/GRMR-V3-L3B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-05T18:45:30Z | ---
base_model: qingy2024/GRMR-V3-L3B
base_model_relation: quantized
quantized_by: ArtusDev
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
- exl3
license: apache-2.0
language:
- en
---
## EXL3 Quants of qingy2024/GRMR-V3-L3B
EXL3 quants of [qingy2024/GRMR-V3-L3B](https://huggingface.co/qingy2024/GRMR-V3-L3B) using <a href="https://github.com/turboderp-org/exllamav3/">exllamav3</a> for quantization.
### Quants
| Quant(Revision) | Bits per Weight | Head Bits |
| -------- | ---------- | --------- |
| [3.0_H6](https://huggingface.co/ArtusDev/qingy2024_GRMR-V3-L3B-EXL3/tree/3.0bpw_H6) | 3.0 | 6 |
| [3.5_H6](https://huggingface.co/ArtusDev/qingy2024_GRMR-V3-L3B-EXL3/tree/3.5bpw_H6) | 3.5 | 6 |
| [4.0_H6](https://huggingface.co/ArtusDev/qingy2024_GRMR-V3-L3B-EXL3/tree/4.0bpw_H6) | 4.0 | 6 |
| [4.5_H6](https://huggingface.co/ArtusDev/qingy2024_GRMR-V3-L3B-EXL3/tree/4.5bpw_H6) | 4.5 | 6 |
| [5.0_H6](https://huggingface.co/ArtusDev/qingy2024_GRMR-V3-L3B-EXL3/tree/5.0bpw_H6) | 5.0 | 6 |
| [6.0_H6](https://huggingface.co/ArtusDev/qingy2024_GRMR-V3-L3B-EXL3/tree/6.0bpw_H6) | 6.0 | 6 |
| [8.0_H6](https://huggingface.co/ArtusDev/qingy2024_GRMR-V3-L3B-EXL3/tree/8.0bpw_H6) | 8.0 | 6 |
| [8.0_H8](https://huggingface.co/ArtusDev/qingy2024_GRMR-V3-L3B-EXL3/tree/8.0bpw_H8) | 8.0 | 8 |
### Downloading quants with huggingface-cli
<details>
<summary>Click to view download instructions</summary>
Install huggingface-cli:
```bash
pip install -U "huggingface_hub[cli]"
```
Download quant by targeting the specific quant revision (branch):
```
huggingface-cli download ArtusDev/qingy2024_GRMR-V3-L3B-EXL3 --revision "5.0bpw_H6" --local-dir ./
```
</details>
|
abrilpalacios/economic_news_v2 | abrilpalacios | 2025-06-17T05:37:29Z | 36 | 0 | null | [
"safetensors",
"bert",
"region:us"
] | null | 2025-03-19T22:40:13Z | # BERT Model Trained on Economic News (Inflation-Focused)
This repository contains a custom BERT-based model fine-tuned on a corpus of economic and inflation-related news, developed as part of my doctoral research in economics.
## 🧠 Model Overview
The model was trained using a domain-specific corpus of Spanish-language economic news articles, with a focus on texts related to inflation, monetary policy, and macroeconomic indicators. It was fine-tuned for sentiment classification to extract insights on public and media perceptions of economic conditions.
The fine-tuned model is applied in my thesis to generate two sentiment indices:
- A **general economic sentiment index**
- An **inflation-specific sentiment index**
These indices are used as input in a dynamic principal component analysis (DPCA) framework to study their role in explaining volatility and inflation expectations.
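The indices can be sketched as a simple aggregation of article-level sentiment scores (the equal-weight averaging and the monthly frequency are assumptions for illustration — the thesis may weight articles differently):

```python
from collections import defaultdict

def monthly_sentiment_index(scored_articles):
    """Aggregate article-level sentiment scores (e.g. classifier outputs mapped
    to [-1, 1]) into a monthly index by simple averaging.
    The aggregation scheme is an illustrative assumption."""
    buckets = defaultdict(list)
    for month, score in scored_articles:
        buckets[month].append(score)
    return {m: sum(v) / len(v) for m, v in sorted(buckets.items())}

index = monthly_sentiment_index([("2023-01", 0.2), ("2023-01", -0.4), ("2023-02", 0.5)])
print(index)  # {'2023-01': -0.1, '2023-02': 0.5}
```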
## 📈 Applications in Research
The outputs of the model are used in the empirical chapters of my doctoral dissertation. In particular:
- A **volatility analysis** using DPCA, where the sentiment indices are tested as explanatory components.
- Several **figures from the thesis** are included to illustrate the role of sentiment during crisis periods.
- A document with **additional technical notes** is available [here](https://huggingface.co/abrilpalacios/economic_news_v2/blob/main/Additional_notes.pdf) (QR also included below), which provides supplementary tables, model details, and methodological explanations.
## 🗂 Files Included
- `.gitattributes` – metadata for Git versioning
- `biplot_dinamico_con_fechas.gif` – dynamic biplot showing sentiment evolution over time
- `config.json` – model configuration
- `model.safetensors` – trained model weights in `safetensors` format
- `news_v2.ipynb` – main notebook with data processing and model application
- `Upload news_v2.ipynb` – backup or alternate version of the notebook
- `special_tokens_map.json` – tokenizer special tokens configuration
- `tokenizer_config.json` – tokenizer settings
- `vocab.txt` – vocabulary file for tokenizer
- `Additional_notes.pdf` – supplementary document with extended statistical results and methodology notes
## 🔗 Citation and Attribution
This model is part of an ongoing Ph.D. dissertation in Economics. If you use it in your own work, please cite appropriately or contact me via Hugging Face or email.
## 📎 QR Code to Extended Notes

|
ArtusDev/TheDrummer_Rivermind-Lux-12B-v1-EXL2 | ArtusDev | 2025-06-17T05:37:09Z | 1 | 0 | null | [
"base_model:TheDrummer/Rivermind-Lux-12B-v1",
"base_model:quantized:TheDrummer/Rivermind-Lux-12B-v1",
"region:us"
] | null | 2025-06-02T16:32:28Z | ---
base_model: TheDrummer/Rivermind-Lux-12B-v1
base_model_relation: quantized
quantized_by: ArtusDev
---
## EXL2 Quants of TheDrummer/Rivermind-Lux-12B-v1
EXL2 quants of [TheDrummer/Rivermind-Lux-12B-v1](https://huggingface.co/TheDrummer/Rivermind-Lux-12B-v1) using <a href="https://github.com/turboderp-org/exllamav2/">exllamav2</a> for quantization.
### Quants
| Quant(Revision) | Bits per Weight | Head Bits |
| -------- | ---------- | --------- |
| [2.5_H6](https://huggingface.co/ArtusDev/TheDrummer_Rivermind-Lux-12B-v1-EXL2/tree/2.5bpw_H6) | 2.5 | 6 |
| [3.0_H6](https://huggingface.co/ArtusDev/TheDrummer_Rivermind-Lux-12B-v1-EXL2/tree/3.0bpw_H6) | 3.0 | 6 |
| [3.5_H6](https://huggingface.co/ArtusDev/TheDrummer_Rivermind-Lux-12B-v1-EXL2/tree/3.5bpw_H6) | 3.5 | 6 |
| [4.0_H6](https://huggingface.co/ArtusDev/TheDrummer_Rivermind-Lux-12B-v1-EXL2/tree/4.0bpw_H6) | 4.0 | 6 |
| [4.5_H6](https://huggingface.co/ArtusDev/TheDrummer_Rivermind-Lux-12B-v1-EXL2/tree/4.5bpw_H6) | 4.5 | 6 |
| [5.0_H6](https://huggingface.co/ArtusDev/TheDrummer_Rivermind-Lux-12B-v1-EXL2/tree/5.0bpw_H6) | 5.0 | 6 |
| [6.0_H6](https://huggingface.co/ArtusDev/TheDrummer_Rivermind-Lux-12B-v1-EXL2/tree/6.0bpw_H6) | 6.0 | 6 |
| [8.0_H6](https://huggingface.co/ArtusDev/TheDrummer_Rivermind-Lux-12B-v1-EXL2/tree/8.0bpw_H6) | 8.0 | 6 |
| [8.0_H8](https://huggingface.co/ArtusDev/TheDrummer_Rivermind-Lux-12B-v1-EXL2/tree/8.0bpw_H8) | 8.0 | 8 |
### Downloading quants with huggingface-cli
<details>
<summary>Click to view download instructions</summary>
Install huggingface-cli:
```bash
pip install -U "huggingface_hub[cli]"
```
Download quant by targeting the specific quant revision (branch):
```bash
huggingface-cli download ArtusDev/TheDrummer_Rivermind-Lux-12B-v1-EXL2 --revision "5.0bpw_H6" --local-dir ./
```
</details>
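The branch names in the quant table follow a simple `{bpw}bpw_H{head_bits}` pattern. As an illustration (this helper is not part of any official API, just a sketch of the naming convention), the revision string for `--revision` can be built programmatically:

```python
def quant_revision(bpw: float, head_bits: int = 6) -> str:
    """Return the branch name for a quant, e.g. (5.0, 6) -> '5.0bpw_H6'."""
    # Python's default float formatting matches the branch names:
    # 2.5 -> '2.5', 4.25 -> '4.25', 8.0 -> '8.0'
    return f"{bpw}bpw_H{head_bits}"

# Assemble the full CLI command for one quant of this repo
repo = "ArtusDev/TheDrummer_Rivermind-Lux-12B-v1-EXL2"
cmd = f'huggingface-cli download {repo} --revision "{quant_revision(5.0)}" --local-dir ./'
print(cmd)
```

Any revision passed to `--revision` must match a branch listed in the quant table above.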
|
ArtusDev/CharGen_CharGen-v3-mini-EXL3 | ArtusDev | 2025-06-17T05:36:49Z | 5 | 0 | null | [
"roleplay",
"exl3",
"text-generation",
"en",
"base_model:CharGen/CharGen-v3-mini",
"base_model:quantized:CharGen/CharGen-v3-mini",
"license:mit",
"region:us"
] | text-generation | 2025-06-02T09:17:53Z | ---
base_model: CharGen/CharGen-v3-mini
base_model_relation: quantized
quantized_by: ArtusDev
license: mit
language:
- en
pipeline_tag: text-generation
tags:
- roleplay
- exl3
---
## EXL3 Quants of CharGen/CharGen-v3-mini
EXL3 quants of [CharGen/CharGen-v3-mini](https://huggingface.co/CharGen/CharGen-v3-mini) using <a href="https://github.com/turboderp-org/exllamav3/">exllamav3</a> for quantization.
### Quants
| Quant(Revision) | Bits per Weight | Head Bits |
| -------- | ---------- | --------- |
| [3.0_H6](https://huggingface.co/ArtusDev/CharGen_CharGen-v3-mini-EXL3/tree/3.0bpw_H6) | 3.0 | 6 |
| [3.5_H6](https://huggingface.co/ArtusDev/CharGen_CharGen-v3-mini-EXL3/tree/3.5bpw_H6) | 3.5 | 6 |
| [4.0_H6](https://huggingface.co/ArtusDev/CharGen_CharGen-v3-mini-EXL3/tree/4.0bpw_H6) | 4.0 | 6 |
| [4.5_H6](https://huggingface.co/ArtusDev/CharGen_CharGen-v3-mini-EXL3/tree/4.5bpw_H6) | 4.5 | 6 |
| [5.0_H6](https://huggingface.co/ArtusDev/CharGen_CharGen-v3-mini-EXL3/tree/5.0bpw_H6) | 5.0 | 6 |
| [6.0_H6](https://huggingface.co/ArtusDev/CharGen_CharGen-v3-mini-EXL3/tree/6.0bpw_H6) | 6.0 | 6 |
| [8.0_H6](https://huggingface.co/ArtusDev/CharGen_CharGen-v3-mini-EXL3/tree/8.0bpw_H6) | 8.0 | 6 |
| [8.0_H8](https://huggingface.co/ArtusDev/CharGen_CharGen-v3-mini-EXL3/tree/8.0bpw_H8) | 8.0 | 8 |
### Downloading quants with huggingface-cli
<details>
<summary>Click to view download instructions</summary>
Install huggingface-cli:
```bash
pip install -U "huggingface_hub[cli]"
```
Download quant by targeting the specific quant revision (branch):
```bash
huggingface-cli download ArtusDev/CharGen_CharGen-v3-mini-EXL3 --revision "5.0bpw_H6" --local-dir ./
```
</details>
|
ArtusDev/TareksTesting_Scripturient-V2.3-LLaMa-70B-EXL3 | ArtusDev | 2025-06-17T05:36:44Z | 9 | 0 | transformers | [
"transformers",
"mergekit",
"merge",
"exl3",
"base_model:TareksTesting/Scripturient-V2.3-LLaMa-70B",
"base_model:quantized:TareksTesting/Scripturient-V2.3-LLaMa-70B",
"license:llama3.3",
"endpoints_compatible",
"region:us"
] | null | 2025-05-31T15:16:25Z | ---
base_model: TareksTesting/Scripturient-V2.3-LLaMa-70B
base_model_relation: quantized
quantized_by: ArtusDev
library_name: transformers
license: llama3.3
tags:
- mergekit
- merge
- exl3
---
## EXL3 Quants of TareksTesting/Scripturient-V2.3-LLaMa-70B
EXL3 quants of [TareksTesting/Scripturient-V2.3-LLaMa-70B](https://huggingface.co/TareksTesting/Scripturient-V2.3-LLaMa-70B) using <a href="https://github.com/turboderp-org/exllamav3/">exllamav3</a> for quantization.
### Quants
| Quant(Revision) | Bits per Weight | Head Bits |
| -------- | ---------- | --------- |
| [3.5_H6](https://huggingface.co/ArtusDev/TareksTesting_Scripturient-V2.3-LLaMa-70B-EXL3/tree/3.5bpw_H6) | 3.5 | 6 |
| [4.25_H6](https://huggingface.co/ArtusDev/TareksTesting_Scripturient-V2.3-LLaMa-70B-EXL3/tree/4.25bpw_H6) | 4.25 | 6 |
### Downloading quants with huggingface-cli
<details>
<summary>Click to view download instructions</summary>
Install huggingface-cli:
```bash
pip install -U "huggingface_hub[cli]"
```
Download quant by targeting the specific quant revision (branch):
```bash
huggingface-cli download ArtusDev/TareksTesting_Scripturient-V2.3-LLaMa-70B-EXL3 --revision "3.5bpw_H6" --local-dir ./
```
</details>
|
ArtusDev/TareksTesting_Scripturient-V2.0-LLaMa-70B-EXL3 | ArtusDev | 2025-06-17T05:36:04Z | 4 | 0 | transformers | [
"transformers",
"mergekit",
"merge",
"exl3",
"base_model:TareksTesting/Scripturient-V2.0-LLaMa-70B",
"base_model:quantized:TareksTesting/Scripturient-V2.0-LLaMa-70B",
"license:llama3.3",
"endpoints_compatible",
"region:us"
] | null | 2025-05-31T15:32:20Z | ---
base_model: TareksTesting/Scripturient-V2.0-LLaMa-70B
base_model_relation: quantized
quantized_by: ArtusDev
library_name: transformers
license: llama3.3
tags:
- mergekit
- merge
- exl3
---
## EXL3 Quants of TareksTesting/Scripturient-V2.0-LLaMa-70B
EXL3 quants of [TareksTesting/Scripturient-V2.0-LLaMa-70B](https://huggingface.co/TareksTesting/Scripturient-V2.0-LLaMa-70B) using <a href="https://github.com/turboderp-org/exllamav3/">exllamav3</a> for quantization.
### Quants
| Quant(Revision) | Bits per Weight | Head Bits |
| -------- | ---------- | --------- |
| [3.5_H6](https://huggingface.co/ArtusDev/TareksTesting_Scripturient-V2.0-LLaMa-70B-EXL3/tree/3.5bpw_H6) | 3.5 | 6 |
### Downloading quants with huggingface-cli
<details>
<summary>Click to view download instructions</summary>
Install huggingface-cli:
```bash
pip install -U "huggingface_hub[cli]"
```
Download quant by targeting the specific quant revision (branch):
```bash
huggingface-cli download ArtusDev/TareksTesting_Scripturient-V2.0-LLaMa-70B-EXL3 --revision "3.5bpw_H6" --local-dir ./
```
</details>
|
ArtusDev/TareksTesting_Scripturient-V2.1-LLaMa-70B-EXL3 | ArtusDev | 2025-06-17T05:35:53Z | 2 | 0 | transformers | [
"transformers",
"mergekit",
"merge",
"exl3",
"base_model:TareksTesting/Scripturient-V2.1-LLaMa-70B",
"base_model:quantized:TareksTesting/Scripturient-V2.1-LLaMa-70B",
"license:llama3.3",
"endpoints_compatible",
"region:us"
] | null | 2025-05-31T15:25:47Z | ---
base_model: TareksTesting/Scripturient-V2.1-LLaMa-70B
base_model_relation: quantized
quantized_by: ArtusDev
library_name: transformers
license: llama3.3
tags:
- mergekit
- merge
- exl3
---
## EXL3 Quants of TareksTesting/Scripturient-V2.1-LLaMa-70B
EXL3 quants of [TareksTesting/Scripturient-V2.1-LLaMa-70B](https://huggingface.co/TareksTesting/Scripturient-V2.1-LLaMa-70B) using <a href="https://github.com/turboderp-org/exllamav3/">exllamav3</a> for quantization.
### Quants
| Quant(Revision) | Bits per Weight | Head Bits |
| -------- | ---------- | --------- |
| [3.5_H6](https://huggingface.co/ArtusDev/TareksTesting_Scripturient-V2.1-LLaMa-70B-EXL3/tree/3.5bpw_H6) | 3.5 | 6 |
### Downloading quants with huggingface-cli
<details>
<summary>Click to view download instructions</summary>
Install huggingface-cli:
```bash
pip install -U "huggingface_hub[cli]"
```
Download quant by targeting the specific quant revision (branch):
```bash
huggingface-cli download ArtusDev/TareksTesting_Scripturient-V2.1-LLaMa-70B-EXL3 --revision "3.5bpw_H6" --local-dir ./
```
</details>
|
ArtusDev/Steelskull_L3.3-Nevoria-R1-70b-EXL3 | ArtusDev | 2025-06-17T05:35:26Z | 28 | 0 | transformers | [
"transformers",
"mergekit",
"merge",
"exl3",
"base_model:Steelskull/L3.3-Nevoria-R1-70b",
"base_model:quantized:Steelskull/L3.3-Nevoria-R1-70b",
"license:other",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2025-05-30T21:22:51Z | ---
base_model: Steelskull/L3.3-Nevoria-R1-70b
base_model_relation: quantized
quantized_by: ArtusDev
library_name: transformers
license: other
license_name: eva-llama3.3
tags:
- mergekit
- merge
- exl3
model-index:
- name: L3.3-Nevoria-R1-70b
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: wis-k/instruction-following-eval
split: train
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 60.24
name: averaged accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=Steelskull%2FL3.3-Nevoria-R1-70b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: SaylorTwift/bbh
split: test
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 56.17
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=Steelskull%2FL3.3-Nevoria-R1-70b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: lighteval/MATH-Hard
split: test
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 46.68
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=Steelskull%2FL3.3-Nevoria-R1-70b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
split: train
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 29.19
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=Steelskull%2FL3.3-Nevoria-R1-70b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 20.19
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=Steelskull%2FL3.3-Nevoria-R1-70b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 49.59
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=Steelskull%2FL3.3-Nevoria-R1-70b
name: Open LLM Leaderboard
---
## EXL3 Quants of Steelskull/L3.3-Nevoria-R1-70b
EXL3 quants of [Steelskull/L3.3-Nevoria-R1-70b](https://huggingface.co/Steelskull/L3.3-Nevoria-R1-70b) using <a href="https://github.com/turboderp-org/exllamav3/">exllamav3</a> for quantization.
### Quants
| Quant(Revision) | Bits per Weight | Head Bits |
| -------- | ---------- | --------- |
| [2.5_H6](https://huggingface.co/ArtusDev/Steelskull_L3.3-Nevoria-R1-70b-EXL3/tree/2.5bpw_H6) | 2.5 | 6 |
| [3.0_H6](https://huggingface.co/ArtusDev/Steelskull_L3.3-Nevoria-R1-70b-EXL3/tree/3.0bpw_H6) | 3.0 | 6 |
| [3.5_H6](https://huggingface.co/ArtusDev/Steelskull_L3.3-Nevoria-R1-70b-EXL3/tree/3.5bpw_H6) | 3.5 | 6 |
| [3.75_H6](https://huggingface.co/ArtusDev/Steelskull_L3.3-Nevoria-R1-70b-EXL3/tree/3.75bpw_H6) | 3.75 | 6 |
| [4.0_H6](https://huggingface.co/ArtusDev/Steelskull_L3.3-Nevoria-R1-70b-EXL3/tree/4.0bpw_H6) | 4.0 | 6 |
| [4.25_H6](https://huggingface.co/ArtusDev/Steelskull_L3.3-Nevoria-R1-70b-EXL3/tree/4.25bpw_H6) | 4.25 | 6 |
| [4.5_H6](https://huggingface.co/ArtusDev/Steelskull_L3.3-Nevoria-R1-70b-EXL3/tree/4.5bpw_H6) | 4.5 | 6 |
| [5.0_H6](https://huggingface.co/ArtusDev/Steelskull_L3.3-Nevoria-R1-70b-EXL3/tree/5.0bpw_H6) | 5.0 | 6 |
| [6.0_H6](https://huggingface.co/ArtusDev/Steelskull_L3.3-Nevoria-R1-70b-EXL3/tree/6.0bpw_H6) | 6.0 | 6 |
| [8.0_H6](https://huggingface.co/ArtusDev/Steelskull_L3.3-Nevoria-R1-70b-EXL3/tree/8.0bpw_H6) | 8.0 | 6 |
| [8.0_H8](https://huggingface.co/ArtusDev/Steelskull_L3.3-Nevoria-R1-70b-EXL3/tree/8.0bpw_H8) | 8.0 | 8 |
### Downloading quants with huggingface-cli
<details>
<summary>Click to view download instructions</summary>
Install huggingface-cli:
```bash
pip install -U "huggingface_hub[cli]"
```
Download quant by targeting the specific quant revision (branch):
```bash
huggingface-cli download ArtusDev/Steelskull_L3.3-Nevoria-R1-70b-EXL3 --revision "5.0bpw_H6" --local-dir ./
```
</details>
|
Zack-Z/qwen3_4bi_cotsft_rs0_0_5cut_cot2all_indep_ntt_e2 | Zack-Z | 2025-06-17T05:35:07Z | 0 | 0 | transformers | [
"transformers",
"qwen3",
"feature-extraction",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/Qwen3-4B",
"base_model:finetune:unsloth/Qwen3-4B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2025-06-17T05:20:21Z | ---
base_model: unsloth/Qwen3-4B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** Zack-Z
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen3-4B
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
asm3515/bert_agnews_lora_rank16 | asm3515 | 2025-06-17T05:31:40Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-16T18:21:02Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
yumiian/qa_en_ms_model_v3 | yumiian | 2025-06-17T05:16:41Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"question-answering",
"generated_from_trainer",
"base_model:mesolitica/finetune-qa-t5-small-standard-bahasa-cased",
"base_model:finetune:mesolitica/finetune-qa-t5-small-standard-bahasa-cased",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | question-answering | 2025-06-17T03:45:51Z | ---
library_name: transformers
base_model: mesolitica/finetune-qa-t5-small-standard-bahasa-cased
tags:
- generated_from_trainer
model-index:
- name: qa_en_ms_model_v3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qa_en_ms_model_v3
This model is a fine-tuned version of [mesolitica/finetune-qa-t5-small-standard-bahasa-cased](https://huggingface.co/mesolitica/finetune-qa-t5-small-standard-bahasa-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8296
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.3752 | 1.0 | 1047 | 1.0868 |
| 1.1233 | 2.0 | 2094 | 0.9799 |
| 1.0295 | 3.0 | 3141 | 0.9326 |
| 0.9485 | 4.0 | 4188 | 0.9051 |
| 0.8883 | 5.0 | 5235 | 0.8748 |
| 0.8485 | 6.0 | 6282 | 0.8605 |
| 0.8243 | 7.0 | 7329 | 0.8485 |
| 0.7977 | 8.0 | 8376 | 0.8417 |
| 0.7597 | 9.0 | 9423 | 0.8286 |
| 0.7495 | 10.0 | 10470 | 0.8306 |
| 0.7211 | 11.0 | 11517 | 0.8255 |
| 0.7076 | 12.0 | 12564 | 0.8291 |
| 0.7012 | 13.0 | 13611 | 0.8350 |
| 0.6833 | 14.0 | 14658 | 0.8288 |
| 0.6687 | 15.0 | 15705 | 0.8230 |
| 0.6574 | 16.0 | 16752 | 0.8313 |
| 0.6342 | 17.0 | 17799 | 0.8239 |
| 0.6419 | 18.0 | 18846 | 0.8271 |
| 0.6451 | 19.0 | 19893 | 0.8302 |
| 0.6278 | 20.0 | 20940 | 0.8296 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.5.1+cu121
- Datasets 3.6.0
- Tokenizers 0.21.1
|
tusku330/shawgpt-ft | tusku330 | 2025-06-17T05:07:27Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:TheBloke/Mistral-7B-Instruct-v0.2-GPTQ",
"base_model:adapter:TheBloke/Mistral-7B-Instruct-v0.2-GPTQ",
"license:apache-2.0",
"region:us"
] | null | 2025-06-16T15:59:02Z | ---
library_name: peft
license: apache-2.0
base_model: TheBloke/Mistral-7B-Instruct-v0.2-GPTQ
tags:
- generated_from_trainer
model-index:
- name: shawgpt-ft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# shawgpt-ft
This model is a fine-tuned version of [TheBloke/Mistral-7B-Instruct-v0.2-GPTQ](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.2-GPTQ) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7620
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.PAGED_ADAMW_8BIT with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 4.2138 | 1.0 | 4 | 3.7807 |
| 3.4685 | 2.0 | 8 | 3.1365 |
| 2.9233 | 3.0 | 12 | 2.6613 |
| 2.4915 | 4.0 | 16 | 2.3249 |
| 2.2518 | 5.0 | 20 | 2.0618 |
| 1.8532 | 6.0 | 24 | 1.8620 |
| 1.675 | 7.0 | 28 | 1.7761 |
| 1.7253 | 7.6154 | 30 | 1.7620 |
### Framework versions
- PEFT 0.14.0
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1 |
mayankgrd/medgemma-4b-it-sft-lora-crc100k | mayankgrd | 2025-06-17T04:41:22Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/medgemma-4b-it",
"base_model:finetune:google/medgemma-4b-it",
"endpoints_compatible",
"region:us"
] | null | 2025-06-01T07:46:59Z | ---
base_model: google/medgemma-4b-it
library_name: transformers
model_name: medgemma-4b-it-sft-lora-crc100k
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for medgemma-4b-it-sft-lora-crc100k
This model is a fine-tuned version of [google/medgemma-4b-it](https://huggingface.co/google/medgemma-4b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="mayankgrd/medgemma-4b-it-sft-lora-crc100k", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.18.1
- Transformers: 4.52.4
- Pytorch: 2.7.0+cu128
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Ashkh0099/fine-tune-ALBERT-FINAL | Ashkh0099 | 2025-06-17T04:38:28Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"albert",
"question-answering",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | question-answering | 2025-06-17T03:03:44Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
cucucu666/huanhu-6.17 | cucucu666 | 2025-06-17T04:34:00Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"flux",
"flux-diffusers",
"template:sd-lora",
"base_model:black-forest-labs/FLUX.1-Fill-dev",
"base_model:adapter:black-forest-labs/FLUX.1-Fill-dev",
"license:other",
"region:us"
] | text-to-image | 2025-06-17T02:29:50Z | ---
base_model: black-forest-labs/FLUX.1-Fill-dev
library_name: diffusers
license: other
instance_prompt: labii face, Crayon Shin-chan style, cheerful expression, big smile,
open mouth, plain color background
widget:
- text: labii face, Crayon Shin-chan style, cheerful expression, big smile, open mouth,
plain color background
output:
url: image_0.png
- text: labii face, Crayon Shin-chan style, cheerful expression, big smile, open mouth,
plain color background
output:
url: image_1.png
- text: labii face, Crayon Shin-chan style, cheerful expression, big smile, open mouth,
plain color background
output:
url: image_2.png
- text: labii face, Crayon Shin-chan style, cheerful expression, big smile, open mouth,
plain color background
output:
url: image_3.png
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- flux
- flux-diffusers
- template:sd-lora
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# Flux-Fill DreamBooth LoRA - cucucu666/huanhu-6.17
<Gallery />
## Model description
These are cucucu666/huanhu-6.17 DreamBooth LoRA weights for black-forest-labs/FLUX.1-Fill-dev.
The weights were trained using [DreamBooth](https://dreambooth.github.io/) with a custom [Flux diffusers trainer](https://github.com/Sebastian-Zok/FLUX-Fill-LoRa-Training).
Was LoRA for the text encoder enabled? False.
## Trigger words
You should use `labii face, Crayon Shin-chan style, cheerful expression, big smile, open mouth, plain color background` to trigger the image generation.
## Download model
[Download the *.safetensors LoRA](https://huggingface.co/cucucu666/huanhu-6.17/tree/main) in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
import torch
from diffusers import FluxFillPipeline
from diffusers.utils import load_image

# These weights were trained against FLUX.1-Fill-dev (an inpainting model), so load
# them into the Fill pipeline rather than the base text-to-image checkpoint.
pipeline = FluxFillPipeline.from_pretrained("black-forest-labs/FLUX.1-Fill-dev", torch_dtype=torch.bfloat16).to("cuda")
pipeline.load_lora_weights("cucucu666/huanhu-6.17", weight_name="pytorch_lora_weights.safetensors")
image = pipeline(
    prompt="labii face, Crayon Shin-chan style, cheerful expression, big smile, open mouth, plain color background",
    image=load_image("input.png"),      # placeholder paths: image to fill
    mask_image=load_image("mask.png"),  # white marks the region to repaint
).images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## License
Please adhere to the licensing terms as described [here](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
shisa-ai/017-qwen3-8b-v2-dpo405b-clr | shisa-ai | 2025-06-17T04:26:07Z | 0 | 0 | null | [
"safetensors",
"qwen3",
"ja",
"en",
"dataset:shisa-ai/shisa-v2-sharegpt",
"dataset:shisa-ai/shisa-v2-405b-ultrafeedback-armorm",
"base_model:Qwen/Qwen3-8B",
"base_model:finetune:Qwen/Qwen3-8B",
"license:apache-2.0",
"region:us"
] | null | 2025-06-15T10:41:21Z | ---
license: apache-2.0
datasets:
- shisa-ai/shisa-v2-sharegpt
- shisa-ai/shisa-v2-405b-ultrafeedback-armorm
language:
- ja
- en
base_model:
- Qwen/Qwen3-8B
---
This is a WIP version of Qwen3 8B post-trained on the full Shisa V2 recipe.
This is a *non-reasoning* model and thinking has been disabled in the default `chat_template`.
This will be replaced shortly by a V2.1, but preliminary benchmarks suggest that it is quite strong.
Shaberi (judged by GPT-4.1):
| Model | Average | ELYZA 100 | JA-MT | Rakuda | Tengu |
|--------------------------------------|---------|-----------|-------|--------|--------|
| 017-qwen3-8b-v2-dpo405b-clr-nothink | **7.75** | **7.88** | **8.08** | **8.08** | **6.94** |
| shisa-ai/shisa-v2-llama3.1-8b | 7.14 | 7.54 | 6.83 | 7.85 | 6.34 |
| shisa-ai/shisa-v2-qwen2.5-7b | 7.10 | 7.48 | 7.40 | 7.18 | 6.33 |
And JA MT-Bench (judged by GPT-4.1):
| Model | coding | extraction | humanities | math | reasoning | roleplay | stem | writing | Overall |
|--------------------------------------|--------|------------|------------|------|-----------|----------|------|---------|---------|
| 017-qwen3-8b-v2-dpo405b-clr-nothink | **7.3** | **7.55** | **8.85** | **9.3** | **6.05** | **7.9** | **8.6** | **8.9** | **8.06** |
| shisa-ai/shisa-v2-qwen2.5-7b | 6.7 | 7.15 | 7.55 | 8.5 | 5.4 | **7.9** | 7.5 | 7.7 | 7.3 |
| shisa-ai/shisa-v2-llama3.1-8b | 5.3 | 6.95 | 8.4 | 6.55 | 5.95 | 7.65 | 7.25 | 7.9 | 6.99 |
|
Reallusion/3D_Concept | Reallusion | 2025-06-17T04:09:38Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"en",
"dataset:laion/laion-art",
"base_model:stable-diffusion-v1-5/stable-diffusion-v1-5",
"base_model:finetune:stable-diffusion-v1-5/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2025-06-17T04:06:00Z | ---
license: creativeml-openrail-m
datasets:
- laion/laion-art
language:
- en
base_model:
- stable-diffusion-v1-5/stable-diffusion-v1-5
pipeline_tag: text-to-image
library_name: diffusers
---
# SDv1-5 3D_Concept Model Card
## Model Source
This model was originally created by Iris_DS; see the [Civitai model page](https://civitai.com/models/58431/darksun?modelVersionId=130121).
## License
This model is licensed under the [CreativeML Open RAIL-M license](https://huggingface.co/spaces/CompVis/stable-diffusion-license) and its [addendum](https://civitai.com/models/license/121126).
Please review the license for detailed terms, including restrictions on usage.
## Usage
This model is intended for use exclusively via our plugin, which automatically downloads the model from this repository for integration with ComfyUI.
The model is **not intended for direct download or commercial deployment**.
Users must comply with the license terms, including restrictions against unlawful, harmful, or commercial uses.
## Attribution
Model copyright © 2022 Robin Rombach, Patrick Esser, contributors (Stable Diffusion 1.5 original developers), and [Iris_DS](https://civitai.com/user/Iris_DS).
## Change Log
2025-06-12 Uploaded to our organization's Hugging Face account with no modifications to the original model files. |
mezzo-fun-18-video/mezzo.fun.viral.video.Link.viral.On.Social.Media | mezzo-fun-18-video | 2025-06-17T03:48:03Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-17T03:47:53Z | 01 seconds ago
[🌐 𝖢𝖫𝖨𝖢𝖪 𝖧𝖤𝖱𝖤 🟢==►► 𝖶𝖠𝖳𝖢𝖧 𝖭𝖮𝖶](https://sahabagi-mgi.blogspot.com/p/heres-now.html)
[🌐 𝖢𝖫𝖨𝖢𝖪 𝖧𝖤𝖱𝖤 🟢==►► 𝖶𝖠𝖳𝖢𝖧 𝖭𝖮𝖶 FREE](https://sahabagi-mgi.blogspot.com/p/heres-now.html)
<a href="https://sahabagi-mgi.blogspot.com/p/heres-now.html" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="WATCH Videos" data-canonical-src="https://i.imgur.com/dJHk4Zq.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
GeerBox/q-FrozenLake-v1-4x4-noSlippery | GeerBox | 2025-06-17T03:40:01Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2025-06-17T03:39:58Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym

# `load_from_hub` is the helper defined in the Hugging Face Deep RL course notebook.
model = load_from_hub(repo_id="GeerBox/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
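Once the pickled Q-table is loaded, acting is just a per-state argmax over Q-values. A minimal sketch (the table below uses hypothetical values, not the trained weights):

```python
def greedy_action(q_table, state):
    """Pick the action with the highest Q-value for `state` (ties go to the lowest index)."""
    row = q_table[state]
    return max(range(len(row)), key=row.__getitem__)

# Tiny 2-state, 3-action table for illustration:
q_table = [
    [0.1, 0.7, 0.2],
    [0.5, 0.4, 0.0],
]
```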
|
nguyenanh2803/qwen-r1-grpo-aha-moment | nguyenanh2803 | 2025-06-17T03:38:10Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"grpo",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2-1.5B-Instruct",
"base_model:finetune:Qwen/Qwen2-1.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-06-17T03:38:05Z | ---
base_model: Qwen/Qwen2-1.5B-Instruct
library_name: transformers
model_name: qwen-r1-grpo-aha-moment
tags:
- generated_from_trainer
- trl
- grpo
licence: license
---
# Model Card for qwen-r1-grpo-aha-moment
This model is a fine-tuned version of [Qwen/Qwen2-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="nguyenanh2803/qwen-r1-grpo-aha-moment", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
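GRPO's core idea, scoring each sampled completion against its own group instead of a learned value baseline, can be sketched as follows (a simplified illustration; the TRL implementation additionally handles batching, clipping, and the KL term):

```python
import statistics

def group_relative_advantages(rewards):
    """GRPO-style advantage: z-score each reward against its own sampled group."""
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards) or 1.0  # guard against a zero-variance group
    return [(r - mean) / std for r in rewards]
```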
### Framework versions
- TRL: 0.18.2
- Transformers: 4.52.4
- Pytorch: 2.7.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
lejelly/test-size-dataset-task-wise-llm-adamerge-crossentropy-mistral-7b-instrcut-math-code | lejelly | 2025-06-17T03:33:22Z | 0 | 0 | null | [
"safetensors",
"mistral",
"merge",
"parameter_wise",
"llm-adamerge",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:finetune:mistralai/Mistral-7B-v0.1",
"region:us"
] | null | 2025-06-17T03:30:50Z | ---
tags:
- merge
- parameter_wise
- llm-adamerge
base_model: mistralai/Mistral-7B-v0.1
---
# Merged Model using LLM-AdaMerge (parameter_wise)
This model was created by merging multiple fine-tuned models using the LLM-AdaMerge approach with parameter_wise merging.
## Merge Details
- **Merge Type**: parameter_wise
- **Base Model**: mistralai/Mistral-7B-v0.1
- **Number of Models Merged**: 3
- **Models Merged**: instruct, math, code
- **Final Training Loss**: N/A
- **Training Epochs**: 0
## Lambda Coefficients
The following lambda coefficients were learned during training:
### Parameter-wise Lambdas
This model uses parameter-wise lambda coefficients. Total parameters with individual lambdas: N/A
See the uploaded `learned_lambdas.json` file for detailed parameter-wise coefficients.
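Parameter-wise merging gives every parameter its own learned coefficient per donor model. A toy sketch of the merge rule with scalar "parameters" (the names and numbers are illustrative, not the learned lambdas):

```python
def merge_parameter_wise(base, finetuned_models, lambdas):
    """merged[p] = base[p] + sum_k lambda_k[p] * (finetuned_k[p] - base[p])."""
    merged = dict(base)
    for name, model in finetuned_models.items():
        for p, value in model.items():
            merged[p] += lambdas[name][p] * (value - base[p])
    return merged

base = {"w": 1.0}
finetuned = {"math": {"w": 3.0}, "code": {"w": 0.0}}
lambdas = {"math": {"w": 0.5}, "code": {"w": 0.25}}
# 1.0 + 0.5*(3.0 - 1.0) + 0.25*(0.0 - 1.0) = 1.75
print(merge_parameter_wise(base, finetuned, lambdas))  # {'w': 1.75}
```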
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("lejelly/test-size-dataset-task-wise-llm-adamerge-crossentropy-mistral-7b-instrcut-math-code")
tokenizer = AutoTokenizer.from_pretrained("lejelly/test-size-dataset-task-wise-llm-adamerge-crossentropy-mistral-7b-instrcut-math-code")
# Use the model
inputs = tokenizer("Hello, how are you?", return_tensors="pt")
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0]))
```
## Training Configuration
See the uploaded `training_config.json` file for detailed training configuration.
## Citation
If you use this model, please cite the LLM-AdaMerge paper:
```bibtex
@article{llmadamerge2024,
title={LLM-AdaMerge: Adaptive Model Merging for Large Language Models},
author={...},
year={2024}
}
```
|
keras/llama3.2_instruct_3b | keras | 2025-06-17T03:24:08Z | 0 | 0 | keras-hub | [
"keras-hub",
"text-generation",
"region:us"
] | text-generation | 2025-06-16T21:49:57Z | ---
library_name: keras-hub
pipeline_tag: text-generation
---
### Model Overview
Llama 3 is a set of large language models published by Meta. Both pretrained and instruction tuned models are available, and range in size from 8 billion to 70 billion parameters. See the model card below for benchmarks, data sources, and intended use cases.
Weights are released under the [Llama 3 Community License](https://ai.meta.com/llama/license/). Keras model code is released under the [Apache 2 License](https://github.com/keras-team/keras-hub/blob/master/LICENSE).
## Links
* [Llama 3 Quickstart Notebook](https://www.kaggle.com/code/laxmareddypatlolla/llama3-quickstart-notebook)
* [Llama 3 API Documentation](https://keras.io/api/keras_hub/models/llama3/)
* [Llama 3 Model Card & Prompt Formats](https://llama.meta.com/docs/model-cards-and-prompt-formats/meta-llama-3)
* [KerasHub Beginner Guide](https://keras.io/guides/keras_hub/getting_started/)
* [KerasHub Model Publishing Guide](https://keras.io/guides/keras_hub/upload/)
## Installation
Keras and KerasHub can be installed with:
```
pip install -U -q keras-hub
pip install -U -q keras
```
Jax, TensorFlow, and Torch come preinstalled in Kaggle Notebooks. For instructions on installing them in another environment see the [Keras Getting Started](https://keras.io/getting_started/) page.
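Outside Kaggle, Keras 3 selects its backend from the `KERAS_BACKEND` environment variable, which must be set before `keras` is imported:

```python
import os

# Must be set before the first `import keras`.
os.environ["KERAS_BACKEND"] = "jax"  # or "tensorflow" / "torch"
```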
## Presets
The following model checkpoints are provided by the Keras team. Full code examples for each are available below.
| Preset name | Parameters | Description |
|-----------------------|------------|---------------|
| `llama3_8b_en` | 8.03B | 8 billion parameter, 32-layer, base LLaMA 3 model. |
| `llama3_8b_en_int8` | 8.03B | 8 billion parameter, 32-layer, base LLaMA 3 model with activation and weights quantized to int8. |
| `llama3_instruct_8b_en` | 8.03B | 8 billion parameter, 32-layer, instruction tuned LLaMA 3 model. |
| `llama3_instruct_8b_en_int8` | 8.03B | 8 billion parameter, 32-layer, instruction tuned LLaMA 3 model with activation and weights quantized to int8. |
| `llama3.1_8b` | 8.03B | 8 billion parameter, 32-layer, base LLaMA 3.1 model. |
| `llama3.1_guard_8b` | 8.03B | 8 billion parameter, 32-layer, LLaMA 3.1 fine-tuned for content safety classification. |
| `llama3.1_instruct_8b` | 8.03B | 8 billion parameter, 32-layer, instruction tuned LLaMA 3.1. |
| `llama3.2_1b` | 1.5B | 1 billion parameter, 16-layer, base LLaMA 3.2 model. |
| `llama3.2_3b` | 3.6B | 3 billion parameter, 28-layer, base LLaMA 3.2 model. |
| `llama3.2_guard_1b` | 1.5B | 1 billion parameter, 16-layer, base LLaMA 3.2 model fine-tuned for content safety classification. |
| `llama3.2_instruct_1b` | 1.5B | 1 billion parameter, 16-layer, instruction tuned LLaMA 3.2. |
| `llama3.2_instruct_3b` | 3.6B | 3 billion parameter, 28-layer, instruction tuned LLaMA 3.2. |
## Prompts
Llama-3 "instruct" models are instruction tuned on turn by turn conversations and should be prompted with examples that precisely match the training data. Specifically, you must alternate user and assistant turns that begin and end with special tokens. New lines do matter. See the following for an example:
```python
prompt = """<|start_header_id|>system<|end_header_id|>
You are a helpful AI assistant for travel tips and recommendations<|eot_id|><|start_header_id|>user<|end_header_id|>
What can you help me with?<|eot_id|><|start_header_id|>assistant<|end_header_id|>
"""
```
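If you assemble prompts by hand, the turn pattern above can be captured in a small helper (an illustrative sketch mirroring the example, not an official KerasHub API):

```python
def format_llama3_prompt(messages):
    """Assemble the Llama-3 chat format from (role, content) turns."""
    parts = []
    for role, content in messages:
        parts.append(f"<|start_header_id|>{role}<|end_header_id|>\n\n{content}<|eot_id|>")
    # End with an open assistant header so the model generates the reply.
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)
```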
For more details, please refer to this link: [Llama 3 Model Card & Prompt Formats](https://llama.meta.com/docs/model-cards-and-prompt-formats/meta-llama-3).
Base models (without instruct in the name) have no specific prompting structure, and should usually be fine-tuned for a specific task.
## Example Usage
```python
import keras
import keras_hub
import numpy as np
```
Use `generate()` to do text generation.
```python
llama_lm = keras_hub.models.Llama3CausalLM.from_preset("llama3.2_instruct_3b")
llama_lm.generate("What is Keras?", max_length=500)
# Generate with batched prompts.
llama_lm.generate(["What is Keras?", "Give me your best brownie recipe."], max_length=500)
```
Compile the `generate()` function with a custom sampler.
```python
llama_lm = keras_hub.models.Llama3CausalLM.from_preset("llama3.2_instruct_3b")
llama_lm.compile(sampler="greedy")
llama_lm.generate("I want to say", max_length=30)
llama_lm.compile(sampler=keras_hub.samplers.BeamSampler(num_beams=2))
llama_lm.generate("I want to say", max_length=30)
```
Use `generate()` without preprocessing.
```python
prompt = {
"token_ids": np.array([[306, 864, 304, 1827, 0, 0, 0, 0, 0, 0]] * 2),
# Use `"padding_mask"` to indicate values that should not be overridden.
"padding_mask": np.array([[1, 1, 1, 1, 0, 0, 0, 0, 0, 0]] * 2),
}
llama_lm = keras_hub.models.Llama3CausalLM.from_preset(
"llama3.2_instruct_3b",
preprocessor=None,
dtype="bfloat16"
)
llama_lm.generate(prompt)
```
Call `fit()` on a single batch.
```python
features = ["The quick brown fox jumped.", "I forgot my homework."]
llama_lm = keras_hub.models.Llama3CausalLM.from_preset("llama3.2_instruct_3b")
llama_lm.fit(x=features, batch_size=2)
```
Call `fit()` without preprocessing.
```python
x = {
"token_ids": np.array([[450, 4996, 17354, 1701, 29916, 12500, 287, 29889, 0, 0]] * 2),
"padding_mask": np.array([[1, 1, 1, 1, 1, 1, 1, 1, 0, 0]] * 2),
}
y = np.array([[4996, 17354, 1701, 29916, 12500, 287, 29889, 0, 0, 0]] * 2)
sw = np.array([[1, 1, 1, 1, 1, 1, 1, 0, 0, 0]] * 2)
llama_lm = keras_hub.models.Llama3CausalLM.from_preset(
"llama3.2_instruct_3b",
preprocessor=None,
dtype="bfloat16"
)
llama_lm.fit(x=x, y=y, sample_weight=sw, batch_size=2)
```
## Example Usage with Hugging Face URI
```python
import keras
import keras_hub
import numpy as np
```
Use `generate()` to do text generation.
```python
llama_lm = keras_hub.models.Llama3CausalLM.from_preset("hf://keras/llama3.2_instruct_3b")
llama_lm.generate("What is Keras?", max_length=500)
# Generate with batched prompts.
llama_lm.generate(["What is Keras?", "Give me your best brownie recipe."], max_length=500)
```
Compile the `generate()` function with a custom sampler.
```python
llama_lm = keras_hub.models.Llama3CausalLM.from_preset("hf://keras/llama3.2_instruct_3b")
llama_lm.compile(sampler="greedy")
llama_lm.generate("I want to say", max_length=30)
llama_lm.compile(sampler=keras_hub.samplers.BeamSampler(num_beams=2))
llama_lm.generate("I want to say", max_length=30)
```
Use `generate()` without preprocessing.
```python
prompt = {
"token_ids": np.array([[306, 864, 304, 1827, 0, 0, 0, 0, 0, 0]] * 2),
# Use `"padding_mask"` to indicate values that should not be overridden.
"padding_mask": np.array([[1, 1, 1, 1, 0, 0, 0, 0, 0, 0]] * 2),
}
llama_lm = keras_hub.models.Llama3CausalLM.from_preset(
"hf://keras/llama3.2_instruct_3b",
preprocessor=None,
dtype="bfloat16"
)
llama_lm.generate(prompt)
```
Call `fit()` on a single batch.
```python
features = ["The quick brown fox jumped.", "I forgot my homework."]
llama_lm = keras_hub.models.Llama3CausalLM.from_preset("hf://keras/llama3.2_instruct_3b")
llama_lm.fit(x=features, batch_size=2)
```
Call `fit()` without preprocessing.
```python
x = {
"token_ids": np.array([[450, 4996, 17354, 1701, 29916, 12500, 287, 29889, 0, 0]] * 2),
"padding_mask": np.array([[1, 1, 1, 1, 1, 1, 1, 1, 0, 0]] * 2),
}
y = np.array([[4996, 17354, 1701, 29916, 12500, 287, 29889, 0, 0, 0]] * 2)
sw = np.array([[1, 1, 1, 1, 1, 1, 1, 0, 0, 0]] * 2)
llama_lm = keras_hub.models.Llama3CausalLM.from_preset(
"hf://keras/llama3.2_instruct_3b",
preprocessor=None,
dtype="bfloat16"
)
llama_lm.fit(x=x, y=y, sample_weight=sw, batch_size=2)
```
|
keras/llama3.2_3b | keras | 2025-06-17T03:24:06Z | 0 | 0 | keras-hub | [
"keras-hub",
"text-generation",
"region:us"
] | text-generation | 2025-06-16T21:40:24Z | ---
library_name: keras-hub
pipeline_tag: text-generation
---
### Model Overview
Llama 3 is a set of large language models published by Meta. Both pretrained and instruction tuned models are available, and range in size from 8 billion to 70 billion parameters. See the model card below for benchmarks, data sources, and intended use cases.
Weights are released under the [Llama 3 Community License](https://ai.meta.com/llama/license/). Keras model code is released under the [Apache 2 License](https://github.com/keras-team/keras-hub/blob/master/LICENSE).
## Links
* [Llama 3 Quickstart Notebook](https://www.kaggle.com/code/laxmareddypatlolla/llama3-quickstart-notebook)
* [Llama 3 API Documentation](https://keras.io/api/keras_hub/models/llama3/)
* [Llama 3 Model Card & Prompt Formats](https://llama.meta.com/docs/model-cards-and-prompt-formats/meta-llama-3)
* [KerasHub Beginner Guide](https://keras.io/guides/keras_hub/getting_started/)
* [KerasHub Model Publishing Guide](https://keras.io/guides/keras_hub/upload/)
## Installation
Keras and KerasHub can be installed with:
```
pip install -U -q keras-hub
pip install -U -q keras
```
Jax, TensorFlow, and Torch come preinstalled in Kaggle Notebooks. For instructions on installing them in another environment see the [Keras Getting Started](https://keras.io/getting_started/) page.
## Presets
The following model checkpoints are provided by the Keras team. Full code examples for each are available below.
| Preset name | Parameters | Description |
|-----------------------|------------|---------------|
| `llama3_8b_en` | 8.03B | 8 billion parameter, 32-layer, base LLaMA 3 model. |
| `llama3_8b_en_int8` | 8.03B | 8 billion parameter, 32-layer, base LLaMA 3 model with activation and weights quantized to int8. |
| `llama3_instruct_8b_en` | 8.03B | 8 billion parameter, 32-layer, instruction tuned LLaMA 3 model. |
| `llama3_instruct_8b_en_int8` | 8.03B | 8 billion parameter, 32-layer, instruction tuned LLaMA 3 model with activation and weights quantized to int8. |
| `llama3.1_8b` | 8.03B | 8 billion parameter, 32-layer, base LLaMA 3.1 model. |
| `llama3.1_guard_8b` | 8.03B | 8 billion parameter, 32-layer, LLaMA 3.1 fine-tuned for content safety classification. |
| `llama3.1_instruct_8b` | 8.03B | 8 billion parameter, 32-layer, instruction tuned LLaMA 3.1. |
| `llama3.2_1b` | 1.5B | 1 billion parameter, 16-layer, base LLaMA 3.2 model. |
| `llama3.2_3b` | 3.6B | 3 billion parameter, 28-layer, base LLaMA 3.2 model. |
| `llama3.2_guard_1b` | 1.5B | 1 billion parameter, 16-layer, base LLaMA 3.2 model fine-tuned for content safety classification. |
| `llama3.2_instruct_1b` | 1.5B | 1 billion parameter, 16-layer, instruction tuned LLaMA 3.2. |
| `llama3.2_instruct_3b` | 3.6B | 3 billion parameter, 28-layer, instruction tuned LLaMA 3.2. |
## Prompts
Llama-3 "instruct" models are instruction tuned on turn by turn conversations and should be prompted with examples that precisely match the training data. Specifically, you must alternate user and assistant turns that begin and end with special tokens. New lines do matter. See the following for an example:
```python
prompt = """<|start_header_id|>system<|end_header_id|>
You are a helpful AI assistant for travel tips and recommendations<|eot_id|><|start_header_id|>user<|end_header_id|>
What can you help me with?<|eot_id|><|start_header_id|>assistant<|end_header_id|>
"""
```
For more details, please refer to this link: [Llama 3 Model Card & Prompt Formats](https://llama.meta.com/docs/model-cards-and-prompt-formats/meta-llama-3).
Base models (without instruct in the name) have no specific prompting structure, and should usually be fine-tuned for a specific task.
## Example Usage
```python
import keras
import keras_hub
import numpy as np
```
Use `generate()` to do text generation.
```python
llama_lm = keras_hub.models.Llama3CausalLM.from_preset("llama3.2_3b")
llama_lm.generate("What is Keras?", max_length=500)
# Generate with batched prompts.
llama_lm.generate(["What is Keras?", "Give me your best brownie recipe."], max_length=500)
```
Compile the `generate()` function with a custom sampler.
```python
llama_lm = keras_hub.models.Llama3CausalLM.from_preset("llama3.2_3b")
llama_lm.compile(sampler="greedy")
llama_lm.generate("I want to say", max_length=30)
llama_lm.compile(sampler=keras_hub.samplers.BeamSampler(num_beams=2))
llama_lm.generate("I want to say", max_length=30)
```
Use `generate()` without preprocessing.
```python
prompt = {
"token_ids": np.array([[306, 864, 304, 1827, 0, 0, 0, 0, 0, 0]] * 2),
# Use `"padding_mask"` to indicate values that should not be overridden.
"padding_mask": np.array([[1, 1, 1, 1, 0, 0, 0, 0, 0, 0]] * 2),
}
llama_lm = keras_hub.models.Llama3CausalLM.from_preset(
"llama3.2_3b",
preprocessor=None,
dtype="bfloat16"
)
llama_lm.generate(prompt)
```
Call `fit()` on a single batch.
```python
features = ["The quick brown fox jumped.", "I forgot my homework."]
llama_lm = keras_hub.models.Llama3CausalLM.from_preset("llama3.2_3b")
llama_lm.fit(x=features, batch_size=2)
```
Call `fit()` without preprocessing.
```python
x = {
"token_ids": np.array([[450, 4996, 17354, 1701, 29916, 12500, 287, 29889, 0, 0]] * 2),
"padding_mask": np.array([[1, 1, 1, 1, 1, 1, 1, 1, 0, 0]] * 2),
}
y = np.array([[4996, 17354, 1701, 29916, 12500, 287, 29889, 0, 0, 0]] * 2)
sw = np.array([[1, 1, 1, 1, 1, 1, 1, 0, 0, 0]] * 2)
llama_lm = keras_hub.models.Llama3CausalLM.from_preset(
"llama3.2_3b",
preprocessor=None,
dtype="bfloat16"
)
llama_lm.fit(x=x, y=y, sample_weight=sw, batch_size=2)
```
## Example Usage with Hugging Face URI
```python
import keras
import keras_hub
import numpy as np
```
Use `generate()` to do text generation.
```python
llama_lm = keras_hub.models.Llama3CausalLM.from_preset("hf://keras/llama3.2_3b")
llama_lm.generate("What is Keras?", max_length=500)
# Generate with batched prompts.
llama_lm.generate(["What is Keras?", "Give me your best brownie recipe."], max_length=500)
```
Compile the `generate()` function with a custom sampler.
```python
llama_lm = keras_hub.models.Llama3CausalLM.from_preset("hf://keras/llama3.2_3b")
llama_lm.compile(sampler="greedy")
llama_lm.generate("I want to say", max_length=30)
llama_lm.compile(sampler=keras_hub.samplers.BeamSampler(num_beams=2))
llama_lm.generate("I want to say", max_length=30)
```
Use `generate()` without preprocessing.
```python
prompt = {
"token_ids": np.array([[306, 864, 304, 1827, 0, 0, 0, 0, 0, 0]] * 2),
# Use `"padding_mask"` to indicate values that should not be overridden.
"padding_mask": np.array([[1, 1, 1, 1, 0, 0, 0, 0, 0, 0]] * 2),
}
llama_lm = keras_hub.models.Llama3CausalLM.from_preset(
"hf://keras/llama3.2_3b",
preprocessor=None,
dtype="bfloat16"
)
llama_lm.generate(prompt)
```
Call `fit()` on a single batch.
```python
features = ["The quick brown fox jumped.", "I forgot my homework."]
llama_lm = keras_hub.models.Llama3CausalLM.from_preset("hf://keras/llama3.2_3b")
llama_lm.fit(x=features, batch_size=2)
```
Call `fit()` without preprocessing.
```python
x = {
"token_ids": np.array([[450, 4996, 17354, 1701, 29916, 12500, 287, 29889, 0, 0]] * 2),
"padding_mask": np.array([[1, 1, 1, 1, 1, 1, 1, 1, 0, 0]] * 2),
}
y = np.array([[4996, 17354, 1701, 29916, 12500, 287, 29889, 0, 0, 0]] * 2)
sw = np.array([[1, 1, 1, 1, 1, 1, 1, 0, 0, 0]] * 2)
llama_lm = keras_hub.models.Llama3CausalLM.from_preset(
"hf://keras/llama3.2_3b",
preprocessor=None,
dtype="bfloat16"
)
llama_lm.fit(x=x, y=y, sample_weight=sw, batch_size=2)
```
|
keras/llama3_instruct_8b_en_int8 | keras | 2025-06-17T03:24:00Z | 18 | 1 | keras-hub | [
"keras-hub",
"text-generation-inference",
"text-generation",
"text-to-text-generation",
"text-conversation",
"en",
"license:llama3",
"region:us"
] | text-generation | 2024-10-30T21:56:12Z | ---
library_name: keras-hub
license: llama3
language:
- en
tags:
- text-generation-inference
- text-generation
- text-to-text-generation
- text-conversation
pipeline_tag: text-generation
---
### Model Overview
Llama 3 is a set of large language models published by Meta. Both pretrained and instruction tuned models are available, and range in size from 8 billion to 70 billion parameters. See the model card below for benchmarks, data sources, and intended use cases.
Weights are released under the [Llama 3 Community License](https://ai.meta.com/llama/license/). Keras model code is released under the [Apache 2 License](https://github.com/keras-team/keras-hub/blob/master/LICENSE).
## Links
* [Llama 3 Quickstart Notebook](https://www.kaggle.com/code/laxmareddypatlolla/llama3-quickstart-notebook)
* [Llama 3 API Documentation](https://keras.io/api/keras_hub/models/llama3/)
* [Llama 3 Model Card & Prompt Formats](https://llama.meta.com/docs/model-cards-and-prompt-formats/meta-llama-3)
* [KerasHub Beginner Guide](https://keras.io/guides/keras_hub/getting_started/)
* [KerasHub Model Publishing Guide](https://keras.io/guides/keras_hub/upload/)
## Installation
Keras and KerasHub can be installed with:
```
pip install -U -q keras-hub
pip install -U -q keras
```
Jax, TensorFlow, and Torch come preinstalled in Kaggle Notebooks. For instructions on installing them in another environment see the [Keras Getting Started](https://keras.io/getting_started/) page.
## Presets
The following model checkpoints are provided by the Keras team. Full code examples for each are available below.
| Preset name | Parameters | Description |
|-----------------------|------------|---------------|
| `llama3_8b_en` | 8.03B | 8 billion parameter, 32-layer, base LLaMA 3 model. |
| `llama3_8b_en_int8` | 8.03B | 8 billion parameter, 32-layer, base LLaMA 3 model with activation and weights quantized to int8. |
| `llama3_instruct_8b_en` | 8.03B | 8 billion parameter, 32-layer, instruction tuned LLaMA 3 model. |
| `llama3_instruct_8b_en_int8` | 8.03B | 8 billion parameter, 32-layer, instruction tuned LLaMA 3 model with activation and weights quantized to int8. |
| `llama3.1_8b` | 8.03B | 8 billion parameter, 32-layer, base LLaMA 3.1 model. |
| `llama3.1_guard_8b` | 8.03B | 8 billion parameter, 32-layer, LLaMA 3.1 fine-tuned for content safety classification. |
| `llama3.1_instruct_8b` | 8.03B | 8 billion parameter, 32-layer, instruction tuned LLaMA 3.1. |
| `llama3.2_1b` | 1.5B | 1 billion parameter, 16-layer, base LLaMA 3.2 model. |
| `llama3.2_3b` | 3.6B | 3 billion parameter, 28-layer, base LLaMA 3.2 model. |
| `llama3.2_guard_1b` | 1.5B | 1 billion parameter, 16-layer, base LLaMA 3.2 model fine-tuned for content safety classification. |
| `llama3.2_instruct_1b` | 1.5B | 1 billion parameter, 16-layer, instruction tuned LLaMA 3.2. |
| `llama3.2_instruct_3b` | 3.6B | 3 billion parameter, 28-layer, instruction tuned LLaMA 3.2. |
## Prompts
Llama-3 "instruct" models are instruction tuned on turn by turn conversations and should be prompted with examples that precisely match the training data. Specifically, you must alternate user and assistant turns that begin and end with special tokens. New lines do matter. See the following for an example:
```python
prompt = """<|start_header_id|>system<|end_header_id|>
You are a helpful AI assistant for travel tips and recommendations<|eot_id|><|start_header_id|>user<|end_header_id|>
What can you help me with?<|eot_id|><|start_header_id|>assistant<|end_header_id|>
"""
```
For more details, please refer to this link: [Llama 3 Model Card & Prompt Formats](https://llama.meta.com/docs/model-cards-and-prompt-formats/meta-llama-3).
Base models (without instruct in the name) have no specific prompting structure, and should usually be fine-tuned for a specific task.
## Example Usage
```python
import keras
import keras_hub
import numpy as np
```
Use `generate()` to do text generation.
```python
llama_lm = keras_hub.models.Llama3CausalLM.from_preset("llama3_instruct_8b_en_int8")
llama_lm.generate("What is Keras?", max_length=500)
# Generate with batched prompts.
llama_lm.generate(["What is Keras?", "Give me your best brownie recipe."], max_length=500)
```
Compile the `generate()` function with a custom sampler.
```python
llama_lm = keras_hub.models.Llama3CausalLM.from_preset("llama3_instruct_8b_en_int8")
llama_lm.compile(sampler="greedy")
llama_lm.generate("I want to say", max_length=30)
llama_lm.compile(sampler=keras_hub.samplers.BeamSampler(num_beams=2))
llama_lm.generate("I want to say", max_length=30)
```
Use `generate()` without preprocessing.
```python
prompt = {
"token_ids": np.array([[306, 864, 304, 1827, 0, 0, 0, 0, 0, 0]] * 2),
# Use `"padding_mask"` to indicate values that should not be overridden.
"padding_mask": np.array([[1, 1, 1, 1, 0, 0, 0, 0, 0, 0]] * 2),
}
llama_lm = keras_hub.models.Llama3CausalLM.from_preset(
"llama3_instruct_8b_en_int8",
preprocessor=None,
dtype="bfloat16"
)
llama_lm.generate(prompt)
```
Call `fit()` on a single batch.
```python
features = ["The quick brown fox jumped.", "I forgot my homework."]
llama_lm = keras_hub.models.Llama3CausalLM.from_preset("llama3_instruct_8b_en_int8")
llama_lm.fit(x=features, batch_size=2)
```
Call `fit()` without preprocessing.
```python
x = {
"token_ids": np.array([[450, 4996, 17354, 1701, 29916, 12500, 287, 29889, 0, 0]] * 2),
"padding_mask": np.array([[1, 1, 1, 1, 1, 1, 1, 1, 0, 0]] * 2),
}
y = np.array([[4996, 17354, 1701, 29916, 12500, 287, 29889, 0, 0, 0]] * 2)
sw = np.array([[1, 1, 1, 1, 1, 1, 1, 0, 0, 0]] * 2)
llama_lm = keras_hub.models.Llama3CausalLM.from_preset(
"llama3_instruct_8b_en_int8",
preprocessor=None,
dtype="bfloat16"
)
llama_lm.fit(x=x, y=y, sample_weight=sw, batch_size=2)
```
## Example Usage with Hugging Face URI
```python
import keras
import keras_hub
import numpy as np
```
Use `generate()` to do text generation.
```python
llama_lm = keras_hub.models.Llama3CausalLM.from_preset("hf://keras/llama3_instruct_8b_en_int8")
llama_lm.generate("What is Keras?", max_length=500)
# Generate with batched prompts.
llama_lm.generate(["What is Keras?", "Give me your best brownie recipe."], max_length=500)
```
Compile the `generate()` function with a custom sampler.
```python
llama_lm = keras_hub.models.Llama3CausalLM.from_preset("hf://keras/llama3_instruct_8b_en_int8")
llama_lm.compile(sampler="greedy")
llama_lm.generate("I want to say", max_length=30)
llama_lm.compile(sampler=keras_hub.samplers.BeamSampler(num_beams=2))
llama_lm.generate("I want to say", max_length=30)
```
Use `generate()` without preprocessing.
```python
prompt = {
"token_ids": np.array([[306, 864, 304, 1827, 0, 0, 0, 0, 0, 0]] * 2),
# Use `"padding_mask"` to indicate values that should not be overridden.
"padding_mask": np.array([[1, 1, 1, 1, 0, 0, 0, 0, 0, 0]] * 2),
}
llama_lm = keras_hub.models.Llama3CausalLM.from_preset(
"hf://keras/llama3_instruct_8b_en_int8",
preprocessor=None,
dtype="bfloat16"
)
llama_lm.generate(prompt)
```
Call `fit()` on a single batch.
```python
features = ["The quick brown fox jumped.", "I forgot my homework."]
llama_lm = keras_hub.models.Llama3CausalLM.from_preset("hf://keras/llama3_instruct_8b_en_int8")
llama_lm.fit(x=features, batch_size=2)
```
Call `fit()` without preprocessing.
```python
x = {
"token_ids": np.array([[450, 4996, 17354, 1701, 29916, 12500, 287, 29889, 0, 0]] * 2),
"padding_mask": np.array([[1, 1, 1, 1, 1, 1, 1, 1, 0, 0]] * 2),
}
y = np.array([[4996, 17354, 1701, 29916, 12500, 287, 29889, 0, 0, 0]] * 2)
sw = np.array([[1, 1, 1, 1, 1, 1, 1, 0, 0, 0]] * 2)
llama_lm = keras_hub.models.Llama3CausalLM.from_preset(
"hf://keras/llama3_instruct_8b_en_int8",
preprocessor=None,
dtype="bfloat16"
)
llama_lm.fit(x=x, y=y, sample_weight=sw, batch_size=2)
```
|
keras/siglip2_so400m_patch14_224 | keras | 2025-06-17T03:22:44Z | 10 | 0 | keras-hub | [
"keras-hub",
"arxiv:2303.15343",
"region:us"
] | null | 2025-03-24T21:46:59Z | ---
library_name: keras-hub
---
### Model Overview
SigLIP model pre-trained on WebLi at resolution 224x224. It was introduced in the paper [Sigmoid Loss for Language Image Pre-Training](https://arxiv.org/abs/2303.15343) by Zhai et al. and first released in this [repository](https://github.com/google-research/big_vision).
SigLIP is a multimodal model like [CLIP](https://huggingface.co/docs/transformers/model_doc/clip), but trained with a better loss function. The sigmoid loss operates solely on image-text pairs and does not require a global view of the pairwise similarities for normalization. This allows scaling up the batch size further, while also performing better at smaller batch sizes.
A TLDR of SigLIP by one of the authors can be found [here](https://twitter.com/giffmana/status/1692641733459267713).
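The sigmoid loss can be sketched in a few lines of NumPy. This is an illustrative reconstruction of the loss from the paper, not KerasHub's implementation, and the temperature and bias values are placeholders:

```python
import numpy as np

def siglip_sigmoid_loss(img_emb, txt_emb, temperature=10.0, bias=-10.0):
    """Pairwise sigmoid loss: every image/text pair is an independent
    binary classification (match vs. non-match), so no batch-wide
    softmax normalization is needed."""
    # L2-normalize the embeddings.
    img = img_emb / np.linalg.norm(img_emb, axis=-1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=-1, keepdims=True)
    logits = temperature * img @ txt.T + bias   # (n, n) pair logits
    labels = 2.0 * np.eye(len(img)) - 1.0       # +1 on-diagonal, -1 off-diagonal
    # -log sigmoid(labels * logits) == softplus(-labels * logits),
    # averaged over all pairs in the batch.
    return np.mean(np.log1p(np.exp(-labels * logits)))

rng = np.random.default_rng(0)
loss = siglip_sigmoid_loss(rng.normal(size=(4, 8)), rng.normal(size=(4, 8)))
```

Because each pair contributes independently, the loss decomposes across devices, which is what makes very large batch sizes practical.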
Weights are released under the [Apache 2 License](https://github.com/keras-team/keras-hub/blob/master/LICENSE). Keras model code is released under the [Apache 2 License](https://github.com/keras-team/keras-hub/blob/master/LICENSE).
## Links
* [SigLIP Quickstart Notebook](https://www.kaggle.com/code/laxmareddypatlolla/siglip-quickstart-notebook-with-hub)
* [SigLIP API Documentation](https://keras.io/keras_hub/api/models/siglip/)
* [SigLIP Model Card](https://arxiv.org/abs/2303.15343)
* [KerasHub Beginner Guide](https://keras.io/guides/keras_hub/getting_started/)
* [KerasHub Model Publishing Guide](https://keras.io/guides/keras_hub/upload/)
## Installation
Keras and KerasHub can be installed with:
```
pip install -U -q keras-hub
pip install -U -q keras
```
Jax, TensorFlow, and Torch come preinstalled in Kaggle Notebooks. For instructions on installing them in another environment see the [Keras Getting Started](https://keras.io/getting_started/) page.
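Keras selects its backend from the `KERAS_BACKEND` environment variable, which must be set before the first `import keras`. A minimal example:

```python
import os

# Choose the backend ("jax", "tensorflow", or "torch") before importing
# keras; the variable is read only once, at import time.
os.environ["KERAS_BACKEND"] = "jax"
```

After this, `import keras` and all `keras_hub` presets will run on the chosen backend.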
## Presets
The following model checkpoints are provided by the Keras team. Full code examples for each are available below.
| Preset name | Parameters | Description |
|---------------------------------------|------------|--------------------------------------------------------------------------------------------------------------|
| siglip_base_patch16_224 | 203.16M | 200 million parameter, image size 224, pre-trained on WebLi. |
| siglip_base_patch16_256 | 203.20M | 200 million parameter, image size 256, pre-trained on WebLi. |
| siglip_base_patch16_384 | 203.45M | 200 million parameter, image size 384, pre-trained on WebLi. |
| siglip_base_patch16_512 | 203.79M | 200 million parameter, image size 512, pre-trained on WebLi. |
| siglip_base_patch16_256_multilingual | 370.63M | 370 million parameter, image size 256, pre-trained on WebLi. |
| siglip2_base_patch16_224 | 375.19M | 375 million parameter, patch size 16, image size 224, pre-trained on WebLi. |
| siglip2_base_patch16_256 | 375.23M | 375 million parameter, patch size 16, image size 256, pre-trained on WebLi. |
| siglip2_base_patch32_256 | 376.86M | 376 million parameter, patch size 32, image size 256, pre-trained on WebLi. |
| siglip2_base_patch16_384 | 376.86M | 376 million parameter, patch size 16, image size 384, pre-trained on WebLi. |
| siglip_large_patch16_256 | 652.15M | 652 million parameter, image size 256, pre-trained on WebLi. |
| siglip_large_patch16_384 | 652.48M | 652 million parameter, image size 384, pre-trained on WebLi. |
| siglip_so400m_patch14_224 | 877.36M | 877 million parameter, image size 224, shape-optimized version, pre-trained on WebLi. |
| siglip_so400m_patch14_384 | 877.96M | 877 million parameter, image size 384, shape-optimized version, pre-trained on WebLi. |
| siglip2_large_patch16_256 | 881.53M | 881 million parameter, patch size 16, image size 256, pre-trained on WebLi. |
| siglip2_large_patch16_384 | 881.86M | 881 million parameter, patch size 16, image size 384, pre-trained on WebLi. |
| siglip2_large_patch16_512 | 882.31M | 882 million parameter, patch size 16, image size 512, pre-trained on WebLi. |
| siglip_so400m_patch16_256_i18n | 1.13B | 1.1 billion parameter, image size 256, shape-optimized version, pre-trained on WebLi. |
| siglip2_so400m_patch14_224 | 1.14B | 1.1 billion parameter, patch size 14, image size 224, shape-optimized version, pre-trained on WebLi. |
| siglip2_so400m_patch16_256 | 1.14B | 1.1 billion parameter, patch size 16, image size 256, shape-optimized version, pre-trained on WebLi. |
| siglip2_so400m_patch14_384 | 1.14B | 1.1 billion parameter, patch size 14, image size 384, shape-optimized version, pre-trained on WebLi. |
| siglip2_so400m_patch16_384 | 1.14B | 1.1 billion parameter, patch size 16, image size 384, shape-optimized version, pre-trained on WebLi. |
| siglip2_so400m_patch16_512 | 1.14B | 1.1 billion parameter, patch size 16, image size 512, shape-optimized version, pre-trained on WebLi. |
| siglip2_giant_opt_patch16_256 | 1.87B | 1.8 billion parameter, patch size 16, image size 256, pre-trained on WebLi. |
| siglip2_giant_opt_patch16_384 | 1.87B | 1.8 billion parameter, patch size 16, image size 384, pre-trained on WebLi. |
## Example Usage
```Python
import keras
import numpy as np
import matplotlib.pyplot as plt
from keras_hub.models import SigLIPBackbone, SigLIPTokenizer
from keras_hub.layers import SigLIPImageConverter
# instantiate the model and preprocessing tools
siglip = SigLIPBackbone.from_preset("siglip2_so400m_patch14_224")
tokenizer = SigLIPTokenizer.from_preset("siglip2_so400m_patch14_224",
sequence_length=64)
image_converter = SigLIPImageConverter.from_preset("siglip2_so400m_patch14_224")
# obtain tokens for some input text
tokens = tokenizer.tokenize(["mountains", "cat on tortoise", "house"])
# preprocess image and text
image = keras.utils.load_img("cat.jpg")
image = image_converter(np.array([image]).astype(float))
# query the model for similarities
siglip({
"images": image,
"token_ids": tokens,
})
```
## Example Usage with Hugging Face URI
```Python
import keras
import numpy as np
import matplotlib.pyplot as plt
from keras_hub.models import SigLIPBackbone, SigLIPTokenizer
from keras_hub.layers import SigLIPImageConverter
# instantiate the model and preprocessing tools
siglip = SigLIPBackbone.from_preset("hf://keras/siglip2_so400m_patch14_224")
tokenizer = SigLIPTokenizer.from_preset("hf://keras/siglip2_so400m_patch14_224",
sequence_length=64)
image_converter = SigLIPImageConverter.from_preset("hf://keras/siglip2_so400m_patch14_224")
# obtain tokens for some input text
tokens = tokenizer.tokenize(["mountains", "cat on tortoise", "house"])
# preprocess image and text
image = keras.utils.load_img("cat.jpg")
image = image_converter(np.array([image]).astype(float))
# query the model for similarities
siglip({
"images": image,
"token_ids": tokens,
})
```
|
keras/siglip2_giant_opt_patch16_384 | keras | 2025-06-17T03:22:42Z | 6 | 0 | keras-hub | [
"keras-hub",
"arxiv:2303.15343",
"region:us"
] | null | 2025-03-24T21:31:55Z | ---
library_name: keras-hub
---
### Model Overview
SigLIP model pre-trained on WebLi at resolution 384x384. It was introduced in the paper [Sigmoid Loss for Language Image Pre-Training](https://arxiv.org/abs/2303.15343) by Zhai et al. and first released in this [repository](https://github.com/google-research/big_vision).
SigLIP is a multimodal model like [CLIP](https://huggingface.co/docs/transformers/model_doc/clip), but trained with a better loss function. The sigmoid loss operates solely on image-text pairs and does not require a global view of the pairwise similarities for normalization. This allows scaling up the batch size further, while also performing better at smaller batch sizes.
A TLDR of SigLIP by one of the authors can be found [here](https://twitter.com/giffmana/status/1692641733459267713).
Weights are released under the [Apache 2 License](https://github.com/keras-team/keras-hub/blob/master/LICENSE). Keras model code is released under the [Apache 2 License](https://github.com/keras-team/keras-hub/blob/master/LICENSE).
## Links
* [SigLIP Quickstart Notebook](https://www.kaggle.com/code/laxmareddypatlolla/siglip-quickstart-notebook-with-hub)
* [SigLIP API Documentation](https://keras.io/keras_hub/api/models/siglip/)
* [SigLIP Model Card](https://arxiv.org/abs/2303.15343)
* [KerasHub Beginner Guide](https://keras.io/guides/keras_hub/getting_started/)
* [KerasHub Model Publishing Guide](https://keras.io/guides/keras_hub/upload/)
## Installation
Keras and KerasHub can be installed with:
```
pip install -U -q keras-hub
pip install -U -q keras
```
Jax, TensorFlow, and Torch come preinstalled in Kaggle Notebooks. For instructions on installing them in another environment see the [Keras Getting Started](https://keras.io/getting_started/) page.
## Presets
The following model checkpoints are provided by the Keras team. Full code examples for each are available below.
| Preset name | Parameters | Description |
|---------------------------------------|------------|--------------------------------------------------------------------------------------------------------------|
| siglip_base_patch16_224 | 203.16M | 200 million parameter, image size 224, pre-trained on WebLi. |
| siglip_base_patch16_256 | 203.20M | 200 million parameter, image size 256, pre-trained on WebLi. |
| siglip_base_patch16_384 | 203.45M | 200 million parameter, image size 384, pre-trained on WebLi. |
| siglip_base_patch16_512 | 203.79M | 200 million parameter, image size 512, pre-trained on WebLi. |
| siglip_base_patch16_256_multilingual | 370.63M | 370 million parameter, image size 256, pre-trained on WebLi. |
| siglip2_base_patch16_224 | 375.19M | 375 million parameter, patch size 16, image size 224, pre-trained on WebLi. |
| siglip2_base_patch16_256 | 375.23M | 375 million parameter, patch size 16, image size 256, pre-trained on WebLi. |
| siglip2_base_patch32_256 | 376.86M | 376 million parameter, patch size 32, image size 256, pre-trained on WebLi. |
| siglip2_base_patch16_384 | 376.86M | 376 million parameter, patch size 16, image size 384, pre-trained on WebLi. |
| siglip_large_patch16_256 | 652.15M | 652 million parameter, image size 256, pre-trained on WebLi. |
| siglip_large_patch16_384 | 652.48M | 652 million parameter, image size 384, pre-trained on WebLi. |
| siglip_so400m_patch14_224 | 877.36M | 877 million parameter, image size 224, shape-optimized version, pre-trained on WebLi. |
| siglip_so400m_patch14_384 | 877.96M | 877 million parameter, image size 384, shape-optimized version, pre-trained on WebLi. |
| siglip2_large_patch16_256 | 881.53M | 881 million parameter, patch size 16, image size 256, pre-trained on WebLi. |
| siglip2_large_patch16_384 | 881.86M | 881 million parameter, patch size 16, image size 384, pre-trained on WebLi. |
| siglip2_large_patch16_512 | 882.31M | 882 million parameter, patch size 16, image size 512, pre-trained on WebLi. |
| siglip_so400m_patch16_256_i18n | 1.13B | 1.1 billion parameter, image size 256, shape-optimized version, pre-trained on WebLi. |
| siglip2_so400m_patch14_224 | 1.14B | 1.1 billion parameter, patch size 14, image size 224, shape-optimized version, pre-trained on WebLi. |
| siglip2_so400m_patch16_256 | 1.14B | 1.1 billion parameter, patch size 16, image size 256, shape-optimized version, pre-trained on WebLi. |
| siglip2_so400m_patch14_384 | 1.14B | 1.1 billion parameter, patch size 14, image size 384, shape-optimized version, pre-trained on WebLi. |
| siglip2_so400m_patch16_384 | 1.14B | 1.1 billion parameter, patch size 16, image size 384, shape-optimized version, pre-trained on WebLi. |
| siglip2_so400m_patch16_512 | 1.14B | 1.1 billion parameter, patch size 16, image size 512, shape-optimized version, pre-trained on WebLi. |
| siglip2_giant_opt_patch16_256 | 1.87B | 1.8 billion parameter, patch size 16, image size 256, pre-trained on WebLi. |
| siglip2_giant_opt_patch16_384 | 1.87B | 1.8 billion parameter, patch size 16, image size 384, pre-trained on WebLi. |
## Example Usage
```Python
import keras
import numpy as np
import matplotlib.pyplot as plt
from keras_hub.models import SigLIPBackbone, SigLIPTokenizer
from keras_hub.layers import SigLIPImageConverter
# instantiate the model and preprocessing tools
siglip = SigLIPBackbone.from_preset("siglip2_giant_opt_patch16_384")
tokenizer = SigLIPTokenizer.from_preset("siglip2_giant_opt_patch16_384",
sequence_length=64)
image_converter = SigLIPImageConverter.from_preset("siglip2_giant_opt_patch16_384")
# obtain tokens for some input text
tokens = tokenizer.tokenize(["mountains", "cat on tortoise", "house"])
# preprocess image and text
image = keras.utils.load_img("cat.jpg")
image = image_converter(np.array([image]).astype(float))
# query the model for similarities
siglip({
"images": image,
"token_ids": tokens,
})
```
## Example Usage with Hugging Face URI
```Python
import keras
import numpy as np
import matplotlib.pyplot as plt
from keras_hub.models import SigLIPBackbone, SigLIPTokenizer
from keras_hub.layers import SigLIPImageConverter
# instantiate the model and preprocessing tools
siglip = SigLIPBackbone.from_preset("hf://keras/siglip2_giant_opt_patch16_384")
tokenizer = SigLIPTokenizer.from_preset("hf://keras/siglip2_giant_opt_patch16_384",
sequence_length=64)
image_converter = SigLIPImageConverter.from_preset("hf://keras/siglip2_giant_opt_patch16_384")
# obtain tokens for some input text
tokens = tokenizer.tokenize(["mountains", "cat on tortoise", "house"])
# preprocess image and text
image = keras.utils.load_img("cat.jpg")
image = image_converter(np.array([image]).astype(float))
# query the model for similarities
siglip({
"images": image,
"token_ids": tokens,
})
```
|
keras/siglip2_large_patch16_256 | keras | 2025-06-17T03:22:37Z | 10 | 0 | keras-hub | [
"keras-hub",
"arxiv:2303.15343",
"region:us"
] | null | 2025-03-24T21:36:58Z | ---
library_name: keras-hub
---
### Model Overview
SigLIP model pre-trained on WebLi at resolution 256x256. It was introduced in the paper [Sigmoid Loss for Language Image Pre-Training](https://arxiv.org/abs/2303.15343) by Zhai et al. and first released in this [repository](https://github.com/google-research/big_vision).
SigLIP is a multimodal model like [CLIP](https://huggingface.co/docs/transformers/model_doc/clip), but trained with a better loss function. The sigmoid loss operates solely on image-text pairs and does not require a global view of the pairwise similarities for normalization. This allows scaling up the batch size further, while also performing better at smaller batch sizes.
A TLDR of SigLIP by one of the authors can be found [here](https://twitter.com/giffmana/status/1692641733459267713).
Weights are released under the [Apache 2 License](https://github.com/keras-team/keras-hub/blob/master/LICENSE). Keras model code is released under the [Apache 2 License](https://github.com/keras-team/keras-hub/blob/master/LICENSE).
## Links
* [SigLIP Quickstart Notebook](https://www.kaggle.com/code/laxmareddypatlolla/siglip-quickstart-notebook-with-hub)
* [SigLIP API Documentation](https://keras.io/keras_hub/api/models/siglip/)
* [SigLIP Model Card](https://arxiv.org/abs/2303.15343)
* [KerasHub Beginner Guide](https://keras.io/guides/keras_hub/getting_started/)
* [KerasHub Model Publishing Guide](https://keras.io/guides/keras_hub/upload/)
## Installation
Keras and KerasHub can be installed with:
```
pip install -U -q keras-hub
pip install -U -q keras
```
Jax, TensorFlow, and Torch come preinstalled in Kaggle Notebooks. For instructions on installing them in another environment see the [Keras Getting Started](https://keras.io/getting_started/) page.
## Presets
The following model checkpoints are provided by the Keras team. Full code examples for each are available below.
| Preset name | Parameters | Description |
|---------------------------------------|------------|--------------------------------------------------------------------------------------------------------------|
| siglip_base_patch16_224 | 203.16M | 200 million parameter, image size 224, pre-trained on WebLi. |
| siglip_base_patch16_256 | 203.20M | 200 million parameter, image size 256, pre-trained on WebLi. |
| siglip_base_patch16_384 | 203.45M | 200 million parameter, image size 384, pre-trained on WebLi. |
| siglip_base_patch16_512 | 203.79M | 200 million parameter, image size 512, pre-trained on WebLi. |
| siglip_base_patch16_256_multilingual | 370.63M | 370 million parameter, image size 256, pre-trained on WebLi. |
| siglip2_base_patch16_224 | 375.19M | 375 million parameter, patch size 16, image size 224, pre-trained on WebLi. |
| siglip2_base_patch16_256 | 375.23M | 375 million parameter, patch size 16, image size 256, pre-trained on WebLi. |
| siglip2_base_patch32_256 | 376.86M | 376 million parameter, patch size 32, image size 256, pre-trained on WebLi. |
| siglip2_base_patch16_384 | 376.86M | 376 million parameter, patch size 16, image size 384, pre-trained on WebLi. |
| siglip_large_patch16_256 | 652.15M | 652 million parameter, image size 256, pre-trained on WebLi. |
| siglip_large_patch16_384 | 652.48M | 652 million parameter, image size 384, pre-trained on WebLi. |
| siglip_so400m_patch14_224 | 877.36M | 877 million parameter, image size 224, shape-optimized version, pre-trained on WebLi. |
| siglip_so400m_patch14_384 | 877.96M | 877 million parameter, image size 384, shape-optimized version, pre-trained on WebLi. |
| siglip2_large_patch16_256 | 881.53M | 881 million parameter, patch size 16, image size 256, pre-trained on WebLi. |
| siglip2_large_patch16_384 | 881.86M | 881 million parameter, patch size 16, image size 384, pre-trained on WebLi. |
| siglip2_large_patch16_512 | 882.31M | 882 million parameter, patch size 16, image size 512, pre-trained on WebLi. |
| siglip_so400m_patch16_256_i18n | 1.13B | 1.1 billion parameter, image size 256, shape-optimized version, pre-trained on WebLi. |
| siglip2_so400m_patch14_224 | 1.14B | 1.1 billion parameter, patch size 14, image size 224, shape-optimized version, pre-trained on WebLi. |
| siglip2_so400m_patch16_256 | 1.14B | 1.1 billion parameter, patch size 16, image size 256, shape-optimized version, pre-trained on WebLi. |
| siglip2_so400m_patch14_384 | 1.14B | 1.1 billion parameter, patch size 14, image size 384, shape-optimized version, pre-trained on WebLi. |
| siglip2_so400m_patch16_384 | 1.14B | 1.1 billion parameter, patch size 16, image size 384, shape-optimized version, pre-trained on WebLi. |
| siglip2_so400m_patch16_512 | 1.14B | 1.1 billion parameter, patch size 16, image size 512, shape-optimized version, pre-trained on WebLi. |
| siglip2_giant_opt_patch16_256 | 1.87B | 1.8 billion parameter, patch size 16, image size 256, pre-trained on WebLi. |
| siglip2_giant_opt_patch16_384 | 1.87B | 1.8 billion parameter, patch size 16, image size 384, pre-trained on WebLi. |
## Example Usage
```Python
import keras
import numpy as np
import matplotlib.pyplot as plt
from keras_hub.models import SigLIPBackbone, SigLIPTokenizer
from keras_hub.layers import SigLIPImageConverter
# instantiate the model and preprocessing tools
siglip = SigLIPBackbone.from_preset("siglip2_large_patch16_256")
tokenizer = SigLIPTokenizer.from_preset("siglip2_large_patch16_256",
sequence_length=64)
image_converter = SigLIPImageConverter.from_preset("siglip2_large_patch16_256")
# obtain tokens for some input text
tokens = tokenizer.tokenize(["mountains", "cat on tortoise", "house"])
# preprocess image and text
image = keras.utils.load_img("cat.jpg")
image = image_converter(np.array([image]).astype(float))
# query the model for similarities
siglip({
"images": image,
"token_ids": tokens,
})
```
## Example Usage with Hugging Face URI
```Python
import keras
import numpy as np
import matplotlib.pyplot as plt
from keras_hub.models import SigLIPBackbone, SigLIPTokenizer
from keras_hub.layers import SigLIPImageConverter
# instantiate the model and preprocessing tools
siglip = SigLIPBackbone.from_preset("hf://keras/siglip2_large_patch16_256")
tokenizer = SigLIPTokenizer.from_preset("hf://keras/siglip2_large_patch16_256",
sequence_length=64)
image_converter = SigLIPImageConverter.from_preset("hf://keras/siglip2_large_patch16_256")
# obtain tokens for some input text
tokens = tokenizer.tokenize(["mountains", "cat on tortoise", "house"])
# preprocess image and text
image = keras.utils.load_img("cat.jpg")
image = image_converter(np.array([image]).astype(float))
# query the model for similarities
siglip({
"images": image,
"token_ids": tokens,
})
```
|
keras/siglip2_base_patch16_256 | keras | 2025-06-17T03:22:33Z | 4 | 0 | keras-hub | [
"keras-hub",
"arxiv:2303.15343",
"region:us"
] | null | 2025-03-24T21:19:18Z | ---
library_name: keras-hub
---
### Model Overview
SigLIP model pre-trained on WebLi at resolution 256x256. It was introduced in the paper [Sigmoid Loss for Language Image Pre-Training](https://arxiv.org/abs/2303.15343) by Zhai et al. and first released in this [repository](https://github.com/google-research/big_vision).
SigLIP is a multimodal model like [CLIP](https://huggingface.co/docs/transformers/model_doc/clip), but trained with a better loss function. The sigmoid loss operates solely on image-text pairs and does not require a global view of the pairwise similarities for normalization. This allows scaling up the batch size further, while also performing better at smaller batch sizes.
A TLDR of SigLIP by one of the authors can be found [here](https://twitter.com/giffmana/status/1692641733459267713).
Weights are released under the [Apache 2 License](https://github.com/keras-team/keras-hub/blob/master/LICENSE). Keras model code is released under the [Apache 2 License](https://github.com/keras-team/keras-hub/blob/master/LICENSE).
## Links
* [SigLIP Quickstart Notebook](https://www.kaggle.com/code/laxmareddypatlolla/siglip-quickstart-notebook-with-hub)
* [SigLIP API Documentation](https://keras.io/keras_hub/api/models/siglip/)
* [SigLIP Model Card](https://arxiv.org/abs/2303.15343)
* [KerasHub Beginner Guide](https://keras.io/guides/keras_hub/getting_started/)
* [KerasHub Model Publishing Guide](https://keras.io/guides/keras_hub/upload/)
## Installation
Keras and KerasHub can be installed with:
```
pip install -U -q keras-hub
pip install -U -q keras
```
Jax, TensorFlow, and Torch come preinstalled in Kaggle Notebooks. For instructions on installing them in another environment see the [Keras Getting Started](https://keras.io/getting_started/) page.
## Presets
The following model checkpoints are provided by the Keras team. Full code examples for each are available below.
| Preset name | Parameters | Description |
|---------------------------------------|------------|--------------------------------------------------------------------------------------------------------------|
| siglip_base_patch16_224 | 203.16M | 200 million parameter, image size 224, pre-trained on WebLi. |
| siglip_base_patch16_256 | 203.20M | 200 million parameter, image size 256, pre-trained on WebLi. |
| siglip_base_patch16_384 | 203.45M | 200 million parameter, image size 384, pre-trained on WebLi. |
| siglip_base_patch16_512 | 203.79M | 200 million parameter, image size 512, pre-trained on WebLi. |
| siglip_base_patch16_256_multilingual | 370.63M | 370 million parameter, image size 256, pre-trained on WebLi. |
| siglip2_base_patch16_224 | 375.19M | 375 million parameter, patch size 16, image size 224, pre-trained on WebLi. |
| siglip2_base_patch16_256 | 375.23M | 375 million parameter, patch size 16, image size 256, pre-trained on WebLi. |
| siglip2_base_patch32_256 | 376.86M | 376 million parameter, patch size 32, image size 256, pre-trained on WebLi. |
| siglip2_base_patch16_384 | 376.86M | 376 million parameter, patch size 16, image size 384, pre-trained on WebLi. |
| siglip_large_patch16_256 | 652.15M | 652 million parameter, image size 256, pre-trained on WebLi. |
| siglip_large_patch16_384 | 652.48M | 652 million parameter, image size 384, pre-trained on WebLi. |
| siglip_so400m_patch14_224 | 877.36M | 877 million parameter, image size 224, shape-optimized version, pre-trained on WebLi. |
| siglip_so400m_patch14_384 | 877.96M | 877 million parameter, image size 384, shape-optimized version, pre-trained on WebLi. |
| siglip2_large_patch16_256 | 881.53M | 881 million parameter, patch size 16, image size 256, pre-trained on WebLi. |
| siglip2_large_patch16_384 | 881.86M | 881 million parameter, patch size 16, image size 384, pre-trained on WebLi. |
| siglip2_large_patch16_512 | 882.31M | 882 million parameter, patch size 16, image size 512, pre-trained on WebLi. |
| siglip_so400m_patch16_256_i18n | 1.13B | 1.1 billion parameter, image size 256, shape-optimized version, pre-trained on WebLi. |
| siglip2_so400m_patch14_224 | 1.14B | 1.1 billion parameter, patch size 14, image size 224, shape-optimized version, pre-trained on WebLi. |
| siglip2_so400m_patch16_256 | 1.14B | 1.1 billion parameter, patch size 16, image size 256, shape-optimized version, pre-trained on WebLi. |
| siglip2_so400m_patch14_384 | 1.14B | 1.1 billion parameter, patch size 14, image size 384, shape-optimized version, pre-trained on WebLi. |
| siglip2_so400m_patch16_384 | 1.14B | 1.1 billion parameter, patch size 16, image size 384, shape-optimized version, pre-trained on WebLi. |
| siglip2_so400m_patch16_512 | 1.14B | 1.1 billion parameter, patch size 16, image size 512, shape-optimized version, pre-trained on WebLi. |
| siglip2_giant_opt_patch16_256 | 1.87B | 1.8 billion parameter, patch size 16, image size 256, pre-trained on WebLi. |
| siglip2_giant_opt_patch16_384 | 1.87B | 1.8 billion parameter, patch size 16, image size 384, pre-trained on WebLi. |
## Example Usage
```Python
import keras
import numpy as np
import matplotlib.pyplot as plt
from keras_hub.models import SigLIPBackbone, SigLIPTokenizer
from keras_hub.layers import SigLIPImageConverter
# instantiate the model and preprocessing tools
siglip = SigLIPBackbone.from_preset("siglip2_base_patch16_256")
tokenizer = SigLIPTokenizer.from_preset("siglip2_base_patch16_256",
sequence_length=64)
image_converter = SigLIPImageConverter.from_preset("siglip2_base_patch16_256")
# obtain tokens for some input text
tokens = tokenizer.tokenize(["mountains", "cat on tortoise", "house"])
# preprocess image and text
image = keras.utils.load_img("cat.jpg")
image = image_converter(np.array([image]).astype(float))
# query the model for similarities
siglip({
"images": image,
"token_ids": tokens,
})
```
## Example Usage with Hugging Face URI
```Python
import keras
import numpy as np
import matplotlib.pyplot as plt
from keras_hub.models import SigLIPBackbone, SigLIPTokenizer
from keras_hub.layers import SigLIPImageConverter
# instantiate the model and preprocessing tools
siglip = SigLIPBackbone.from_preset("hf://keras/siglip2_base_patch16_256")
tokenizer = SigLIPTokenizer.from_preset("hf://keras/siglip2_base_patch16_256",
sequence_length=64)
image_converter = SigLIPImageConverter.from_preset("hf://keras/siglip2_base_patch16_256")
# obtain tokens for some input text
tokens = tokenizer.tokenize(["mountains", "cat on tortoise", "house"])
# preprocess image and text
image = keras.utils.load_img("cat.jpg")
image = image_converter(np.array([image]).astype(float))
# query the model for similarities
siglip({
"images": image,
"token_ids": tokens,
})
```
|
keras/moonshine_tiny_en | keras | 2025-06-17T03:21:44Z | 0 | 0 | keras-hub | [
"keras-hub",
"arxiv:2410.15608",
"region:us"
] | null | 2025-06-17T00:47:58Z | ---
library_name: keras-hub
---
### Model Overview
# Model Summary
The Moonshine models are trained for the speech recognition task and can transcribe English speech audio into English text. Useful Sensors developed the models to support its goal of building real-time speech transcription products on low-cost hardware. There are two models of different sizes and capabilities, summarized in the presets table.
Weights are released under the [MIT License](https://www.mit.edu/~amini/LICENSE.md). Keras model code is released under the [Apache 2 License](https://github.com/keras-team/keras-hub/blob/master/LICENSE).
## Links
* [Moonshine Quickstart Notebook](https://www.kaggle.com/code/laxmareddypatlolla/moonshine-quickstart-notebook)
* [Moonshine API Documentation](https://keras.io/keras_hub/api/models/moonshine/)
* [Moonshine Model Card](https://arxiv.org/abs/2410.15608)
* [KerasHub Beginner Guide](https://keras.io/guides/keras_hub/getting_started/)
* [KerasHub Model Publishing Guide](https://keras.io/guides/keras_hub/upload/)
## Installation
Keras and KerasHub can be installed with:
```
pip install -U -q keras-hub
pip install -U -q keras
```
Jax, TensorFlow, and Torch come preinstalled in Kaggle Notebooks. For instructions on installing them in another environment see the [Keras Getting Started](https://keras.io/getting_started/) page.
## Presets
The following model checkpoints are provided by the Keras team. Full code examples for each are available below.
| Preset name | Parameters | Description |
|---------------------------------------|------------|--------------------------------------------------------------------------------------------------------------|
| moonshine_base_en | 61.5M | Moonshine base model for English speech recognition. Developed by Useful Sensors for real-time transcription. |
| moonshine_tiny_en | 27.1M | Moonshine tiny model for English speech recognition. Developed by Useful Sensors for real-time transcription. |
## Example Usage
```Python
import os
import keras
import keras_hub
import numpy as np
import librosa
import tensorflow as tf
from keras_hub.src.models.moonshine.moonshine_audio_to_text import (
MoonshineAudioToText,
)
# Custom backbone.
backbone = keras_hub.models.MoonshineBackbone(
vocabulary_size=10000,
filter_dim=256,
encoder_num_layers=6,
decoder_num_layers=6,
hidden_dim=256,
intermediate_dim=512,
encoder_num_heads=8,
decoder_num_heads=8,
feedforward_expansion_factor=4,
decoder_use_swiglu_activation=True,
encoder_use_swiglu_activation=False,
)
# Audio features as input (e.g., from MoonshineAudioConverter).
outputs = backbone(
{
"encoder_input_values": np.zeros((1, 16000, 1)),
"encoder_padding_mask": np.ones((1, 16000), dtype=bool),
"decoder_token_ids": np.zeros((1, 20), dtype=np.int32),
"decoder_padding_mask": np.ones((1, 20), dtype=bool),
}
)
# Config for test.
BATCH_SIZE = 2
AUDIO_PATH = "path/to/audio_file.wav"
# Load and prepare audio data.
audio, sr = librosa.load(AUDIO_PATH, sr=16000, mono=True)
audio_tensor = tf.expand_dims(audio, axis=-1)
audio_tensor = tf.convert_to_tensor(audio_tensor, dtype=tf.float32)
single_audio_input_batched = tf.expand_dims(audio_tensor, axis=0)
audio_batch = tf.repeat(single_audio_input_batched, BATCH_SIZE, axis=0)
dummy_texts = ["Sample transcription.", "Another sample transcription."]
# Create tf.data.Dataset.
audio_ds = tf.data.Dataset.from_tensor_slices(audio_batch)
text_ds = tf.data.Dataset.from_tensor_slices(dummy_texts)
audio_dataset = (
tf.data.Dataset.zip((audio_ds, text_ds))
.map(lambda audio, txt: {"audio": audio, "text": txt})
.batch(BATCH_SIZE)
)
print("Audio dataset created.")
# Load pretrained Moonshine model.
audio_to_text = MoonshineAudioToText.from_preset("moonshine_tiny_en")
# Generation examples.
generated_text_single = audio_to_text.generate(
{"audio": single_audio_input_batched}
)
print(f"Generated text (single audio): {generated_text_single}")
generated_text_batch = audio_to_text.generate({"audio": audio_batch})
print(f"Generated text (batch audio): {generated_text_batch}")
# Compile the generate() function with a custom sampler.
audio_to_text.compile(sampler="top_k")
generated_text_top_k = audio_to_text.generate(
{"audio": single_audio_input_batched}
)
print(f"Generated text (top_k sampler): {generated_text_top_k}")
audio_to_text.compile(sampler="greedy")
generated_text_greedy = audio_to_text.generate(
{"audio": single_audio_input_batched}
)
print(f"Generated text (greedy sampler): {generated_text_greedy}")
# Fine-tuning example.
audio_to_text.compile(
optimizer=keras.optimizers.Adam(learning_rate=1e-5),
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
weighted_metrics=[keras.metrics.SparseCategoricalAccuracy()],
)
history = audio_to_text.fit(audio_dataset, steps_per_epoch=1, epochs=1)
print(f"Fine-tuning completed. Training history: {history.history}")
# Detached preprocessing.
original_preprocessor = audio_to_text.preprocessor
audio_to_text.preprocessor = None
preprocessed_batch = original_preprocessor.generate_preprocess(
{"audio": audio_batch}
)
print(f"Preprocessed batch keys: {preprocessed_batch.keys()}")
stop_ids = (original_preprocessor.tokenizer.end_token_id,)
generated_batch_tokens = audio_to_text.generate(
preprocessed_batch, stop_token_ids=stop_ids
)
print(f"Generated tokens keys: {generated_batch_tokens.keys()}")
final_strings = original_preprocessor.generate_postprocess(
generated_batch_tokens
)
print(f"Final generated strings (detached): {final_strings}")
audio_to_text.preprocessor = original_preprocessor
print("Preprocessor reattached.")
```
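The backbone expects mono 16 kHz audio shaped `(batch, samples, 1)` together with a boolean padding mask, as in the dummy inputs above. A minimal NumPy sketch of that shaping (a hypothetical helper, not part of KerasHub):

```python
import numpy as np

def prepare_audio(waveform, batch_size=1):
    """Shape a mono float waveform into (batch, samples, 1) plus a boolean mask."""
    waveform = np.asarray(waveform, dtype=np.float32)
    audio = waveform[np.newaxis, :, np.newaxis]   # (1, samples, 1)
    audio = np.repeat(audio, batch_size, axis=0)  # (batch, samples, 1)
    mask = np.ones((batch_size, waveform.shape[0]), dtype=bool)
    return audio, mask

audio, mask = prepare_audio(np.zeros(16000), batch_size=2)
print(audio.shape, mask.shape)  # (2, 16000, 1) (2, 16000)
```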
## Example Usage with Hugging Face URI
```python
import keras
import keras_hub
import numpy as np
import librosa
import tensorflow as tf
from keras_hub.src.models.moonshine.moonshine_audio_to_text import (
MoonshineAudioToText,
)
# Custom backbone.
backbone = keras_hub.models.MoonshineBackbone(
vocabulary_size=10000,
filter_dim=256,
encoder_num_layers=6,
decoder_num_layers=6,
hidden_dim=256,
intermediate_dim=512,
encoder_num_heads=8,
decoder_num_heads=8,
feedforward_expansion_factor=4,
decoder_use_swiglu_activation=True,
encoder_use_swiglu_activation=False,
)
# Audio features as input (e.g., from MoonshineAudioConverter).
outputs = backbone(
{
"encoder_input_values": np.zeros((1, 16000, 1)),
"encoder_padding_mask": np.ones((1, 16000), dtype=bool),
"decoder_token_ids": np.zeros((1, 20), dtype=np.int32),
"decoder_padding_mask": np.ones((1, 20), dtype=bool),
}
)
# Config for test.
BATCH_SIZE = 2
AUDIO_PATH = "path/to/audio_file.wav"
# Load and prepare audio data.
audio, sr = librosa.load(AUDIO_PATH, sr=16000, mono=True)
audio_tensor = tf.expand_dims(audio, axis=-1)
audio_tensor = tf.convert_to_tensor(audio_tensor, dtype=tf.float32)
single_audio_input_batched = tf.expand_dims(audio_tensor, axis=0)
audio_batch = tf.repeat(single_audio_input_batched, BATCH_SIZE, axis=0)
dummy_texts = ["Sample transcription.", "Another sample transcription."]
# Create tf.data.Dataset.
audio_ds = tf.data.Dataset.from_tensor_slices(audio_batch)
text_ds = tf.data.Dataset.from_tensor_slices(dummy_texts)
audio_dataset = (
tf.data.Dataset.zip((audio_ds, text_ds))
.map(lambda audio, txt: {"audio": audio, "text": txt})
.batch(BATCH_SIZE)
)
print("Audio dataset created.")
# Load pretrained Moonshine model.
audio_to_text = MoonshineAudioToText.from_preset("hf://keras/moonshine_tiny_en")
# Generation examples.
generated_text_single = audio_to_text.generate(
{"audio": single_audio_input_batched}
)
print(f"Generated text (single audio): {generated_text_single}")
generated_text_batch = audio_to_text.generate({"audio": audio_batch})
print(f"Generated text (batch audio): {generated_text_batch}")
# Compile the generate() function with a custom sampler.
audio_to_text.compile(sampler="top_k")
generated_text_top_k = audio_to_text.generate(
{"audio": single_audio_input_batched}
)
print(f"Generated text (top_k sampler): {generated_text_top_k}")
audio_to_text.compile(sampler="greedy")
generated_text_greedy = audio_to_text.generate(
{"audio": single_audio_input_batched}
)
print(f"Generated text (greedy sampler): {generated_text_greedy}")
# Fine-tuning example.
audio_to_text.compile(
optimizer=keras.optimizers.Adam(learning_rate=1e-5),
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
weighted_metrics=[keras.metrics.SparseCategoricalAccuracy()],
)
history = audio_to_text.fit(audio_dataset, steps_per_epoch=1, epochs=1)
print(f"Fine-tuning completed. Training history: {history.history}")
# Detached preprocessing.
original_preprocessor = audio_to_text.preprocessor
audio_to_text.preprocessor = None
preprocessed_batch = original_preprocessor.generate_preprocess(
{"audio": audio_batch}
)
print(f"Preprocessed batch keys: {preprocessed_batch.keys()}")
stop_ids = (original_preprocessor.tokenizer.end_token_id,)
generated_batch_tokens = audio_to_text.generate(
preprocessed_batch, stop_token_ids=stop_ids
)
print(f"Generated tokens keys: {generated_batch_tokens.keys()}")
final_strings = original_preprocessor.generate_postprocess(
generated_batch_tokens
)
print(f"Final generated strings (detached): {final_strings}")
audio_to_text.preprocessor = original_preprocessor
print("Preprocessor reattached.")
```
|
keras/efficientnet2_rw_s_ra2_imagenet | keras | 2025-06-17T03:20:57Z | 7 | 0 | keras-hub | [
"keras-hub",
"arxiv:1905.11946",
"arxiv:2104.00298",
"region:us"
] | null | 2024-12-23T23:35:11Z | ---
library_name: keras-hub
---
### Model Overview
EfficientNets are a family of image classification models that achieve state-of-the-art accuracy while being an order of magnitude smaller and faster than previous models.
We develop EfficientNets based on AutoML and compound scaling. In particular, we first use the AutoML MNAS Mobile framework to develop a mobile-size baseline network, named EfficientNet-B0; then we use the compound scaling method to scale this baseline up, obtaining EfficientNet-B1 through EfficientNet-B7.
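As a rough illustration of compound scaling: each increment of the compound coefficient φ multiplies network depth, width, and input resolution by fixed constants α, β, γ chosen so that α·β²·γ² ≈ 2. A sketch using the constants reported in the EfficientNet paper; note the released checkpoints additionally round and hand-tune resolutions, so the printed values are illustrative only:

```python
# Compound scaling constants from the EfficientNet paper (alpha * beta^2 * gamma^2 ~= 2).
ALPHA, BETA, GAMMA = 1.2, 1.1, 1.15

def compound_scale(phi, base_resolution=224):
    """Return (depth multiplier, width multiplier, input resolution) for coefficient phi."""
    depth = ALPHA ** phi
    width = BETA ** phi
    resolution = round(base_resolution * GAMMA ** phi)
    return depth, width, resolution

for phi in range(4):  # roughly B0 through B3
    d, w, r = compound_scale(phi)
    print(f"phi={phi}: depth x{d:.2f}, width x{w:.2f}, resolution {r}")
```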
This class encapsulates the architectures of both EfficientNetV1 and EfficientNetV2. EfficientNetV2 uses Fused-MBConv blocks and Neural Architecture Search (NAS) to make model sizes much smaller while still improving overall model quality.
This model is supported in both KerasCV and KerasHub. KerasCV will no longer be actively developed, so please use KerasHub.
## Links
* [EfficientNet Quickstart Notebook](https://www.kaggle.com/code/prasadsachin/efficientnet-quickstart-kerashub)
* [EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks](https://arxiv.org/abs/1905.11946) (ICML 2019)
* [Based on the original keras.applications EfficientNet](https://github.com/keras-team/keras/blob/master/keras/applications/efficientnet.py)
* [EfficientNetV2: Smaller Models and Faster Training](https://arxiv.org/abs/2104.00298) (ICML 2021)
* [EfficientNet API Documentation](https://keras.io/keras_hub/api/models/efficientnet/)
* [KerasHub Beginner Guide](https://keras.io/guides/keras_hub/getting_started/)
* [KerasHub Model Publishing Guide](https://keras.io/guides/keras_hub/upload/)
## Installation
Keras and KerasHub can be installed with:
```
pip install -U -q keras-hub
pip install -U -q keras
```
Jax, TensorFlow, and Torch come preinstalled in Kaggle Notebooks. For instructions on installing them in another environment see the [Keras Getting Started](https://keras.io/getting_started/) page.
## Presets
The following model checkpoints are provided by the Keras team. Full code examples for each are available below.
| Preset name | Parameters | Description |
|------------------------------------|------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| efficientnet_b0_ra_imagenet | 5.3M | EfficientNet B0 model pre-trained on the ImageNet 1k dataset with RandAugment recipe. |
| efficientnet_b0_ra4_e3600_r224_imagenet | 5.3M | EfficientNet B0 model pre-trained on the ImageNet 1k dataset by Ross Wightman. Trained with timm scripts using hyper-parameters inspired by the MobileNet-V4 small, mixed with go-to hparams from timm and 'ResNet Strikes Back'. |
| efficientnet_b1_ft_imagenet | 7.8M | EfficientNet B1 model fine-tuned on the ImageNet 1k dataset. |
| efficientnet_b1_ra4_e3600_r240_imagenet | 7.8M | EfficientNet B1 model pre-trained on the ImageNet 1k dataset by Ross Wightman. Trained with timm scripts using hyper-parameters inspired by the MobileNet-V4 small, mixed with go-to hparams from timm and 'ResNet Strikes Back'. |
| efficientnet_b2_ra_imagenet | 9.1M | EfficientNet B2 model pre-trained on the ImageNet 1k dataset with RandAugment recipe. |
| efficientnet_b3_ra2_imagenet | 12.2M | EfficientNet B3 model pre-trained on the ImageNet 1k dataset with RandAugment2 recipe. |
| efficientnet_b4_ra2_imagenet | 19.3M | EfficientNet B4 model pre-trained on the ImageNet 1k dataset with RandAugment2 recipe. |
| efficientnet_b5_sw_imagenet | 30.4M | EfficientNet B5 model pre-trained on the ImageNet 12k dataset by Ross Wightman. Based on Swin Transformer train / pretrain recipe with modifications (related to both DeiT and ConvNeXt recipes). |
| efficientnet_b5_sw_ft_imagenet | 30.4M | EfficientNet B5 model pre-trained on the ImageNet 12k dataset and fine-tuned on ImageNet-1k by Ross Wightman. Based on Swin Transformer train / pretrain recipe with modifications (related to both DeiT and ConvNeXt recipes). |
| efficientnet_el_ra_imagenet | 10.6M | EfficientNet-EdgeTPU Large model trained on the ImageNet 1k dataset with RandAugment recipe. |
| efficientnet_em_ra2_imagenet | 6.9M | EfficientNet-EdgeTPU Medium model trained on the ImageNet 1k dataset with RandAugment2 recipe. |
| efficientnet_es_ra_imagenet | 5.4M | EfficientNet-EdgeTPU Small model trained on the ImageNet 1k dataset with RandAugment recipe. |
| efficientnet2_rw_m_agc_imagenet | 53.2M | EfficientNet-v2 Medium model trained on the ImageNet 1k dataset with adaptive gradient clipping. |
| efficientnet2_rw_s_ra2_imagenet | 23.9M | EfficientNet-v2 Small model trained on the ImageNet 1k dataset with RandAugment2 recipe. |
| efficientnet2_rw_t_ra2_imagenet | 13.6M | EfficientNet-v2 Tiny model trained on the ImageNet 1k dataset with RandAugment2 recipe. |
| efficientnet_lite0_ra_imagenet | 4.7M | EfficientNet-Lite model fine-tuned on the ImageNet 1k dataset with RandAugment recipe. |
## Model card
https://arxiv.org/abs/1905.11946
## Example Usage
Load
```python
import keras
import keras_hub

classifier = keras_hub.models.EfficientNetImageClassifier.from_preset(
"efficientnet_b0_ra_imagenet",
)
```
Predict
```python
batch_size = 1
images = keras.random.normal(shape=(batch_size, 96, 96, 3))
classifier.predict(images)
```
To train, specify `num_classes` to load a randomly initialized classifier head.
```python
num_classes = 2
labels = keras.random.randint(shape=(batch_size,), minval=0, maxval=num_classes)
classifier = keras_hub.models.EfficientNetImageClassifier.from_preset(
"efficientnet_b0_ra_imagenet",
num_classes=num_classes,
)
classifier.preprocessor.image_size = (96, 96)
classifier.fit(images, labels, epochs=3)
```
## Example Usage with Hugging Face URI
Load
```python
import keras
import keras_hub

classifier = keras_hub.models.EfficientNetImageClassifier.from_preset(
"hf://keras/efficientnet_b0_ra_imagenet",
)
```
Predict
```python
batch_size = 1
images = keras.random.normal(shape=(batch_size, 96, 96, 3))
classifier.predict(images)
```
To train, specify `num_classes` to load a randomly initialized classifier head.
```python
num_classes = 2
labels = keras.random.randint(shape=(batch_size,), minval=0, maxval=num_classes)
classifier = keras_hub.models.EfficientNetImageClassifier.from_preset(
"hf://keras/efficientnet_b0_ra_imagenet",
num_classes=num_classes,
)
classifier.preprocessor.image_size = (96, 96)
classifier.fit(images, labels, epochs=3)
```
|
keras/efficientnet_b2_ra_imagenet | keras | 2025-06-17T03:20:38Z | 32 | 0 | keras-hub | [
"keras-hub",
"arxiv:1905.11946",
"arxiv:2104.00298",
"region:us"
] | null | 2024-11-14T23:37:48Z | ---
library_name: keras-hub
---
### Model Overview
EfficientNets are a family of image classification models that achieve state-of-the-art accuracy while being an order of magnitude smaller and faster than previous models.
We develop EfficientNets based on AutoML and compound scaling. In particular, we first use the AutoML MNAS Mobile framework to develop a mobile-size baseline network, named EfficientNet-B0; then we use the compound scaling method to scale this baseline up, obtaining EfficientNet-B1 through EfficientNet-B7.
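As a rough illustration of compound scaling: each increment of the compound coefficient φ multiplies network depth, width, and input resolution by fixed constants α, β, γ chosen so that α·β²·γ² ≈ 2. A sketch using the constants reported in the EfficientNet paper; note the released checkpoints additionally round and hand-tune resolutions, so the printed values are illustrative only:

```python
# Compound scaling constants from the EfficientNet paper (alpha * beta^2 * gamma^2 ~= 2).
ALPHA, BETA, GAMMA = 1.2, 1.1, 1.15

def compound_scale(phi, base_resolution=224):
    """Return (depth multiplier, width multiplier, input resolution) for coefficient phi."""
    depth = ALPHA ** phi
    width = BETA ** phi
    resolution = round(base_resolution * GAMMA ** phi)
    return depth, width, resolution

for phi in range(4):  # roughly B0 through B3
    d, w, r = compound_scale(phi)
    print(f"phi={phi}: depth x{d:.2f}, width x{w:.2f}, resolution {r}")
```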
This class encapsulates the architectures of both EfficientNetV1 and EfficientNetV2. EfficientNetV2 uses Fused-MBConv blocks and Neural Architecture Search (NAS) to make model sizes much smaller while still improving overall model quality.
This model is supported in both KerasCV and KerasHub. KerasCV will no longer be actively developed, so please use KerasHub.
## Links
* [EfficientNet Quickstart Notebook](https://www.kaggle.com/code/prasadsachin/efficientnet-quickstart-kerashub)
* [EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks](https://arxiv.org/abs/1905.11946) (ICML 2019)
* [Based on the original keras.applications EfficientNet](https://github.com/keras-team/keras/blob/master/keras/applications/efficientnet.py)
* [EfficientNetV2: Smaller Models and Faster Training](https://arxiv.org/abs/2104.00298) (ICML 2021)
* [EfficientNet API Documentation](https://keras.io/keras_hub/api/models/efficientnet/)
* [KerasHub Beginner Guide](https://keras.io/guides/keras_hub/getting_started/)
* [KerasHub Model Publishing Guide](https://keras.io/guides/keras_hub/upload/)
## Installation
Keras and KerasHub can be installed with:
```
pip install -U -q keras-hub
pip install -U -q keras
```
Jax, TensorFlow, and Torch come preinstalled in Kaggle Notebooks. For instructions on installing them in another environment see the [Keras Getting Started](https://keras.io/getting_started/) page.
## Presets
The following model checkpoints are provided by the Keras team. Full code examples for each are available below.
| Preset name | Parameters | Description |
|------------------------------------|------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| efficientnet_b0_ra_imagenet | 5.3M | EfficientNet B0 model pre-trained on the ImageNet 1k dataset with RandAugment recipe. |
| efficientnet_b0_ra4_e3600_r224_imagenet | 5.3M | EfficientNet B0 model pre-trained on the ImageNet 1k dataset by Ross Wightman. Trained with timm scripts using hyper-parameters inspired by the MobileNet-V4 small, mixed with go-to hparams from timm and 'ResNet Strikes Back'. |
| efficientnet_b1_ft_imagenet | 7.8M | EfficientNet B1 model fine-tuned on the ImageNet 1k dataset. |
| efficientnet_b1_ra4_e3600_r240_imagenet | 7.8M | EfficientNet B1 model pre-trained on the ImageNet 1k dataset by Ross Wightman. Trained with timm scripts using hyper-parameters inspired by the MobileNet-V4 small, mixed with go-to hparams from timm and 'ResNet Strikes Back'. |
| efficientnet_b2_ra_imagenet | 9.1M | EfficientNet B2 model pre-trained on the ImageNet 1k dataset with RandAugment recipe. |
| efficientnet_b3_ra2_imagenet | 12.2M | EfficientNet B3 model pre-trained on the ImageNet 1k dataset with RandAugment2 recipe. |
| efficientnet_b4_ra2_imagenet | 19.3M | EfficientNet B4 model pre-trained on the ImageNet 1k dataset with RandAugment2 recipe. |
| efficientnet_b5_sw_imagenet | 30.4M | EfficientNet B5 model pre-trained on the ImageNet 12k dataset by Ross Wightman. Based on Swin Transformer train / pretrain recipe with modifications (related to both DeiT and ConvNeXt recipes). |
| efficientnet_b5_sw_ft_imagenet | 30.4M | EfficientNet B5 model pre-trained on the ImageNet 12k dataset and fine-tuned on ImageNet-1k by Ross Wightman. Based on Swin Transformer train / pretrain recipe with modifications (related to both DeiT and ConvNeXt recipes). |
| efficientnet_el_ra_imagenet | 10.6M | EfficientNet-EdgeTPU Large model trained on the ImageNet 1k dataset with RandAugment recipe. |
| efficientnet_em_ra2_imagenet | 6.9M | EfficientNet-EdgeTPU Medium model trained on the ImageNet 1k dataset with RandAugment2 recipe. |
| efficientnet_es_ra_imagenet | 5.4M | EfficientNet-EdgeTPU Small model trained on the ImageNet 1k dataset with RandAugment recipe. |
| efficientnet2_rw_m_agc_imagenet | 53.2M | EfficientNet-v2 Medium model trained on the ImageNet 1k dataset with adaptive gradient clipping. |
| efficientnet2_rw_s_ra2_imagenet | 23.9M | EfficientNet-v2 Small model trained on the ImageNet 1k dataset with RandAugment2 recipe. |
| efficientnet2_rw_t_ra2_imagenet | 13.6M | EfficientNet-v2 Tiny model trained on the ImageNet 1k dataset with RandAugment2 recipe. |
| efficientnet_lite0_ra_imagenet | 4.7M | EfficientNet-Lite model fine-tuned on the ImageNet 1k dataset with RandAugment recipe. |
## Model card
https://arxiv.org/abs/1905.11946
## Example Usage
Load
```python
import keras
import keras_hub

classifier = keras_hub.models.EfficientNetImageClassifier.from_preset(
"efficientnet_b0_ra_imagenet",
)
```
Predict
```python
batch_size = 1
images = keras.random.normal(shape=(batch_size, 96, 96, 3))
classifier.predict(images)
```
To train, specify `num_classes` to load a randomly initialized classifier head.
```python
num_classes = 2
labels = keras.random.randint(shape=(batch_size,), minval=0, maxval=num_classes)
classifier = keras_hub.models.EfficientNetImageClassifier.from_preset(
"efficientnet_b0_ra_imagenet",
num_classes=num_classes,
)
classifier.preprocessor.image_size = (96, 96)
classifier.fit(images, labels, epochs=3)
```
## Example Usage with Hugging Face URI
Load
```python
classifier = keras_hub.models.EfficientNetImageClassifier.from_preset(
"efficientnet_b0_ra_imagenet",
)
```
Predict
```python
batch_size = 1
images = keras.random.normal(shape=(batch_size, 96, 96, 3))
classifier.predict(images)
```
To train, specify `num_classes` to load a randomly initialized classifier head.
```python
num_classes = 2
labels = keras.random.randint(shape=(batch_size,), minval=0, maxval=num_classes)
classifier = keras_hub.models.EfficientNetImageClassifier.from_preset(
"hf://keras/efficientnet_b0_ra_imagenet",
num_classes=num_classes,
)
classifier.preprocessor.image_size = (96, 96)
classifier.fit(images, labels, epochs=3)
```
|
keras/darknet_53_imagenet | keras | 2025-06-17T03:20:28Z | 0 | 0 | keras-hub | [
"keras-hub",
"arxiv:1911.11929",
"region:us"
] | null | 2025-06-16T19:55:15Z | ---
library_name: keras-hub
---
### Model Overview
This class represents the CSPDarkNet architecture.
**Reference**
- [CSPNet Paper](https://arxiv.org/abs/1911.11929)
For transfer learning use cases, make sure to read the
[guide to transfer learning & fine-tuning](https://keras.io/guides/transfer_learning/).
## Links
* [CSPNet Quickstart Notebook](https://www.kaggle.com/code/prasadsachin/cspnet-quickstart-kerashub)
* [CSPDarkNet API Documentation](https://keras.io/keras_hub/api/models/cspnet/)
* [CSPDarkNet Model Card](https://huggingface.co/timm/cspdarknet53.ra_in1k)
* [KerasHub Beginner Guide](https://keras.io/guides/keras_hub/getting_started/)
* [KerasHub Model Publishing Guide](https://keras.io/guides/keras_hub/upload/)
## Installation
Keras and KerasHub can be installed with:
```
pip install -U -q keras-hub
pip install -U -q keras
```
Jax, TensorFlow, and Torch come preinstalled in Kaggle Notebooks. For instructions on installing them in another environment see the [Keras Getting Started](https://keras.io/getting_started/) page.
## Presets
The following model checkpoints are provided by the Keras team. Weights have been ported from: https://huggingface.co/timm. Full code examples for each are available below.
| Preset name | Parameters | Description |
|-----------------------|------------|---------------|
| `csp_darknet_53_ra_imagenet` | 27642184 | A CSP-DarkNet (Cross-Stage-Partial) image classification model pre-trained on the Randomly Augmented ImageNet 1k dataset at a 256x256 resolution.|
| `csp_resnext_50_ra_imagenet` | 20569896 | A CSP-ResNeXt (Cross-Stage-Partial) image classification model pre-trained on the Randomly Augmented ImageNet 1k dataset at a 256x256 resolution.|
| `csp_resnet_50_ra_imagenet` | 21616168 | A CSP-ResNet (Cross-Stage-Partial) image classification model pre-trained on the Randomly Augmented ImageNet 1k dataset at a 256x256 resolution.|
| `darknet_53_imagenet` | 41609928 | A DarkNet image classification model pre-trained on the Randomly Augmented ImageNet 1k dataset at a 256x256 resolution.|
## Example Usage
```python
import numpy as np
import keras_hub

input_data = np.ones(shape=(8, 224, 224, 3))
# Pretrained backbone
model = keras_hub.models.CSPNetBackbone.from_preset("darknet_53_imagenet")
model(input_data)
# Randomly initialized backbone with a custom config
model = keras_hub.models.CSPNetBackbone(
stem_filters=32,
stem_kernel_size=3,
stem_strides=1,
stackwise_depth=[1, 2, 4],
stackwise_strides=[1, 2, 2],
stackwise_num_filters=[32, 64, 128],
block_type="dark",
)
model(input_data)
# Use CSPNet for the image classification task.
model = keras_hub.models.ImageClassifier.from_preset("darknet_53_imagenet")
# Use timm presets directly from Hugging Face.
model = keras_hub.models.ImageClassifier.from_preset("hf://timm/cspdarknet53.ra_in1k")
```
## Example Usage with Hugging Face URI
```python
import numpy as np
import keras_hub

input_data = np.ones(shape=(8, 224, 224, 3))
# Pretrained backbone
model = keras_hub.models.CSPNetBackbone.from_preset("hf://keras/darknet_53_imagenet")
model(input_data)
# Randomly initialized backbone with a custom config
model = keras_hub.models.CSPNetBackbone(
stem_filters=32,
stem_kernel_size=3,
stem_strides=1,
stackwise_depth=[1, 2, 4],
stackwise_strides=[1, 2, 2],
stackwise_num_filters=[32, 64, 128],
block_type="dark",
)
model(input_data)
# Use CSPNet for the image classification task.
model = keras_hub.models.ImageClassifier.from_preset("hf://keras/darknet_53_imagenet")
# Use timm presets directly from Hugging Face.
model = keras_hub.models.ImageClassifier.from_preset("hf://timm/cspdarknet53.ra_in1k")
```
|
keras/vicuna_1.5_7b_en | keras | 2025-06-17T03:11:20Z | 53 | 0 | keras-hub | [
"keras-hub",
"text-generation-inference",
"text-generation",
"en",
"arxiv:2306.05685",
"license:llama2",
"region:us"
] | text-generation | 2024-10-28T23:21:51Z | ---
library_name: keras-hub
license: llama2
language:
- en
tags:
- text-generation-inference
pipeline_tag: text-generation
---
### Model Overview
Vicuna is a chat assistant trained by fine-tuning Llama 2 on user-shared conversations collected from ShareGPT. Weights are released under the [Llama 2 Community License Agreement](https://ai.meta.com/llama/license/) and Keras model code is released under the [Apache 2 License](https://github.com/keras-team/keras-hub/blob/master/LICENSE).
Model type: An auto-regressive language model based on the transformer architecture.
Fine-tuned from model: Llama 2
Uses:
The primary use of Vicuna is research on large language models and chatbots. The primary intended users of the model are researchers and hobbyists in natural language processing, machine learning, and artificial intelligence.
## Links
* [Vicuna Quickstart Notebook](https://www.kaggle.com/code/laxmareddypatlolla/vicuna-quickstart-notebook)
* Vicuna API Documentation (coming soon)
* [Vicuna Model Card](https://huggingface.co/lmsys/vicuna-7b-v1.5#vicuna-model-card)
* [KerasHub Beginner Guide](https://keras.io/guides/keras_hub/getting_started/)
* [KerasHub Model Publishing Guide](https://keras.io/guides/keras_hub/upload/)
## Installation
Keras and KerasHub can be installed with:
```
pip install -U -q keras-hub
pip install -U -q keras
```
Jax, TensorFlow, and Torch come preinstalled in Kaggle Notebooks. For instructions on installing them in another environment see the [Keras Getting Started](https://keras.io/getting_started/) page.
## Presets
The following model checkpoints are provided by the Keras team. Full code examples for each are available below.
| Preset name | Parameters | Description |
|-----------------------|------------|---------------|
| `vicuna_1.5_7b_en` | 6.74B | 7 billion parameter, 32-layer, instruction-tuned Vicuna v1.5 model. |
Paper: https://arxiv.org/abs/2306.05685
## Example Usage
```python
import keras
import keras_hub
import numpy as np
```
Use `generate()` to do text generation.
```python
vicuna_lm = keras_hub.models.LlamaCausalLM.from_preset("vicuna_1.5_7b_en")
vicuna_lm.generate("### HUMAN:\nWhat is Keras? \n### RESPONSE:\n", max_length=500)
# Generate with batched prompts.
vicuna_lm.generate([
"### HUMAN:\nWhat is ML? \n### RESPONSE:\n",
"### HUMAN:\nGive me your best brownie recipe.\n### RESPONSE:\n",
], max_length=500)
```
Compile the `generate()` function with a custom sampler.
```python
vicuna_lm = keras_hub.models.LlamaCausalLM.from_preset("vicuna_1.5_7b_en")
vicuna_lm.compile(sampler="greedy")
vicuna_lm.generate("I want to say", max_length=30)
vicuna_lm.compile(sampler=keras_hub.samplers.BeamSampler(num_beams=2))
vicuna_lm.generate("I want to say", max_length=30)
```
Use `generate()` without preprocessing.
```python
prompt = {
# `1` maps to the start token followed by "I want to say".
"token_ids": np.array([[1, 306, 864, 304, 1827, 0, 0, 0, 0, 0]] * 2),
# Use `"padding_mask"` to indicate values that should not be overridden.
"padding_mask": np.array([[1, 1, 1, 1, 1, 0, 0, 0, 0, 0]] * 2),
}
vicuna_lm = keras_hub.models.LlamaCausalLM.from_preset(
"vicuna_1.5_7b_en",
preprocessor=None,
dtype="bfloat16"
)
vicuna_lm.generate(prompt)
```
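The `token_ids`/`padding_mask` pair above is just a batch of right-padded token id sequences plus a matching mask. A minimal sketch of building such a batch by hand (`pad_batch` is a hypothetical helper, not part of KerasHub):

```python
import numpy as np

def pad_batch(sequences, max_length, pad_id=0):
    """Right-pad token id sequences to max_length and build the padding mask."""
    token_ids = np.full((len(sequences), max_length), pad_id, dtype=np.int32)
    padding_mask = np.zeros((len(sequences), max_length), dtype=np.int32)
    for i, seq in enumerate(sequences):
        token_ids[i, : len(seq)] = seq
        padding_mask[i, : len(seq)] = 1
    return {"token_ids": token_ids, "padding_mask": padding_mask}

batch = pad_batch([[1, 306, 864, 304, 1827]], max_length=10)
print(batch["token_ids"])     # [[   1  306  864  304 1827    0    0    0    0    0]]
print(batch["padding_mask"])  # [[1 1 1 1 1 0 0 0 0 0]]
```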
Call `fit()` on a single batch.
```python
features = ["The quick brown fox jumped.", "I forgot my homework."]
vicuna_lm = keras_hub.models.LlamaCausalLM.from_preset("vicuna_1.5_7b_en")
vicuna_lm.fit(x=features, batch_size=2)
```
Call `fit()` without preprocessing.
```python
x = {
"token_ids": np.array([[1, 450, 4996, 17354, 1701, 29916, 12500, 287, 29889, 0]] * 2),
"padding_mask": np.array([[1, 1, 1, 1, 1, 1, 1, 1, 1, 0]] * 2),
}
y = np.array([[450, 4996, 17354, 1701, 29916, 12500, 287, 29889, 0, 0]] * 2)
sw = np.array([[1, 1, 1, 1, 1, 1, 1, 1, 0, 0]] * 2)
vicuna_lm = keras_hub.models.LlamaCausalLM.from_preset(
"vicuna_1.5_7b_en",
preprocessor=None,
dtype="bfloat16"
)
vicuna_lm.fit(x=x, y=y, sample_weight=sw, batch_size=2)
```
## Example Usage with Hugging Face URI
```python
import keras
import keras_hub
import numpy as np
```
Use `generate()` to do text generation.
```python
vicuna_lm = keras_hub.models.LlamaCausalLM.from_preset("hf://keras/vicuna_1.5_7b_en")
vicuna_lm.generate("### HUMAN:\nWhat is Keras? \n### RESPONSE:\n", max_length=500)
# Generate with batched prompts.
vicuna_lm.generate([
"### HUMAN:\nWhat is ML? \n### RESPONSE:\n",
"### HUMAN:\nGive me your best brownie recipe.\n### RESPONSE:\n",
], max_length=500)
```
Compile the `generate()` function with a custom sampler.
```python
vicuna_lm = keras_hub.models.LlamaCausalLM.from_preset("hf://keras/vicuna_1.5_7b_en")
vicuna_lm.compile(sampler="greedy")
vicuna_lm.generate("I want to say", max_length=30)
vicuna_lm.compile(sampler=keras_hub.samplers.BeamSampler(num_beams=2))
vicuna_lm.generate("I want to say", max_length=30)
```
Use `generate()` without preprocessing.
```python
prompt = {
# `1` maps to the start token followed by "I want to say".
"token_ids": np.array([[1, 306, 864, 304, 1827, 0, 0, 0, 0, 0]] * 2),
# Use `"padding_mask"` to indicate values that should not be overridden.
"padding_mask": np.array([[1, 1, 1, 1, 1, 0, 0, 0, 0, 0]] * 2),
}
vicuna_lm = keras_hub.models.LlamaCausalLM.from_preset(
"hf://keras/vicuna_1.5_7b_en",
preprocessor=None,
dtype="bfloat16"
)
vicuna_lm.generate(prompt)
```
Call `fit()` on a single batch.
```python
features = ["The quick brown fox jumped.", "I forgot my homework."]
vicuna_lm = keras_hub.models.LlamaCausalLM.from_preset("hf://keras/vicuna_1.5_7b_en")
vicuna_lm.fit(x=features, batch_size=2)
```
Call `fit()` without preprocessing.
```python
x = {
"token_ids": np.array([[1, 450, 4996, 17354, 1701, 29916, 12500, 287, 29889, 0]] * 2),
"padding_mask": np.array([[1, 1, 1, 1, 1, 1, 1, 1, 1, 0]] * 2),
}
y = np.array([[450, 4996, 17354, 1701, 29916, 12500, 287, 29889, 0, 0]] * 2)
sw = np.array([[1, 1, 1, 1, 1, 1, 1, 1, 0, 0]] * 2)
vicuna_lm = keras_hub.models.LlamaCausalLM.from_preset(
"hf://keras/vicuna_1.5_7b_en",
preprocessor=None,
dtype="bfloat16"
)
vicuna_lm.fit(x=x, y=y, sample_weight=sw, batch_size=2)
```
|
Gatescrispy/dippy-dialogpt-optimized | Gatescrispy | 2025-06-17T02:59:47Z | 0 | 0 | null | [
"safetensors",
"gpt2",
"conversational",
"roleplay",
"dippy",
"dialogpt",
"bittensor",
"en",
"license:mit",
"region:us"
] | null | 2025-06-17T02:59:11Z | ---
language: en
tags:
- conversational
- roleplay
- dippy
- dialogpt
- bittensor
license: mit
---
# Dippy DialoGPT Optimized
This is a fine-tuned version of microsoft/DialoGPT-medium, optimized for conversational AI with a Dippy personality.
## Model Details
- Base model: microsoft/DialoGPT-medium
- Fine-tuned for: Conversational AI, roleplay, helpful assistant interactions
- Optimized for: Bittensor SN11 Dippy subnet
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("Gatescrispy/dippy-dialogpt-optimized")
model = AutoModelForCausalLM.from_pretrained("Gatescrispy/dippy-dialogpt-optimized")
# Generate response
inputs = tokenizer.encode("Hello! How are you today?" + tokenizer.eos_token, return_tensors="pt")
outputs = model.generate(inputs, max_length=50, pad_token_id=tokenizer.eos_token_id)
# Decode only the newly generated tokens, not the prompt.
response = tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)
print(response)
```
## Training
- Dataset: Custom Dippy personality conversations
- Training: 1 epoch with learning rate scheduling
- Hardware: NVIDIA RTX 3090
## Bittensor Integration
This model is designed for Bittensor SN11 Dippy subnet integration.
|
Zack-Z/qwen3_4bi_cotsft_rs0_3_5cut_ru_cot2all_indep_e2 | Zack-Z | 2025-06-17T02:54:04Z | 0 | 0 | transformers | [
"transformers",
"qwen3",
"feature-extraction",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/Qwen3-4B",
"base_model:finetune:unsloth/Qwen3-4B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2025-06-17T02:39:32Z | ---
base_model: unsloth/Qwen3-4B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** Zack-Z
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen3-4B
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
luckysantoso/adapter-gemma-lawbot-v2 | luckysantoso | 2025-06-17T02:46:27Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-17T02:46:21Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
AlSamCur123/Mistral-Small3-24B-Instruct | AlSamCur123 | 2025-06-17T02:42:57Z | 178 | 1 | transformers | [
"transformers",
"gguf",
"mistral",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-02-03T11:42:46Z | ---
base_model: unsloth/mistral-small-24b-instruct-2501-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** AlSamCur123
- **License:** apache-2.0
- **Finetuned from model:** unsloth/mistral-small-24b-instruct-2501-unsloth-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
fnlp/smollm1-1B7-d_kv_8-refactor | fnlp | 2025-06-17T02:33:49Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-17T02:28:14Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
wh-zhu/DeepSeek-R1-TrRa-1.5B-lambda_2 | wh-zhu | 2025-06-17T02:26:39Z | 48 | 0 | null | [
"safetensors",
"qwen2",
"arxiv:2506.12704",
"region:us"
] | null | 2025-05-28T02:59:06Z |
<h1 align="center">🛠️ ReAligner</h1>
<p align="center">
<a href="https://arxiv.org/pdf/2506.12704"><img src="https://img.shields.io/badge/arXiv-arXiv%20Preprint-B31B1B?style=flat&logo=arxiv&logoColor=white" alt="arXiv Paper"></a>
<a href="https://github.com/zwhong714/ReAligner"><img src="https://img.shields.io/badge/Homepage-Project%20Page-brightgreen?style=flat&logo=github" alt="Homepage"></a>
<a href="https://huggingface.co/wh-zhu"><img src="https://img.shields.io/badge/Huggingface-Models-yellow?style=flat&logo=huggingface" alt="Models"></a>
</p>
<div>
A flexible realignment framework is proposed to quantitatively control alignment during both training and inference, combining Training-time Realignment (TrRa) and Inference-time Realignment (InRa).
- We realign the DeepScaleR-1.5B model, reducing token usage without performance loss and even enhancing its reasoning capabilities.
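
For intuition, the `lambda` suffix in these checkpoint names sets the realignment strength. The toy sketch below illustrates the general idea (a weighted fusion of an aligned model's and a reference model's next-token logits); it is a hypothetical simplification, not the paper's exact TrRa/InRa formulation, and the function names are made up for this example.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a logit vector
    e = np.exp(x - np.max(x))
    return e / e.sum()

def fuse_logits(aligned_logits, reference_logits, lam):
    """Weighted fusion of two models' next-token logits.
    lam = 0 recovers the reference model; lam = 1 recovers the aligned
    model; lam > 1 extrapolates beyond the aligned model."""
    a = np.asarray(aligned_logits, dtype=float)
    r = np.asarray(reference_logits, dtype=float)
    return lam * a + (1.0 - lam) * r

probs = softmax(fuse_logits([2.0, 0.5, -1.0], [0.0, 1.0, 0.0], lam=0.5))
print(probs.round(3))
```

Sampling from the fused distribution lets a single scalar trade off between the two models' behavior at inference time, without retraining.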
</div>
<div>
<br>

</div> |
hardlyworking/BabyBoo9B-Q4_0-GGUF | hardlyworking | 2025-06-17T02:24:08Z | 0 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:hardlyworking/BabyBoo9B",
"base_model:quantized:hardlyworking/BabyBoo9B",
"endpoints_compatible",
"region:us"
] | null | 2025-06-17T02:23:42Z | ---
base_model: hardlyworking/BabyBoo9B
tags:
- llama-cpp
- gguf-my-repo
---
# hardlyworking/BabyBoo9B-Q4_0-GGUF
This model was converted to GGUF format from [`hardlyworking/BabyBoo9B`](https://huggingface.co/hardlyworking/BabyBoo9B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/hardlyworking/BabyBoo9B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo hardlyworking/BabyBoo9B-Q4_0-GGUF --hf-file babyboo9b-q4_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo hardlyworking/BabyBoo9B-Q4_0-GGUF --hf-file babyboo9b-q4_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo hardlyworking/BabyBoo9B-Q4_0-GGUF --hf-file babyboo9b-q4_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo hardlyworking/BabyBoo9B-Q4_0-GGUF --hf-file babyboo9b-q4_0.gguf -c 2048
```
|
wh-zhu/DeepSeek-R1-TrRa-iter2-1.5B-lambda_2 | wh-zhu | 2025-06-17T02:11:22Z | 4 | 0 | null | [
"safetensors",
"qwen2",
"arxiv:2506.12704",
"region:us"
] | null | 2025-05-28T12:53:50Z |
<h1 align="center">🛠️ ReAligner</h1>
<p align="center">
<a href="https://arxiv.org/pdf/2506.12704"><img src="https://img.shields.io/badge/arXiv-arXiv%20Preprint-B31B1B?style=flat&logo=arxiv&logoColor=white" alt="arXiv Paper"></a>
<a href="https://github.com/zwhong714/ReAligner"><img src="https://img.shields.io/badge/Homepage-Project%20Page-brightgreen?style=flat&logo=github" alt="Homepage"></a>
<a href="https://huggingface.co/wh-zhu"><img src="https://img.shields.io/badge/Huggingface-Models-yellow?style=flat&logo=huggingface" alt="Models"></a>
</p>
<div>
A flexible realignment framework is proposed to quantitatively control alignment during both training and inference, combining Training-time Realignment (TrRa) and Inference-time Realignment (InRa).
- We realign the DeepScaleR-1.5B model, reducing token usage without performance loss and even enhancing its reasoning capabilities.
</div>
<div>
<br>

</div> |
wh-zhu/DeepSeek-R1-TrRa-iter1-1.5B-lambda_2 | wh-zhu | 2025-06-17T02:10:57Z | 4 | 0 | null | [
"safetensors",
"qwen2",
"arxiv:2506.12704",
"region:us"
] | null | 2025-05-28T13:17:27Z | <h1 align="center">🛠️ ReAligner</h1>
<p align="center">
<a href="https://arxiv.org/pdf/2506.12704"><img src="https://img.shields.io/badge/arXiv-arXiv%20Preprint-B31B1B?style=flat&logo=arxiv&logoColor=white" alt="arXiv Paper"></a>
<a href="https://github.com/zwhong714/ReAligner"><img src="https://img.shields.io/badge/Homepage-Project%20Page-brightgreen?style=flat&logo=github" alt="Homepage"></a>
<a href="https://huggingface.co/wh-zhu"><img src="https://img.shields.io/badge/Huggingface-Models-yellow?style=flat&logo=huggingface" alt="Models"></a>
</p>
<div>
A flexible realignment framework is proposed to quantitatively control alignment during both training and inference, combining Training-time Realignment (TrRa) and Inference-time Realignment (InRa).
- We realign the DeepScaleR-1.5B model, reducing token usage without performance loss and even enhancing its reasoning capabilities.
</div>
<div>
<br>

</div> |
wh-zhu/DeepSeek-R1-TrRa-1.5B_lambda_1.5 | wh-zhu | 2025-06-17T02:08:38Z | 6 | 0 | null | [
"safetensors",
"qwen2",
"arxiv:2506.12704",
"region:us"
] | null | 2025-05-29T08:52:30Z |
<h1 align="center">🛠️ ReAligner</h1>
<p align="center">
<a href="https://arxiv.org/pdf/2506.12704"><img src="https://img.shields.io/badge/arXiv-arXiv%20Preprint-B31B1B?style=flat&logo=arxiv&logoColor=white" alt="arXiv Paper"></a>
<a href="https://github.com/zwhong714/ReAligner"><img src="https://img.shields.io/badge/Homepage-Project%20Page-brightgreen?style=flat&logo=github" alt="Homepage"></a>
<a href="https://huggingface.co/wh-zhu"><img src="https://img.shields.io/badge/Huggingface-Models-yellow?style=flat&logo=huggingface" alt="Models"></a>
</p>
<div>
A flexible realignment framework is proposed to quantitatively control alignment during both training and inference, combining Training-time Realignment (TrRa) and Inference-time Realignment (InRa).
- We realign the DeepScaleR-1.5B model, reducing token usage without performance loss and even enhancing its reasoning capabilities.
</div>
<div>
<br>

</div> |
fnlp/SmolLM-135M-MLA-d_kv_16-refactor | fnlp | 2025-06-17T02:08:05Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-17T02:07:28Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
wh-zhu/DeepSeek-R1-TrRa-1.5B_lambda_0.5 | wh-zhu | 2025-06-17T02:06:44Z | 4 | 0 | null | [
"safetensors",
"qwen2",
"arxiv:2506.12704",
"region:us"
] | null | 2025-05-29T06:02:38Z |
<h1 align="center">🛠️ ReAligner</h1>
<p align="center">
<a href="https://arxiv.org/pdf/2506.12704"><img src="https://img.shields.io/badge/arXiv-arXiv%20Preprint-B31B1B?style=flat&logo=arxiv&logoColor=white" alt="arXiv Paper"></a>
<a href="https://github.com/zwhong714/ReAligner"><img src="https://img.shields.io/badge/Homepage-Project%20Page-brightgreen?style=flat&logo=github" alt="Homepage"></a>
<a href="https://huggingface.co/wh-zhu"><img src="https://img.shields.io/badge/Huggingface-Models-yellow?style=flat&logo=huggingface" alt="Models"></a>
</p>
<div>
A flexible realignment framework is proposed to quantitatively control alignment during both training and inference, combining Training-time Realignment (TrRa) and Inference-time Realignment (InRa).
- We realign the DeepScaleR-1.5B model, reducing token usage without performance loss and even enhancing its reasoning capabilities.
</div>
<div>
<br>

</div> |
Moe1177/Llama3.1-8B-FineTuned | Moe1177 | 2025-06-17T01:55:05Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-17T01:50:56Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Gluttony10/OpenAvatarChat | Gluttony10 | 2025-06-17T01:48:20Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"license:apache-2.0",
"region:us"
] | null | 2025-06-16T16:29:01Z | ---
license: apache-2.0
---
|
AmberYifan/Qwen2.5-7B-Instruct-userfeedback-sentiment-iter1 | AmberYifan | 2025-06-17T01:36:28Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"arxiv:2305.18290",
"base_model:AmberYifan/Qwen2.5-7B-Instruct-userfeedback-sentiment-seed",
"base_model:finetune:AmberYifan/Qwen2.5-7B-Instruct-userfeedback-sentiment-seed",
"a... | text-generation | 2025-06-17T00:46:56Z | ---
base_model: AmberYifan/Qwen2.5-7B-Instruct-userfeedback-sentiment-seed
library_name: transformers
model_name: Qwen2.5-7B-Instruct-userfeedback-sentiment-iter1
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for Qwen2.5-7B-Instruct-userfeedback-sentiment-iter1
This model is a fine-tuned version of [AmberYifan/Qwen2.5-7B-Instruct-userfeedback-sentiment-seed](https://huggingface.co/AmberYifan/Qwen2.5-7B-Instruct-userfeedback-sentiment-seed).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="AmberYifan/Qwen2.5-7B-Instruct-userfeedback-sentiment-iter1", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yifanwang/huggingface/runs/94mx3u0z)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
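As a rough illustration of the method (not the exact TRL implementation), the per-example DPO loss penalizes the policy when its preference for the chosen over the rejected response does not exceed the frozen reference model's; `beta` is the usual KL-strength hyperparameter:

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """Per-example DPO loss: -log sigmoid(beta * (policy log-ratio - reference log-ratio)).

    Each argument is the summed log-probability of the chosen or rejected
    response under the policy or the frozen reference model.
    """
    logits = (policy_chosen_logp - policy_rejected_logp) \
             - (ref_chosen_logp - ref_rejected_logp)
    # -log(sigmoid(beta * logits)) == softplus(-beta * logits),
    # computed stably as max(x, 0) + log1p(exp(-|x|))
    x = -beta * logits
    return max(x, 0.0) + math.log1p(math.exp(-abs(x)))
```

When the policy and reference agree (all log-ratios equal), the loss is `log 2`; it falls below that as the policy learns to rank the chosen response higher than the reference does.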
### Framework versions
- TRL: 0.12.2
- Transformers: 4.46.3
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.20.3
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
dgambettaphd/M_llm2_run2_gen3_WXS_doc1000_synt64_lr1e-04_acm_SYNLAST | dgambettaphd | 2025-06-17T01:24:47Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-17T01:24:23Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
luckysantoso/adapter-sahabatai-lawbot-v2 | luckysantoso | 2025-06-17T01:14:36Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-17T01:14:32Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
MinaMila/llama_instbase_LoRa_GermanCredit_cfda_ep3_55 | MinaMila | 2025-06-17T01:00:54Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-19T23:01:31Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Z841973620/Qwen3-30B-A3B-GGUF | Z841973620 | 2025-06-17T00:47:43Z | 0 | 0 | null | [
"gguf",
"base_model:huihui-ai/Qwen3-30B-A3B-abliterated",
"base_model:quantized:huihui-ai/Qwen3-30B-A3B-abliterated",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-06-16T04:02:03Z | ---
base_model:
- huihui-ai/Qwen3-30B-A3B-abliterated
--- |
DevQuasar/utter-project.EuroMoE-2.6B-A0.6B-Instruct-Preview-GGUF | DevQuasar | 2025-06-17T00:47:09Z | 0 | 0 | null | [
"gguf",
"text-generation",
"base_model:utter-project/EuroMoE-2.6B-A0.6B-Instruct-Preview",
"base_model:quantized:utter-project/EuroMoE-2.6B-A0.6B-Instruct-Preview",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-06-17T00:27:45Z | ---
base_model:
- utter-project/EuroMoE-2.6B-A0.6B-Instruct-Preview
pipeline_tag: text-generation
---
[<img src="https://raw.githubusercontent.com/csabakecskemeti/devquasar/main/dq_logo_black-transparent.png" width="200"/>](https://devquasar.com)
Quantized version of: [utter-project/EuroMoE-2.6B-A0.6B-Instruct-Preview](https://huggingface.co/utter-project/EuroMoE-2.6B-A0.6B-Instruct-Preview)
'Make knowledge free for everyone'
<p align="center">
Made with <br>
<a href="https://www.civo.com/" target="_blank">
<img src="https://www.civo.com/assets/public/brand-assets/civo-logo-colour-60cc1622dedf346f7afde1fff760523f731b0aac106a5465af98ff4073114b74.svg" width="100"/>
</a>
</p>
<a href='https://ko-fi.com/L4L416YX7C' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi6.png?v=6' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a>
|
MinaMila/llama_instbase_LoRa_GermanCredit_cfda_ep6_33 | MinaMila | 2025-06-17T00:32:07Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-21T20:22:36Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Zack-Z/qwen3_4bi_cotsft_rs0_3_5cut_cot2all_indep_e2 | Zack-Z | 2025-06-17T00:22:55Z | 0 | 0 | transformers | [
"transformers",
"qwen3",
"feature-extraction",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/Qwen3-4B",
"base_model:finetune:unsloth/Qwen3-4B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2025-06-17T00:07:11Z | ---
base_model: unsloth/Qwen3-4B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** Zack-Z
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-4B
This Qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
BootesVoid/cmbzp22s806i1rdqswho8jt7k_cmbzqmfoa06kjrdqsftr640nr | BootesVoid | 2025-06-17T00:20:58Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-06-17T00:20:57Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: VIBEZ
---
# Cmbzp22S806I1Rdqswho8Jt7K_Cmbzqmfoa06Kjrdqsftr640Nr
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `VIBEZ` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "VIBEZ",
"lora_weights": "https://huggingface.co/BootesVoid/cmbzp22s806i1rdqswho8jt7k_cmbzqmfoa06kjrdqsftr640nr/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmbzp22s806i1rdqswho8jt7k_cmbzqmfoa06kjrdqsftr640nr', weight_name='lora.safetensors')
image = pipeline('VIBEZ').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmbzp22s806i1rdqswho8jt7k_cmbzqmfoa06kjrdqsftr640nr/discussions) to add images that show off what you’ve made with this LoRA.
|
mcryptoone/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-fanged_long_flamingo | mcryptoone | 2025-06-17T00:20:27Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am fanged long flamingo",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-06-13T14:59:07Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-fanged_long_flamingo
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am fanged long flamingo
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-fanged_long_flamingo
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="mcryptoone/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-fanged_long_flamingo", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
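As a minimal sketch of the core idea (not the TRL implementation), GRPO replaces a learned value function with group-relative advantages: for each prompt, several completions are sampled and each completion's reward is standardized against the group's mean and standard deviation:

```python
import math

def grpo_advantages(rewards, eps=1e-8):
    """Group-relative advantages as used in GRPO.

    `rewards` holds scalar rewards for the G completions sampled for the
    same prompt. Each completion's advantage is its reward standardized
    against the group mean and standard deviation, so no critic network
    is needed.
    """
    g = len(rewards)
    mean = sum(rewards) / g
    std = math.sqrt(sum((r - mean) ** 2 for r in rewards) / g)
    return [(r - mean) / (std + eps) for r in rewards]
```

These per-token-constant advantages then weight a clipped policy-gradient objective, much like PPO but without a value head.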
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.2
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
MinaMila/llama_instbase_LoRa_GermanCredit_cfda_ep9_22 | MinaMila | 2025-06-17T00:20:04Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-21T19:36:49Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
MinaMila/llama_instbase_LoRa_GermanCredit_ep9_66 | MinaMila | 2025-06-17T00:04:17Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-21T22:49:30Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
tomaarsen/splade-modernbert-base-miriad-1e-5 | tomaarsen | 2025-06-17T00:03:56Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"modernbert",
"sparse-encoder",
"sparse",
"splade",
"generated_from_trainer",
"dataset_size:100000",
"loss:SpladeLoss",
"loss:SparseMultipleNegativesRankingLoss",
"loss:FlopsLoss",
"feature-extraction",
"en",
"dataset:tomaarsen/miriad-4.4M-split",
... | feature-extraction | 2025-06-17T00:03:43Z | ---
language:
- en
license: apache-2.0
tags:
- sentence-transformers
- sparse-encoder
- sparse
- splade
- generated_from_trainer
- dataset_size:100000
- loss:SpladeLoss
- loss:SparseMultipleNegativesRankingLoss
- loss:FlopsLoss
base_model: answerdotai/ModernBERT-base
widget:
- text: "He does it right, but there are times that he doesn't (Joana) Let's go there\
\ and pee? Because she does not want to wear a diaper, she rips off her diaper\
\ (Filomena). The family caregiver may understand this action as a \"pang\" and\
\ \"tantrum\", and \"forget\" that these episodes are part of the clinical picture\
\ of dementia. Conflicts related to incontinence and other difficult-to-manage\
\ symptoms eventually lead to a variety of interpretations, and past history of\
\ the emotional relationship between the elderly and the family caregiver can\
\ cause older emotional issues to surface again in these episodes.\n\n With psycho-functional\
\ limitations, new demands arise that can be distressing for those who care because\
\ of affective involvement. Subjective constructions are fundamental elements\
\ in upkeeping the relationship of care 10 .\n\n Besides the psychological aspect\
\ involved in the loss of identity and the specific cognitive aspects of dementia,\
\ some behavioral and psychiatric changes are important even in the consultation\
\ with the ESF professionals: psychotic symptoms, agitation and aggression, mood\
\ swings, disinhibited behavior and euphoria, apathy and insomnia. Some studies\
\ [11] [12] [13] pointed out the significant association between the presence\
\ of apathy and a faster cognitive and functional decline in these patients. Another\
\ very relevant situation regarding the appearance of neuropsychiatric symptoms\
\ is the association of these symptoms with the institutionalization and shorter\
\ patient survival. They also showed that the highest Neuropsychiatric Inventory\
\ (NPI) score was significantly associated with more severe cognitive impairment,\
\ greater caregiver distress, and higher cost, but was not associated with a formal\
\ diagnosis of dementia performed by the primary care physician.\n\n Changed behaviors\
\ and even risky behaviors, such as turning on the gas switch and not turning\
\ off, stirring in pots on a hot stove, or ingestion of liquids or toxic materials\
\ are situations in the face of neuropsychiatric manifestations in dementia. Filomena\
\ reports several neuropsychiatric symptoms of her husband. She compares his behavior\
\ to that of children who explore the environment to discover the cause and effect\
\ of things and the sensations obtained by the senses. Her role in this context\
\ resembles that of a mother trying to prevent the child from getting hurt: He\
\ lights up the gas switch, he's just like a child, sometimes he starts to eat\
\ the slipper, I have to get it out of his mouth.\n\n Hallucination is another\
\ neuropsychiatric symptom described by family caregivers. Joana reports that\
\ when the husband talks to people who have died, the family members feel fear\
\ and distance themselves. Filomena has fun when her mother speaks with those\
\ who have died: \"She talks to those who have passed away, she sends the dog\
\ out, which does not exist\". Each family caregiver experiences the symptoms\
\ presented by the dementia in a unique way, and ways to address and interpret\
\ this phenomenon and give meaning to their experience.\n\n The negative development\
\ of dementia perceived by Celina, Filomena, Maria, Teresa and Joana show that\
\ the disease follows a course that transcends the biological event itself. The\
\ dementia process evidences psychological and sociocultural constructions permeated\
\ by meanings and interpretations according to those who live and those who maintain\
\ interpersonal relationships with the elderly person with dementia.\n\n In the\
\ discourse of family caregivers, seniors with dementia have aggressive behaviors\
\ such as agitation, spitting, cursing, clawing, throwing objects, revealing a\
\ level of aggression that can impact the feelings and interpretations produced\
\ during the care routine. Freud 14 affirms that human instincts are of two types:\
\ Those who tend to preserve and unite, which we call 'erotic' [...] with a deliberate\
\ expansion of the popular conception of 'sexuality'; and those who tend to destroy\
\ and kill, which we group as an aggressive or destructive instinct. All actions\
\ in human life involve the confluence of these two instincts of preservation\
\ and destruction. The ideal situation for life in society would be the dominance\
\ of reason over the instinctual life controlling destructive impulses, which\
\ is utopian. In this perspective, aggressiveness is inherent in the human condition.\n\
\n In seniors with dementia with a declining psychological realm of the Self,\
\ the progressive loss of identity and the repercussion of cognitive decline,\
\ an actual decline in the rational realm of psychic life emerges. This decline\
\ refers to the cerebral aspect of inhibitory control and social cognition, showing\
\ that the emergence of aggressive behaviors is related to the biological component.\
\ The declining reason turns its demands and needs into instinctual acts and more\
\ basic reflexes, and can produce a continuous imbalance in the expression between\
\ the instincts of preservation and aggression.\n\n Aggressiveness can be triggered\
\ by situations of frustration, when they do not get what they want, when they\
\ are afraid or consider some humiliating situation, when they are exposed to\
\ environmental overstimulation or feel any physical pain or side effects from\
\ medication."
- text: "Neurosurgery is of great interest to historians of medicine and technology\
\ because it is relatively young, because it developed in an era of journals and\
\ publications, because lines and traditions of training and mentorship are relatively\
\ clear, and because the technologies that enabled the evolution of the profession\
\ and acted as inflection points in the emergence of certain surgical approaches\
\ and procedures are at once well documented and remarkably unambiguous. To the\
\ extent that is the case for neurosurgery as a whole, it is even more so for\
\ surgery of the skull base.\n\n To trace the history of skull base surgery along\
\ its full expanse is to begin with Horsley and pituitary tumors (unless one wants\
\ to start even earlier with the treatment of trigeminal neuralgia); to move to\
\ Cushing's work in the same arena (but also that of many others as well); to\
\ emphasize the impact of microsurgical techniques and new imaging modalities;\
\ to outline once radically innovative, but now widely practiced anatomical approaches\
\ to the skull base; to emphasize the importance of team approaches; to discuss\
\ emerging therapeutic strategy as well as instrumentation and techniques; to\
\ acknowledge the importance of advances in neuroanesthesia and the medical and\
\ perioperative care of the neurosurgical patient; and to recognize the contributions\
\ of the many individuals who, over the past 25 years, have added to and furthered\
\ the field in these and other ways.\n\n It is not hard to point to leading individuals\
\ and important techniques. It is perhaps more difficult to frame them in a meaningful\
\ historical perspective because the work has occurred relatively recently, in\
\ the time frame historians call \"near history.\" Difficulties arise from both\
\ an evaluative and a nosological standpoint. For example, from an evaluative\
\ standpoint, how does one stratify the relative importance of corticosteroids,\
\ osmotic diuretics, and CSF drainage techniques and technologies in the control\
\ of intracranial pressure and the facilitation of exposure for base of skull\
\ surgery? How does one think about the idea of hybrid surgery and stereotactic\
\ radiation? What will be the long-term view of anatomical approaches to giant\
\ basilar aneurysms in the light of endovascular surgery? Have we reached a tipping\
\ point in the management of vestibular schwannomas, given the availability of\
\ and the outcomes associated with stereotactic radiosurgery?\n\n From a nosological\
\ standpoint, should we think about base of skull surgery in terms of anatomical\
\ approaches? One textbook that does just that starts with subfrontal approaches\
\ and then moves around the calvaria and down to the petrous and temporal region\
\ in a Cook's tour of exposure, in the tradition of Henry's Extensile Exposure\
\ and comparable surgical classics. 1, 6 Other publications have explored a set\
\ of technologies. 5, 7, 10 Another focuses on the contribution of great men.\
\ 9 Many surgeons have written about specific particular pathologies at the skull\
\ base.\n\n Introduction their colleagues write about the premodern period. Elhadi\
\ and colleagues also comment on the introduction of radiography in early neurosurgery.\
\ Gross and Grossi and their colleagues concentrate on petrosal approaches; Schmitt\
\ and Jane on third ventriculostomy; and Chittiboina and colleagues on the history\
\ of a very simple but ubiquitous instrument, the Freer elevator, and its inventor.\
\ In contrast to the more comprehensive overviews written by Goodrich, Donald,\
\ and others, these essays concentrate on selected details. While it is important\
\ not to miss the forest for the trees, sometimes the trees are worth studying\
\ no less than the forest. \n\n The authors report no conflict of interest."
- text: 'How do neuromediators contribute to the pathogenesis of pruritus in AD?
'
- text: "Pericardial effusion (PE) is a life-threatening condition, as accumulation\
\ of fluid in the pericardial sac can lead to cardiac tamponade and fatal shock.\
\ 1, 2 PE is often associated with an underlying disease or condition, and the\
\ causes can vary widely. 3, 4 Pericardiocentesis performed by needle (with or\
\ without echoguidance), and various surgical procedures (including subxiphoid\
\ pericardial tube drainage, pericardial window performed through a left anterior\
\ thoracotomy, or video-assisted thoracoscopic surgery) can alleviate PE. 5 Our\
\ retrospective clinical experiences of treating PE with subxiphoid pericardiostomy\
\ are presented in this study.\n\n We reviewed the medical records of patients\
\ who underwent subxiphoid pericardiostomy to treat persistent symptomatic PE\
\ in our clinic between 1990 and 2000. Echocardiography (ECG) was used to diagnose\
\ PE and N Becit, A Özyazicioglu, M Ceviz et al.\n\n determine the size of the\
\ effusion. A diastolic echo-free space of < 10 mm between the left ventricular\
\ posterior wall and pericardium was determined as mild PE, 10 -20 mm as moderate,\
\ and > 20 mm as severe PE. Patients with cardiac tamponade and/or moderate to\
\ severe PE were treated by subxiphoid pericardiostomy and tube drainage.\n\n\
\ Some patients with pre-operative tuberculosis were treated with an adult four-drug\
\ regimen (isoniazid, 300 mg/day and rifampin, 600 mg/day for 12 months, streptomycin,\
\ 1 g/day for 2 months, and pyrazinamide, 2 g/day for 3 months) preoperatively.\
\ The effusion was drained after a 3-week course of anti-tuberculosis therapy.\
\ In these, and patients diagnosed with tuberculous pericarditis, the tuberculosis\
\ therapy regimen was given for 12 months post-operatively.\n\n The technique\
\ used for subxiphoid pericardiostomy (described previously 3 ) was performed\
\ under general anaesthetic, or local anaesthesia and sedation. General anaesthesia\
\ was preferred in children and was induced with 1.5 mg/kg ketamine. Neuromuscular\
\ block was achieved with 0.1 mg/kg vecuronium, and anaesthesia maintained with\
\ 60% N 2 O, 40% O 2 and 0.5 -1.0% isoflurane. Local anaesthetic (2% lidocaine\
\ solution) was injected into the dermal and subdermal layers, and sedation and\
\ analgesia was provided by 1 mg/kg ketamine intravenously. A piece of anterior\
\ pericardium, approximately 2 -4 cm in diameter, was excised under direct vision\
\ and submitted for histopathological analysis. The pericardial cavity was decompressed\
\ and fluid samples were collected for culture and cytological analysis. To prevent\
\ acute cardiac dilatation during decompression of the pericardial cavity, intravenous\
\ digoxin was administered and the pericardial cavity was decompressed gradually.\n\
\n The pericardial cavity was examined under direct vision and/or by digital examination\
\ to detect any tumour or adhesions. Gentle digital lysis of adhesions and opening\
\ of loculations were performed as needed, to enhance satisfactory drainage. A\
\ soft chest tube was placed in the pericardial cavity, lateral to the right ventricle,\
\ after pericardiotomy for post-operative drainage. It was connected to an underwater\
\ sealed system, and was removed when fluid drainage ceased.\n\n Patients with\
\ mild haemorrhagic effusion and cardiac tamponade, due to trauma or invasive\
\ cardiac interventions, were considered haemodynamically unstable and unsuitable\
\ for surgical subxiphoid pericardiostomy, even under local anaesthetic. These\
\ patients underwent pericardiocentesis in the intensive care unit, which provided\
\ immediate relief. Subxiphoid pericardiostomy was performed later if haemorrhagic\
\ PE persisted. Patients were followed, with physical examinations and ECG, in\
\ the outpatient clinic for at least 1 year.\n\n Numerical results are given as\
\ mean ± SD. Fisher's exact test was used to compare proportions between groups\
\ (comparison of the rates of recurrence and constriction between patient groups\
\ with uraemic pericarditis, tuberculous pericarditis and non-tuberculous bacterial\
\ pericarditis). The McNemar test was used for comparison of proportions within\
\ one group (to assess the significance of rates of recurrence and constriction\
\ in patients with tuberculous pericarditis). Statistical differences were considered\
\ significant if P < 0.05."
- text: "Henry M. Blumberg, MD In this issue of Infection Control and Hospital Epidemiology,\
\ a potpourri of tuberculosis (TB)-related articles are being published. 1-7 Tuberculosis-related\
\ issues have been an important focus for the past decade for those in infection\
\ control and hospital epidemiology, especially in urban areas where the large\
\ majority of TB cases occur, 8 but also, because of federal regulations, for\
\ those in low-endemic areas or areas where no TB cases occur (approximately half\
\ of the counties in the United States).\n\n The resurgence of TB beginning in\
\ the mid-1980s in the United States (in large part, due to failure and underfunding\
\ of the public health infrastructure and to the epidemic of human immunodeficiency\
\ virus [HIV] infection) and outbreaks of TB have highlighted the risk of nosocomial\
\ transmission of TB. 9,10 These outbreaks affected both healthcare workers (HCWs)\
\ and patients. The fact that outbreaks in New York and Miami, among others, involved\
\ multidrug-resistant (MDR) strains that were associated with high morbidity and\
\ mortality among HIV-infected individuals punctuated the importance of effective\
\ TB infection control measures. Commingling of patients with unsuspected TB and\
\ those who were quite immunosuppressed led to amplification of nosocomial transmission.\
\ A decade ago, few institutions were prepared for the changing epidemiology of\
\ TB.\n\n Several recent studies have demonstrated that infection control measures\
\ are effective in preventing nosocomial transmission of TB, 11-13 and two reports\
\ in this issue, from institutions in Kentucky 1 and New York, 2 provide additional\
\ data on decreases in HCW tuberculin skin-test (TST) conversions following implementation\
\ of TB infection control measures. In most studies, multiple interventions (administrative\
\ controls, environmental controls, and respiratory protection) were initiated\
\ at approximately the same time, making it more difficult to identify the most\
\ crucial aspect of the program. The importance of TB infection control measures\
\ in contributing to the decline in TB cases in the United States, as well as\
\ the reduction in the number of MDR-TB cases in New York City, often has been\
\ understated. Increased federal funding for TB control activities and expansion\
\ of directly observed therapy clearly are important in efforts to prevent TB,\
\ but the initial decline in TB cases and in MDR TB in the United States beginning\
\ in 1993 likely was due, in large part, to interruption of TB transmission within\
\ healthcare facilities. Unfortunately, increased funding for TB control in the\
\ United States in the last 5 years often has not trickled down to inner-city\
\ hospitals, which frequently are the first line in the battle against TB.\n\n\
\ From our experience and that of others, it appears clear that administrative\
\ controls are the most important component of a TB infection control program.\
\ At Grady Memorial Hospital in Atlanta, we were able to decrease TB exposure\
\ episodes markedly and concomitantly to decrease HCW TST conversions after implementing\
\ an expanded respiratory isolation policy. 11 We continue to isolate appropriately\
\ approximately 95% of those subsequently diagnosed with TB. We were able to reduce\
\ TST conversion rates markedly during a period of time in which we had isolation\
\ rooms that would be considered suboptimal by Centers for Disease Control and\
\ Prevention (CDC) guidelines 14 (rooms that were under negative pressure but\
\ had less than six air changes per hour) and were using submicron masks. Implementation\
\ of better-engineered isolation rooms (>12 air changes per hour) with the completion\
\ of renovations to the hospital may have put us in better compliance with regulatory\
\ agencies and made the staff feel more secure, but has had little impact on further\
\ reducing low rates of HCW TST conversions. In addition, the termination of outbreaks\
\ and reduction of TST conversion rates at several institutions took place before\
\ introduction of National Institute for Occupational Safety and Health-approved\
\ masks and fit testing. 2,15,16 United States healthcare institutions are required\
\ by regulatory mandates to develop a \"respiratory protection program\" (including\
\ fit testing), which can be time-consuming, expensive, and logistically difficult.\
\ 17 Data published to date suggest that the impact of formal fit testing on proper\
\ mask use is small. 18 These federal mandates also have turned some well-meaning\
\ (trying to comply fully with the Occupational Safety and Health Administration\
\ [OSHA] regulations) but misguided infection control practitioners into \"facial\
\ hair police.\" These types of processes divert time, effort, and resources away\
\ from what truly is effective in preventing nosocomial transmission of TB, as\
\ well as from other important infection control activities such as preventing\
\ nosocomial bloodstream infections or transmission of highly resistant pathogens\
\ such as vancomycin-resistant Enterococcus or preparing for the onslaught of\
\ vancomycin-resistant Staphylococcus aureus. At a time when US healthcare institutions\
\ are under enormous pressure due to healthcare reform, market forces, and managed\
\ care, it is essential that federal regulatory agencies look carefully at scientific\
\ data when issuing regulations."
datasets:
- tomaarsen/miriad-4.4M-split
pipeline_tag: feature-extraction
library_name: sentence-transformers
metrics:
- dot_accuracy@1
- dot_accuracy@3
- dot_accuracy@5
- dot_accuracy@10
- dot_precision@1
- dot_precision@3
- dot_precision@5
- dot_precision@10
- dot_recall@1
- dot_recall@3
- dot_recall@5
- dot_recall@10
- dot_ndcg@10
- dot_mrr@10
- dot_map@100
- query_active_dims
- query_sparsity_ratio
- corpus_active_dims
- corpus_sparsity_ratio
co2_eq_emissions:
emissions: 455.92134242362687
energy_consumed: 1.1729328442447604
source: codecarbon
training_type: fine-tuning
on_cloud: false
cpu_model: 13th Gen Intel(R) Core(TM) i7-13700K
ram_total_size: 31.777088165283203
hours_used: 3.578
hardware_used: 1 x NVIDIA GeForce RTX 3090
model-index:
- name: ModernBERT-base trained on MIRIAD question-passage tuples
results:
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: miriad eval
type: miriad_eval
metrics:
- type: dot_accuracy@1
value: 0.7888
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.9004
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.931
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.9578
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.7888
name: Dot Precision@1
- type: dot_precision@3
value: 0.3001333333333333
name: Dot Precision@3
- type: dot_precision@5
value: 0.18620000000000003
name: Dot Precision@5
- type: dot_precision@10
value: 0.09578000000000002
name: Dot Precision@10
- type: dot_recall@1
value: 0.7888
name: Dot Recall@1
- type: dot_recall@3
value: 0.9004
name: Dot Recall@3
- type: dot_recall@5
value: 0.931
name: Dot Recall@5
- type: dot_recall@10
value: 0.9578
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.8763839825807856
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.8499207142857116
name: Dot Mrr@10
- type: dot_map@100
value: 0.8516164229772919
name: Dot Map@100
- type: query_active_dims
value: 24.369199752807617
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9995161769426459
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 186.66419982910156
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9962939922206738
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: miriad test
type: miriad_test
metrics:
- type: dot_accuracy@1
value: 0.7948
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.9037
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.933
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.9601
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.7948
name: Dot Precision@1
- type: dot_precision@3
value: 0.3012333333333333
name: Dot Precision@3
- type: dot_precision@5
value: 0.1866
name: Dot Precision@5
- type: dot_precision@10
value: 0.09601000000000001
name: Dot Precision@10
- type: dot_recall@1
value: 0.7948
name: Dot Recall@1
- type: dot_recall@3
value: 0.9037
name: Dot Recall@3
- type: dot_recall@5
value: 0.933
name: Dot Recall@5
- type: dot_recall@10
value: 0.9601
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.8810115635669735
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.8552847619047607
name: Dot Mrr@10
- type: dot_map@100
value: 0.8569319013421058
name: Dot Map@100
- type: query_active_dims
value: 24.236299514770508
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9995188155274227
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 188.4040069580078
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9962594503065834
name: Corpus Sparsity Ratio
---
# ModernBERT-base trained on MIRIAD question-passage tuples
This is a [SPLADE Sparse Encoder](https://www.sbert.net/docs/sparse_encoder/usage/usage.html) model finetuned from [answerdotai/ModernBERT-base](https://huggingface.co/answerdotai/ModernBERT-base) on the [miriad-4.4M-split](https://huggingface.co/datasets/tomaarsen/miriad-4.4M-split) dataset using the [sentence-transformers](https://www.SBERT.net) library. It maps sentences & paragraphs to a 50368-dimensional sparse vector space and can be used for semantic search and sparse retrieval.
## Model Details
### Model Description
- **Model Type:** SPLADE Sparse Encoder
- **Base model:** [answerdotai/ModernBERT-base](https://huggingface.co/answerdotai/ModernBERT-base) <!-- at revision 8949b909ec900327062f0ebf497f51aef5e6f0c8 -->
- **Maximum Sequence Length:** 8192 tokens
- **Output Dimensionality:** 50368 dimensions
- **Similarity Function:** Dot Product
- **Training Dataset:**
    - [miriad-4.4M-split](https://huggingface.co/datasets/tomaarsen/miriad-4.4M-split)
- **Language:** en
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Documentation:** [Sparse Encoder Documentation](https://www.sbert.net/docs/sparse_encoder/usage/usage.html)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sparse Encoders on Hugging Face](https://huggingface.co/models?library=sentence-transformers&other=sparse-encoder)
### Full Model Architecture
```
SparseEncoder(
(0): MLMTransformer({'max_seq_length': 8192, 'do_lower_case': False}) with MLMTransformer model: ModernBertForMaskedLM
(1): SpladePooling({'pooling_strategy': 'max', 'activation_function': 'relu', 'word_embedding_dimension': 50368})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SparseEncoder
# Download from the 🤗 Hub
model = SparseEncoder("tomaarsen/splade-modernbert-base-miriad-1e-5")
# Run inference
queries = [
"How have infection control measures been effective in preventing nosocomial transmission of TB?\n",
]
documents = [
'Henry M. Blumberg, MD In this issue of Infection Control and Hospital Epidemiology, a potpourri of tuberculosis (TB)-related articles are being published. 1-7 Tuberculosisrelated issues have been an important focus for the past decade for those in infection control and hospital epidemiology, especially in urban areas where the large majority of TB cases occur, 8 but also, because of federal regulations, for those in low-endemic areas or areas where no TB cases occur (approximately half of the counties in the United States).\n\n The resurgence of TB beginning in the mid1980s in the United States (in large part, due to failure and underfunding of the public health infrastructure and to the epidemic of human immunodeficiency virus [HIV] infection) and outbreaks of TB have highlighted the risk of nosocomial transmission of TB. 9,10 These outbreaks affected both healthcare workers (HCWs) and patients. The fact that outbreaks in New York and Miami, among others, involved multidrug-resistant (MDR) strains that were associated with high morbidity and mortality among HIV-infected individuals punctuated the importance of effective TB infection control measures. Commingling of patients with unsuspected TB and those who were quite immunosuppressed led to amplification of nosocomial transmission. A decade ago, few institutions were prepared for the changing epidemiology of TB.\n\n Several recent studies have demonstrated that infection control measures are effective in preventing nosocomial transmission of TB, 11-13 and two reports in this issue, from institutions in Kentucky 1 and New York, 2 provide additional data on decreases in HCW tuberculin skin-test (TST) conversions following implementation of TB infection control measures. In most studies, multiple interventions (administrative controls, environmental controls, and respiratory protection) were initiated at approximately the same time, making it more difficult to identify the most crucial aspect of the program. 
The importance of TB infection control measures in contributing to the decline in TB cases in the United States, as well as the reduction in the number of MDR-TB cases in New York City, often has been understated. Increased federal funding for TB control activities and expansion of directly observed therapy clearly are important in efforts to prevent TB, but the initial decline in TB cases and in MDR TB in the United States beginning in 1993 likely was due, in large part, to interruption of TB transmission within healthcare facilities. Unfortunately, increased funding for TB control in the United States in the last 5 years often has not trickled down to inner-city hospitals, which frequently are the first line in the battle against TB.\n\n From our experience and that of others, it appears clear that administrative controls are the most important component of a TB infection control program. At Grady Memorial Hospital in Atlanta, we were able to decrease TB exposure episodes markedly and concomitantly to decrease HCW TST conversions after implementing an expanded respiratory isolation policy. 11 We continue to isolate appropriately approximately 95% of those subsequently diagnosed with TB. We were able to reduce TST conver-sion rates markedly during a period of time in which we had isolation rooms that would be considered suboptimal by Centers for Disease Control and Prevention (CDC) guidelines 14 (rooms that were under negative pressure but had less than six air changes per hour) and were using submicron masks. Implementation of better-engineered isolation rooms (>12 air changes per hour) with the completion of renovations to the hospital may have put us in better compliance with regulatory agencies and made the staff feel more secure, but has had little impact on further reducing low rates of HCW TST conversions. 
In addition, the termination of outbreaks and reduction of TST conversion rates at several institutions took place before introduction of National Institute for Occupational Safety and Health-approved masks and fit testing. 2,15,16 United States healthcare institutions are required by regulatory mandates to develop a "respiratory protection program" (including fit testing), which can be time-consuming, expensive, and logistically difficult. 17 Data published to date suggest that the impact of formal fit testing on proper mask use is small. 18 These federal mandates also have turned some well-meaning (trying to comply fully with the Occupational Safety and Health Administration [OSHA] regulations) but misguided infection control practitioners into "facial hair police." These types of processes divert time, effort, and resources away from what truly is effective in preventing nosocomial transmission of TB, as well as from other important infection control activities such as preventing nosocomial bloodstream infections or transmission of highly resistant pathogens such as vancomycin-resistant Enterococcus or preparing for the onslaught of vancomycin-resistant Staphylococcus aureus. At a time when US healthcare institutions are under enormous pressure due to healthcare reform, market forces, and managed care, it is essential that federal regulatory agencies look carefully at scientific data when issuing regulations.',
'Drug Reaction with Eosinophilia and Systemic Symptoms (DRESS) syndrome is a severe and potentially life-threatening hypersensitivity reaction caused by exposure to certain medications (Phillips et al., 2011; Bocquet et al., 1996) . It is extremely heterogeneous in its manifestation but has characteristic delayed-onset cutaneous and multisystem features with a protracted natural history. The reaction typically starts with a fever, followed by widespread skin eruption of variable nature. This progresses to inflammation of internal organs such as hepatitis, pneumonitis, myocarditis and nephritis, and haematological abnormalities including eosinophilia and atypical lymphocytosis (Kardaun et al., 2013; Cho et al., 2017) .\n\n DRESS syndrome is most commonly classified according to the international scoring system developed by the RegiSCAR group (Kardaun et al., 2013) . RegiSCAR accurately defines the syndrome by considering the major manifestations, with each feature scored between −1 and 2, and 9 being the maximum total number of points. According to this classification, a score of < 2 means no case, 2-3 means possible case, 4-5 means probable case, and 6 or above means definite DRESS syndrome. Table 1 gives an overview of the RegiSCAR scoring system. DRESS syndrome usually develops 2 to 6 weeks after exposure to the causative drug, with resolution of symptoms after drug withdrawal in the majority of cases (Husain et al., 2013a) . Some patients require supportive treatment with corticosteroids, although there is a lack of evidence surrounding the most effective dose, route and duration of the therapy (Adwan, 2017) . Although extremely rare, with an estimated population risk of between 1 and 10 in 10,000 drug exposures, it is significant due to its high mortality rate, at around 10% (Tas and The pathogenesis of DRESS syndrome remains largely unknown. 
Current evidence suggests that patients may be genetically predisposed to this form of hypersensitivity, with a superimposed risk resulting from Human Herpes Virus (HHV) exposure and subsequent immune reactivation (Cho et al., 2017; Husain et al., 2013a) . In fact, the serological detection of HHV-6 has even been proposed as an additional diagnostic marker for DRESS syndrome (Shiohara et al., 2007) . Other potential risk factors identified are family history (Sullivan and Shear, 2001; Pereira De Silva et al., 2011) and concomitant drug use, particularly antibiotics . DRESS syndrome appears to occur in patients of any age, with patient demographics from several reviews finding age ranges between 6 and 89 years (Picard et al., 2010; Kano et al., 2015; Cacoub et al., 2013) . DRESS syndrome was first described as an adverse reaction to antiepileptic therapy, but has since been recognised as a complication of an extremely wide range of medications (Adwan, 2017) . In rheumatology, it has been classically associated with allopurinol and sulfasalazine, but has also been documented in association with many other drugs including leflunomide, hydroxychloroquine, febuxostat and NSAIDs (Adwan, 2017) . Recent evidence has also identified a significant risk of DRESS syndrome with strontium ranelate use (Cacoub et al., 2013) . Thus far, that is the only anti-osteoporotic drug associated with DRESS syndrome, although there are various cases of other adverse cutaneous reactions linked to anti-osteoporotic medications, ranging from benign maculopapular eruption to Stevens-Johnson syndrome (SJS) and Toxic Epidermal Necrolysis (TEN) . 
Denosumab, an antiresorptive RANK ligand (RANKL) inhibitor licensed for osteoporosis, is currently known to be associated with some dermatological manifestations including dermatitis, eczema, pruritus and, less commonly, cellulitis (Prolia, n.d.).\n\n We hereby describe the first documented case of DRESS syndrome associated with denosumab treatment.\n\n The patient is a 76-year old female with osteoporosis and a background of alcoholic fatty liver disease and lower limb venous insufficiency. Osteoporosis was first diagnosed in 2003 and treated with risedronate, calcium and vitamin D, until 2006. While on this treatment, the patient sustained T12 and L3 fractures, the latter treated with kyphoplasty, and was therefore deemed a non-responder to risedronate.',
"The regulation of these events is known to go awry in certain pathologies especially in diseases associated with neurodegeneration. Mitochondrial fission helps to enhance the number of mitochondria, which can be efficiently distributed to each corner of neuronal cells and thus helps them to maintain their energy demands. Mitochondrial fission is highly essential during the periods of energy starvation to produce new, efficient mitochondrial energy generating systems. However, enhanced fission associated with bioenergetic crisis causes BAX foci formation on mitochondrial membrane and thus causes mitochondrial outer membrane permeabilization (MOMP), releasing cytochrome c and other pro apoptotic mediators into cytosol, results in apoptosis [93] . Impairment in the mitochondrial dynamics has also been observed in case of inflammatory neuropathies and oxaliplatin induced neuropathy [94] . Excessive nitric oxide is known to cause s-nitrosylation of dynamin related protein-1 (Drp-1), and increases the mitochondrial fission [95, 96] . Tumor necrosis factor-α (TNF-α) reported to inhibit the kinensin 1 protein, and thus impairs trafficking by halting mitochondrial movement along axons [97] . In addition to impaired dynamics, aggregates of abnormal shaped, damaged mitochondria are responsible for aberrant mitochondrial trafficking, which contributes to axonal degeneration observed in various peripheral neuropathies [81] .\n\n Autophagy is the discerning cellular catabolic process responsible for recycling the damaged proteins/ organelles in the cells [98] . Mitophagy is a selective autophagic process involved in recycling of damaged mitochondria and helps in supplying the constituents for mitochondrial biogenesis [99] . Excessive accumulation and impaired clearance of dysfunctional mitochondria are known to be observed in various disorders associated with oxidative stress [100] . 
Oxidative damage to Atg 4, a key component involved in mitophagy causes impaired autophagosome formation and clearance of damaged mitochondria [101] . Loss in the function of molecular chaperons and associated accumulation of damaged proteins are known to be involved in various peripheral neuropathies including trauma induced neuropathy [102, 103] . A model of demyelinating neuropathy corresponds to the accumulation of improperly folded myelin protein PMP-22 is also being observed recently [104, 105] .\n\n Mitochondrial dysfunction and associated disturbances are well connected to neuroinflammatory changes that occur in various neurodegenerative diseases [106] . Dysfunctional mitochondria are also implicated in several pathologies such as cardiovascular and neurodegenerative diseases. Several mitochondrial toxins have been found to inhibit the respiration in microglial cells and also inhibit IL-4 induced alternative anti inflammatory response and thus potentiates neuroinflammation [107] . Mitochondrial ROS are well identified to be involved in several inflammatory pathways such as NF-κB, MAPK activation [108] . Similarly, the pro inflammatory mediators released as a result of an inflammatory episode found to be interfere with the functioning of the mitochondrial electron transport chain and thus compromise ATP production [109] . TNF-α is known to inhibit the complex I, IV of ETC and decreases energy production. Nitric oxide (NO) is a potent inhibitor of cytochrome c oxidase (complex IV) and similarly IL-6 is also known to enhance mitochondrial generation of superoxide [110] . Mitochondrial dysfunction initiates inflammation by increased formation of complexes of damaged mitochondrial parts and cytoplasmic pattern recognition receptors (PRR's). The resulting inflammasome directed activation of interleukin-1β production, which starts an immune response and leads to Fig. (4) . 
Mitotoxicity in peripheral neuropathies: Various pathophysiological insults like hyperglycemic, chemotherapeutic and traumatic injury to the peripheral nerves results in mitochondrial dysfunction through enhanced generation of ROS induced biomolecular damage and bioenergetic crisis. Following the nerve injury accumulation of mitochondria occurs resulting in the release of mtDNA & formyl peptides into circulation which acts as Death associated molecular patterns (DAMP's). These are recognized by immune cells as foreign bodies and can elicit a local immune/inflammatory response. Interaction between inflammatory mediators and structural proteins involved in mitochondrial trafficking will cause impairment in mitochondrial motility. Oxidative stress induced damage to the mt proteins like Atg4, Parkin etc cause insufficient mitophagy. Excess nitrosative stress also results in excessive mt fission associated with apoptosis. In addition, mtDNA damage impairs its transcription and reduces mitochondrial biogenesis. Ca 2+ dyshomeostasis, loss in mitochondrial potential and bioenergetic crisis cause neuronal death via apoptosis/necrosis. All these modifications cause defects in ultra structure, physiology and trafficking of mitochondria resulting in loss of neuronal function producing peripheral neuropathy.",
]
query_embeddings = model.encode_query(queries)
document_embeddings = model.encode_document(documents)
print(query_embeddings.shape, document_embeddings.shape)
# [1, 50368] [3, 50368]
# Get the similarity scores for the embeddings
similarities = model.similarity(query_embeddings, document_embeddings)
print(similarities)
# tensor([[28.0378, 0.8577, 0.3791]])
```
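The similarity above is a plain dot product over the vocabulary-sized sparse vectors, so only dimensions (tokens) that are active in both the query and the document contribute to the score. A minimal toy sketch with made-up 6-dimensional vectors (illustrative only, not real model output):

```python
import numpy as np

# Toy sparse embeddings over a tiny 6-token "vocabulary"; the real model
# produces 50368-dimensional vectors.
query = np.array([0.0, 2.0, 0.0, 1.5, 0.0, 0.0])
doc = np.array([0.5, 1.0, 0.0, 2.0, 0.0, 0.3])

# Dot-product similarity: only dimensions active in both vectors contribute.
score = float(query @ doc)
print(score)  # 2.0*1.0 + 1.5*2.0 = 5.0

# The overlapping active dimensions carry the whole score.
shared = np.nonzero(query * doc)[0]
print(shared)  # [1 3]
```

This is why the low active-dimension counts reported below matter: scoring cost scales with the number of overlapping active dimensions, not with the full 50368-dimensional space.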
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Sparse Information Retrieval
* Datasets: `miriad_eval` and `miriad_test`
* Evaluated with [<code>SparseInformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseInformationRetrievalEvaluator)
| Metric | miriad_eval | miriad_test |
|:----------------------|:------------|:------------|
| dot_accuracy@1 | 0.7888 | 0.7948 |
| dot_accuracy@3 | 0.9004 | 0.9037 |
| dot_accuracy@5 | 0.931 | 0.933 |
| dot_accuracy@10 | 0.9578 | 0.9601 |
| dot_precision@1 | 0.7888 | 0.7948 |
| dot_precision@3 | 0.3001 | 0.3012 |
| dot_precision@5 | 0.1862 | 0.1866 |
| dot_precision@10 | 0.0958 | 0.096 |
| dot_recall@1 | 0.7888 | 0.7948 |
| dot_recall@3 | 0.9004 | 0.9037 |
| dot_recall@5 | 0.931 | 0.933 |
| dot_recall@10 | 0.9578 | 0.9601 |
| **dot_ndcg@10** | **0.8764** | **0.881** |
| dot_mrr@10 | 0.8499 | 0.8553 |
| dot_map@100 | 0.8516 | 0.8569 |
| query_active_dims | 24.3692 | 24.2363 |
| query_sparsity_ratio | 0.9995 | 0.9995 |
| corpus_active_dims | 186.6642 | 188.404 |
| corpus_sparsity_ratio | 0.9963 | 0.9963 |
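The sparsity ratios follow directly from the active-dimension counts: with a 50368-dimensional output space, the sparsity ratio is the fraction of dimensions that are zero. A quick sanity check against the eval-split numbers in the table above:

```python
VOCAB_SIZE = 50368  # output dimensionality of this model

def sparsity_ratio(active_dims: float) -> float:
    """Fraction of output dimensions that are inactive (zero)."""
    return 1.0 - active_dims / VOCAB_SIZE

# Reproduce the reported ratios from the reported active-dim counts (miriad_eval).
print(round(sparsity_ratio(24.3692), 4))   # 0.9995 (query)
print(round(sparsity_ratio(186.6642), 4))  # 0.9963 (corpus)
```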
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### miriad-4.4M-split
* Dataset: [miriad-4.4M-split](https://huggingface.co/datasets/tomaarsen/miriad-4.4M-split) at [596b9ab](https://huggingface.co/datasets/tomaarsen/miriad-4.4M-split/tree/596b9ab305d52cb73644ed5b5004957c7bfaae40)
* Size: 100,000 training samples
* Columns: <code>question</code> and <code>passage_text</code>
* Approximate statistics based on the first 1000 samples:
| | question | passage_text |
|:--------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 7 tokens</li><li>mean: 21.19 tokens</li><li>max: 60 tokens</li></ul> | <ul><li>min: 491 tokens</li><li>mean: 939.51 tokens</li><li>max: 1479 tokens</li></ul> |
* Samples:
| question | passage_text |
|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>What factors may contribute to increased pulmonary conduit durability in patients who undergo the Ross operation compared to those with right ventricular outflow tract obstruction?<br></code> | <code>I n 1966, Ross and Somerville 1 reported the first use of an aortic homograft to establish right ventricle-to-pulmonary artery continuity in a patient with tetralogy of Fallot and pulmonary atresia. Since that time, pulmonary position homografts have been used in a variety of right-sided congenital heart lesions. Actuarial 5-year homograft survivals for cryopreserved homografts are reported to range between 55% and 94%, with the shortest durability noted in patients less than 2 years of age. 4 Pulmonary position homografts also are used to replace pulmonary autografts explanted to repair left-sided outflow disease (the Ross operation). Several factors may be likely to favor increased pulmonary conduit durability in Ross patients compared with those with right ventricular outflow tract obstruction, including later age at operation (allowing for larger homografts), more normal pulmonary artery architecture, absence of severe right ventricular hypertrophy, and more natural positioning of ...</code> |
| <code>How does MCAM expression in hMSC affect the growth and maintenance of hematopoietic progenitors?</code> | <code>After culture in a 3-dimensional hydrogel-based matrix, which constitutes hypoxic conditions, MCAM expression is lost. Concordantly, Tormin et al. demonstrated that MCAM is down-regulated under hypoxic conditions. 10 Furthermore, it was shown by others and our group that oxygen tension causes selective modification of hematopoietic cell and mesenchymal stromal cell interactions in co-culture systems as well as influence HSPC metabolism. [44] [45] [46] Thus, the observed differences between Sharma et al. and our data in HSPC supporting capacity of hMSC are likely due to the different culture conditions used. Further studies are required to clarify the influence of hypoxia in our model system. Altogether these findings provide further evidence for the importance of MCAM in supporting HSPC. Furthermore, previous reports have shown that MCAM is down-regulated in MSC after several passages as well as during aging and differentiation. 19, 47 Interestingly, MCAM overexpression in hMSC enhance...</code> |
| <code>What is the relationship between Fanconi anemia and breast and ovarian cancer susceptibility genes?<br></code> | <code>( 31 ) , of which 5% -10 % may be caused by genetic factors ( 32 ) , up to half a million of these patients may be at risk of secondary hereditary neoplasms. The historic observation of twofold to fi vefold increased risks of cancers of the ovary, thyroid, and connective tissue after breast cancer ( 33 ) presaged the later syndromic association of these tumors with inherited mutations of BRCA1, BRCA2, PTEN, and p53 ( 16 ) . By far the largest cumulative risk of a secondary cancer in BRCA mutation carriers is associated with cancer in the contralateral breast, which may reach a risk of 29.5% at 10 years ( 34 ) . The Breast Cancer Linkage Consortium ( 35 , 36 ) also documented threefold to fi vefold increased risks of subsequent cancers of prostate, pancreas, gallbladder, stomach, skin (melanoma), and uterus in BRCA2 mutation carriers and twofold increased risks of prostate and pancreas cancer in BRCA1 mutation carriers; these results are based largely on self-reported family history inf...</code> |
* Loss: [<code>SpladeLoss</code>](https://sbert.net/docs/package_reference/sparse_encoder/losses.html#spladeloss) with these parameters:
```json
{
"loss": "SparseMultipleNegativesRankingLoss(scale=1.0, similarity_fct='dot_score')",
"lambda_corpus": 1e-05,
"lambda_query": 5e-05
}
```
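For reference, the configured objective combines the ranking loss with FLOPS-style sparsity regularizers on queries and documents, where `lambda_query` (5e-05) weights the query term and `lambda_corpus` (1e-05) weights the document term. A sketch of the standard SPLADE objective (see the linked SpladeLoss documentation for the exact implementation):

```latex
\mathcal{L} = \mathcal{L}_{\text{rank}}
  + \lambda_{q}\,\mathrm{FLOPS}(Q)
  + \lambda_{d}\,\mathrm{FLOPS}(D),
\qquad
\mathrm{FLOPS}(X) = \sum_{j=1}^{|V|} \left( \frac{1}{N} \sum_{i=1}^{N} w_{ij} \right)^{2}
```

Here \(w_{ij}\) is the (non-negative, post-ReLU) weight of vocabulary term \(j\) in example \(i\), so the regularizer penalizes terms that are active across many examples, pushing embeddings toward sparsity.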
### Evaluation Dataset
#### miriad-4.4M-split
* Dataset: [miriad-4.4M-split](https://huggingface.co/datasets/tomaarsen/miriad-4.4M-split) at [596b9ab](https://huggingface.co/datasets/tomaarsen/miriad-4.4M-split/tree/596b9ab305d52cb73644ed5b5004957c7bfaae40)
* Size: 1,000 evaluation samples
* Columns: <code>question</code> and <code>passage_text</code>
* Approximate statistics based on the first 1000 samples:
| | question | passage_text |
|:--------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 7 tokens</li><li>mean: 21.33 tokens</li><li>max: 63 tokens</li></ul> | <ul><li>min: 472 tokens</li><li>mean: 942.37 tokens</li><li>max: 1510 tokens</li></ul> |
* Samples:
| question | passage_text |
|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>What are some hereditary cancer syndromes that can result in various forms of cancer?<br></code> | <code>Hereditary Cancer Syndromes, including Hereditary Breast and Ovarian Cancer (HBOC) and Lynch Syndrome (LS), can result in various forms of cancer due to germline mutations in cancer predisposition genes. While the major contributory genes for these syndromes have been identified and well-studied (BRCA1/ BRCA2 for HBOC and MSH2/MSH6/MLH1/PMS2/ EPCAM for LS), there remains a large percentage of associated cancer cases that are negative for germline mutations in these genes, including 80% of women with a personal or family history of breast cancer who are negative for BRCA1/2 mutations [1] . Similarly, between 30 and 50% of families fulfill stringent criteria for LS and test negative for germline mismatch repair gene mutations [2] . Adding complexity to these disorders is the significant overlap in the spectrum of cancers observed between various hereditary cancer syndromes, including many cancer susceptibility syndromes. Some that contribute to elevated breast cancer risk include Li-Frau...</code> |
| <code>How do MAK-4 and MAK-5 exert their antioxidant properties?<br></code> | <code>Hybrid F1 mice were injected with urethane (300 mg/kg) at 8 days of age. A group was then put on a MAK-supplemented diet, another group was fed a standard pellet diet. At 36 weeks of age the mice were sacrificed and the livers examined for the presence of tumors mouse (Panel A) and for the number of nodules per mouse (Panel B) (* p < 0.05, ** P < 0.001). Statistical analysis was performed by Two Way ANOVA Test followed by Post Hoc Bonferroni analysis. <br><br> We than measured the influence of the MAK-4+5 combination on the expression of the three liver-specific connexins (cx26, cx32, and cx43). The level of cx26 expression was similar in all the groups of mice treated with the MAK-supplemented diet and in the control (Figure 4, Panel A) . A significant, time-dependent increase in cx32 was observed in the liver of all the groups of MAK treated mice compared to the normal diet-fed controls. Cx32 expression increased 2-fold after 1 week of treatment, and 3-to 4-fold at 3 months (Figure 4, Pane...</code> |
| <code>What are the primary indications for a decompressive craniectomy, and what role does neurocritical care play in determining the suitability of a patient for this procedure?</code> | <code>Decompressive craniectomy is a valid neurosurgical strategy now a day as an alternative to control an elevated intracranial pressure (ICP) and controlling the risk of uncal and/or subfalcine herniation, in refractory cases to the postural, ventilator, and pharmacological measures to control it. The neurocritical care and the ICP monitorization are key determinants to identify and postulate the inclusion criteria to consider a patient as candidate to this procedure, as it is always considered a rescue surgical technique. Head trauma and ischemic or hemorrhagic cerebrovascular disease with progressive deterioration due to mass effect are some of the cases that may require a decompressive craniectomy with its different variants. However, this procedure per se can have complications described in the postcraniectomy syndrome and may occur in short, medium, or even long term.<br><br> 1,2 The paradoxical herniation is a condition in which there is a deviation of the midline with mass effect, even t...</code> |
* Loss: [<code>SpladeLoss</code>](https://sbert.net/docs/package_reference/sparse_encoder/losses.html#spladeloss) with these parameters:
```json
{
"loss": "SparseMultipleNegativesRankingLoss(scale=1.0, similarity_fct='dot_score')",
"lambda_corpus": 1e-05,
"lambda_query": 5e-05
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 4
- `per_device_eval_batch_size`: 4
- `learning_rate`: 2e-05
- `num_train_epochs`: 1
- `warmup_ratio`: 0.1
- `fp16`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 4
- `per_device_eval_batch_size`: 4
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
- `router_mapping`: {}
- `learning_rate_mapping`: {}
</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss | miriad_eval_dot_ndcg@10 | miriad_test_dot_ndcg@10 |
|:-----:|:-----:|:-------------:|:---------------:|:-----------------------:|:-----------------------:|
| 0.032 | 800 | 1887.9358 | - | - | - |
| 0.064 | 1600 | 48.1618 | - | - | - |
| 0.096 | 2400 | 3.1051 | - | - | - |
| 0.128 | 3200 | 0.1624 | - | - | - |
| 0.16 | 4000 | 0.0549 | 0.0170 | 0.8610 | - |
| 0.192 | 4800 | 0.0196 | - | - | - |
| 0.224 | 5600 | 0.0188 | - | - | - |
| 0.256 | 6400 | 0.0135 | - | - | - |
| 0.288 | 7200 | 0.0135 | - | - | - |
| 0.32 | 8000 | 0.0064 | 0.0048 | 0.8576 | - |
| 0.352 | 8800 | 0.0154 | - | - | - |
| 0.384 | 9600 | 0.0101 | - | - | - |
| 0.416 | 10400 | 0.0072 | - | - | - |
| 0.448 | 11200 | 0.0094 | - | - | - |
| 0.48 | 12000 | 0.0187 | 0.0052 | 0.8111 | - |
| 0.512 | 12800 | 0.0079 | - | - | - |
| 0.544 | 13600 | 0.0052 | - | - | - |
| 0.576 | 14400 | 0.0115 | - | - | - |
| 0.608 | 15200 | 0.0065 | - | - | - |
| 0.64 | 16000 | 0.0088 | 0.0042 | 0.8218 | - |
| 0.672 | 16800 | 0.0083 | - | - | - |
| 0.704 | 17600 | 0.01 | - | - | - |
| 0.736 | 18400 | 0.0061 | - | - | - |
| 0.768 | 19200 | 0.0098 | - | - | - |
| 0.8 | 20000 | 0.0044 | 0.0033 | 0.8393 | - |
| 0.832 | 20800 | 0.0071 | - | - | - |
| 0.864 | 21600 | 0.0049 | - | - | - |
| 0.896 | 22400 | 0.002 | - | - | - |
| 0.928 | 23200 | 0.0059 | - | - | - |
| 0.96 | 24000 | 0.002 | 0.0011 | 0.8719 | - |
| 0.992 | 24800 | 0.0043 | - | - | - |
| -1 | -1 | - | - | 0.8764 | 0.8810 |
### Environmental Impact
Carbon emissions were measured using [CodeCarbon](https://github.com/mlco2/codecarbon).
- **Energy Consumed**: 1.173 kWh
- **Carbon Emitted**: 0.456 kg of CO2
- **Hours Used**: 3.578 hours
### Training Hardware
- **On Cloud**: No
- **GPU Model**: 1 x NVIDIA GeForce RTX 3090
- **CPU Model**: 13th Gen Intel(R) Core(TM) i7-13700K
- **RAM Size**: 31.78 GB
### Framework Versions
- Python: 3.11.6
- Sentence Transformers: 4.2.0.dev0
- Transformers: 4.52.4
- PyTorch: 2.6.0+cu124
- Accelerate: 1.5.1
- Datasets: 2.21.0
- Tokenizers: 0.21.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### SpladeLoss
```bibtex
@misc{formal2022distillationhardnegativesampling,
title={From Distillation to Hard Negative Sampling: Making Sparse Neural IR Models More Effective},
author={Thibault Formal and Carlos Lassance and Benjamin Piwowarski and Stéphane Clinchant},
year={2022},
eprint={2205.04733},
archivePrefix={arXiv},
primaryClass={cs.IR},
url={https://arxiv.org/abs/2205.04733},
}
```
#### SparseMultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
#### FlopsLoss
```bibtex
@article{paria2020minimizing,
title={Minimizing flops to learn efficient sparse representations},
  author={Paria, Biswajit and Yeh, Chih-Kuan and Yen, Ian EH and Xu, Ning and Ravikumar, Pradeep and P{\'o}czos, Barnab{\'a}s},
journal={arXiv preprint arXiv:2004.05665},
year={2020}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
Kie-Fells/kvte-victoria-flux-20dim | Kie-Fells | 2025-06-16T23:59:23Z | 8 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"fluxgym",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-06-12T00:43:22Z | ---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- fluxgym
widget:
- output:
url: sample/kvte-victoria-flux-20dim_000500_00_20250611180049.png
text: Kvte Victoria.
- output:
url: sample/kvte-victoria-flux-20dim_002400_00_20250611184219.png
text: Kvte Victoria.
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: Kvte Victoria
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# kvte-victoria-flux-20dim
# Use the 0000010 safetensors version. Fluxgym crashed before completion, but that version does its job
This is a niche LoRA of Kate Victoria: photographer, short story writer, model, and content creator across many platforms. Because she is reluctant to show her face, this LoRA took some time to piece together; the visible samples are looking pretty good TBH, not bad for a one-shot attempt. Keep your CFG and steps low.
THIS IS THE 20DIM WEIGHTED VERSION!!!
No trigger needed if you plug the LoRA into a workflow.
A Flux LoRA trained on a local computer with [Fluxgym](https://github.com/cocktailpeanut/fluxgym)
<Gallery />
## Trigger words
You should use `Kvte Victoria` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc. |
BootesVoid/cmbzhnl6i05pxrdqs9eafzxct_cmbzq2f7h06jtrdqsjoa45p25 | BootesVoid | 2025-06-16T23:53:31Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-06-16T23:53:30Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: LACEY
---
# Cmbzhnl6I05Pxrdqs9Eafzxct_Cmbzq2F7H06Jtrdqsjoa45P25
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `LACEY` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "LACEY",
"lora_weights": "https://huggingface.co/BootesVoid/cmbzhnl6i05pxrdqs9eafzxct_cmbzq2f7h06jtrdqsjoa45p25/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmbzhnl6i05pxrdqs9eafzxct_cmbzq2f7h06jtrdqsjoa45p25', weight_name='lora.safetensors')
image = pipeline('LACEY').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmbzhnl6i05pxrdqs9eafzxct_cmbzq2f7h06jtrdqsjoa45p25/discussions) to add images that show off what you’ve made with this LoRA.
|
JackyChunKit/SFT_lr1e-6_qwen3-8b_2375 | JackyChunKit | 2025-06-16T23:48:45Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T23:45:07Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
MinaMila/llama_instbase_LoRa_GermanCredit_ep5_55 | MinaMila | 2025-06-16T23:41:53Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-19T22:41:30Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ak837/t5-financial-metrics-extractor | ak837 | 2025-06-16T23:35:29Z | 0 | 0 | null | [
"safetensors",
"t5",
"text2text-generation",
"financial-nlp",
"en",
"license:apache-2.0",
"region:us"
] | text2text-generation | 2025-06-16T23:34:37Z | ---
tags:
- text2text-generation
- financial-nlp
- t5
language: en
license: apache-2.0
widget:
- text: "extract metrics: Show me Apple's revenue and gross margin"
example_title: "Revenue and Margin"
- text: "extract metrics: What's the P/E ratio and market cap for MSFT?"
example_title: "Ratios and Market Cap"
- text: "extract metrics: Get Tesla's free cash flow and debt to equity"
example_title: "Cash Flow and Leverage"
---
# T5 Financial Metrics Extractor
This model extracts financial metrics from natural language queries.
## Usage
```python
from transformers import pipeline
pipe = pipeline("text2text-generation", model="ak837/t5-financial-metrics-extractor")
# Important: Always prefix with "extract metrics: "
result = pipe("extract metrics: Show me Apple's revenue and gross margin")
print(result[0]['generated_text'])
# Output: ["revenue", "grossMargin"]
```
## Training
This model was fine-tuned on financial queries to extract relevant metrics in JSON array format.
## Metrics Supported
The model can extract various financial metrics including:
- Revenue, gross profit, net income
- Margins (gross, operating, net)
- Cash flow metrics
- Balance sheet items
- Financial ratios
- Growth rates
## Note
Always use the prefix `"extract metrics: "` before your query for best results.
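Since the model emits its answer as a JSON array string, the generated text can be parsed directly with the standard library. The string below is a hypothetical example of the model's output; the actual value comes from `result[0]['generated_text']` and depends on your query:

```python
import json

# Hypothetical generated_text from the pipeline above
raw_output = '["revenue", "grossMargin"]'

metrics = json.loads(raw_output)
print(metrics)  # ['revenue', 'grossMargin']
```

If the model ever produces malformed JSON for an unusual query, wrap the `json.loads` call in a `try`/`except json.JSONDecodeError` and fall back to an empty list.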
|
Sauron0019/DeepSeek-LLM-7B-Base-TagPrediction-Top5-Editorial | Sauron0019 | 2025-06-16T23:27:36Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama-factory",
"lora",
"generated_from_trainer",
"base_model:deepseek-ai/deepseek-llm-7b-base",
"base_model:adapter:deepseek-ai/deepseek-llm-7b-base",
"license:other",
"region:us"
] | null | 2025-06-16T23:26:52Z | ---
library_name: peft
license: other
base_model: deepseek-ai/deepseek-llm-7b-base
tags:
- llama-factory
- lora
- generated_from_trainer
model-index:
- name: train_2025-05-16-10-50-14_final
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_2025-05-16-10-50-14_final
This model is a fine-tuned version of [deepseek-ai/deepseek-llm-7b-base](https://huggingface.co/deepseek-ai/deepseek-llm-7b-base) on the top_5_training_dataset dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 200
- num_epochs: 5.0
- label_smoothing_factor: 0.05
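As a rough illustration of the schedule above (a single-cycle sketch: `cosine_with_restarts` may restart the cosine several times, and `total_steps` here is a hypothetical placeholder, since the real count depends on the dataset size), the learning rate warms up linearly for 200 steps and then decays along a cosine curve:

```python
import math

def lr_at_step(step, base_lr=3e-05, warmup_steps=200, total_steps=1000):
    # Linear warmup, then single-cycle cosine decay toward zero.
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))

print(lr_at_step(100))   # halfway through warmup: 1.5e-05
print(lr_at_step(200))   # peak: 3e-05
print(lr_at_step(1000))  # end of decay: ~0.0
```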
### Training results
### Framework versions
- PEFT 0.15.1
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1 |
gvo1112/task-11-google-gemma-2-2b-it | gvo1112 | 2025-06-16T23:26:51Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:google/gemma-2-2b-it",
"base_model:adapter:google/gemma-2-2b-it",
"region:us"
] | null | 2025-06-16T23:26:41Z | ---
base_model: google/gemma-2-2b-it
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.13.2 |
Sauron0019/Gemma-3-12B-TagPrediction-Top10-Editorial | Sauron0019 | 2025-06-16T23:24:58Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama-factory",
"lora",
"generated_from_trainer",
"base_model:google/gemma-3-12b-it",
"base_model:adapter:google/gemma-3-12b-it",
"license:other",
"region:us"
] | null | 2025-06-16T23:23:57Z | ---
library_name: peft
license: other
base_model: google/gemma-3-12b-it
tags:
- llama-factory
- lora
- generated_from_trainer
model-index:
- name: train_2025-06-07-17-55-57
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_2025-06-07-17-55-57
This model is a fine-tuned version of [google/gemma-3-12b-it](https://huggingface.co/google/gemma-3-12b-it) on the top_10_training_dataset and the top_10_validation_dataset datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 5
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 5
- total_train_batch_size: 25
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 150
- num_epochs: 4.0
- label_smoothing_factor: 0.05
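The `label_smoothing_factor` above softens one-hot targets toward a uniform distribution during training. A minimal sketch of the standard formula (illustrative only, not LLaMA-Factory code):

```python
def smooth_labels(one_hot, factor=0.05):
    """Label smoothing: y_smooth = (1 - factor) * y + factor / num_classes."""
    num_classes = len(one_hot)
    return [v * (1.0 - factor) + factor / num_classes for v in one_hot]

# With factor=0.05 and 4 classes, the true class gets 0.9625
# and each other class gets 0.0125 (still summing to 1).
print(smooth_labels([0.0, 0.0, 1.0, 0.0]))  # ≈ [0.0125, 0.0125, 0.9625, 0.0125]
```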
### Training results
### Framework versions
- PEFT 0.15.1
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1 |
MinaMila/llama_instbase_LoRa_GermanCredit_ep10_33 | MinaMila | 2025-06-16T23:17:02Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-21T20:46:47Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
MinaMila/llama_instbase_LoRa_GermanCredit_ep9_33 | MinaMila | 2025-06-16T23:15:33Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-21T20:40:24Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Datasmartly/nllb-tamazight-officiel-final | Datasmartly | 2025-06-16T23:08:38Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"m2m_100",
"text2text-generation",
"generated_from_trainer",
"base_model:facebook/nllb-200-3.3B",
"base_model:finetune:facebook/nllb-200-3.3B",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-06-16T12:37:40Z | ---
library_name: transformers
license: cc-by-nc-4.0
base_model: facebook/nllb-200-3.3B
tags:
- generated_from_trainer
model-index:
- name: nllb-tamazight-officiel-final
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nllb-tamazight-officiel-final
This model is a fine-tuned version of [facebook/nllb-200-3.3B](https://huggingface.co/facebook/nllb-200-3.3B) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- total_eval_batch_size: 16
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
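The reported total train batch size follows directly from the per-device batch size, gradient accumulation steps, and device count:

```python
# How the effective (total) train batch size above is derived.
per_device_batch_size = 2
gradient_accumulation_steps = 4
num_devices = 8

total_train_batch_size = per_device_batch_size * gradient_accumulation_steps * num_devices
print(total_train_batch_size)  # 64
```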
### Training results
### Framework versions
- Transformers 4.48.3
- Pytorch 2.1.0+cu121
- Datasets 3.6.0
- Tokenizers 0.21.1
|
icefog72/Ice0.130-16.06-RP | icefog72 | 2025-06-16T23:06:40Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2312.06795",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T22:44:32Z | ---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
---
# Ice0.130-16.06
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Model Breadcrumbs](https://arxiv.org/abs/2312.06795) merge method using H:\FModels\Mistral-7B-v0.2 as a base.
### Models Merged
The following models were included in the merge:
* H:\FModels\Ice0.104-13.04-RP
* H:\FModels\Ice0.125-29.05-RP
* F:\FModels\Ice0.128-15.06-RP
* G:\FModels\Ice0.80-10.04-RP-GRPO
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: F:\FModels\Ice0.128-15.06-RP
parameters:
weight: 0.5
- model: H:\FModels\Ice0.104-13.04-RP
parameters:
weight: 0.3
- model: G:\FModels\Ice0.80-10.04-RP-GRPO
parameters:
weight: 0.5
- model: H:\FModels\Ice0.125-29.05-RP
parameters:
weight: 0.7
merge_method: breadcrumbs
base_model: H:\FModels\Mistral-7B-v0.2
parameters:
lambda: 0.5
density: 0.9
gamma: 0.01
dtype: bfloat16
chat_template: "alpaca"
```
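As a rough intuition, here is a toy sketch in plain Python (not mergekit itself): weighted merging adds weighted "task vectors" — each finetuned model's weights minus the base weights — back onto the base. Breadcrumbs additionally sparsifies each task vector using the `density` and `gamma` parameters above, which this sketch omits.

```python
# Toy illustration of weighted task-vector merging on three scalar weights.
base = [1.0, 2.0, 3.0]
finetuned = {
    "model_a": [1.5, 2.0, 2.5],
    "model_b": [1.0, 3.0, 3.0],
}
weights = {"model_a": 0.5, "model_b": 0.3}

merged = [
    b + sum(weights[name] * (finetuned[name][i] - b) for name in finetuned)
    for i, b in enumerate(base)
]
print(merged)  # ≈ [1.25, 2.3, 2.75]
```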
|
AymanTarig/Llama-3.2-1B-FC-v0.3 | AymanTarig | 2025-06-16T23:04:36Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-01-22T15:14:06Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Nap/depth_anything_v2_vitg | Nap | 2025-06-16T22:57:57Z | 0 | 13 | diffusers | [
"diffusers",
"license:apache-2.0",
"region:us"
] | null | 2025-06-16T01:45:49Z | ---
license: apache-2.0
base_model:
- depth-anything/Depth-Anything-V2-Giant
library_name: diffusers
---
Depth Anything V2 Giant - 1.3B params - FP32 - Converted from .pth to .safetensors
The model was previously published under the Apache-2.0 license and later removed. See the commit in the official GitHub repo: https://github.com/DepthAnything/Depth-Anything-V2/commit/0a7e2b58a7e378c7863bd7486afc659c41f9ef99
A copy of the original .pth model is available in this Hugging Face repo: https://huggingface.co/likeabruh/depth_anything_v2_vitg/tree/main
This is simply the same available model in .safetensors format.
If you want to use it in ComfyUI, you can use Kijai's custom nodes (https://github.com/kijai/ComfyUI-DepthAnythingV2), select the model and it will be downloaded automatically.
You may get OOM errors using the giant model depending on your VRAM and the size of the image you're processing. In these cases, try reducing the input image size. I can get 1024x1024 depth maps just fine with 24GB VRAM (uses about 56% of available VRAM).
~~If you want to use it in ComfyUI, you have two options:~~
~~1. (Recommended) Use the .safetensors file with the modified version of Kijai's custom nodes (https://github.com/kijai/ComfyUI-DepthAnythingV2). Just replace the ComfyUI/custom_nodes/comfyui-depthanythingv2/nodes.py file with the nodes.py file in this repo and ensure depth_anything_v2_vitg_fp32.safetensors is in the ComfyUI/models/depthanything/ folder, as it will not be downloaded automatically.~~
~~2. Use depth_anything_v2_vitg.pth directly with Fannovel16's custom nodes (https://github.com/Fannovel16/comfyui_controlnet_aux). Use a node called Depth Anything V2 - Relative and select depth_anything_v2_vitg.pth. Ensure the file is in the folder ComfyUI/custom_nodes/comfyui_controlnet_aux/ckpts/depth-anything/Depth-Anything-V2-Giant/ folder, as it will not be downloaded automatically.~~
~~Kijai's nodes produce more detailed depth maps. However, you will likely get OOM using the gigant model depending on your VRAM and the size of the image you're processing. I can get 1024x1024 depth maps just fine with 24GB VRAM.~~
|
Josephinepassananti/sdxl-kamala_ft_dataset_512_shaded_0.05-bs1-ga4-steps1000-lr5e-7 | Josephinepassananti | 2025-06-16T22:57:04Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"diffusers-training",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"license:creativeml-openrail-m",
"autotrain_compatible",... | text-to-image | 2025-06-16T18:44:37Z | ---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: creativeml-openrail-m
inference: true
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers-training
- diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# Text-to-image finetuning - Josephinepassananti/sdxl-kamala_ft_dataset_512_shaded_0.05-bs1-ga4-steps1000-lr5e-7
This pipeline was finetuned from **stabilityai/stable-diffusion-xl-base-1.0** on an unspecified dataset. Below are some example images generated with the finetuned pipeline using the prompt "a photo of kamala harris":




Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Intended uses & limitations
#### How to use
```python
# A minimal sketch (untested): load the fine-tuned pipeline with diffusers.
from diffusers import StableDiffusionXLPipeline
import torch

pipeline = StableDiffusionXLPipeline.from_pretrained(
    "Josephinepassananti/sdxl-kamala_ft_dataset_512_shaded_0.05-bs1-ga4-steps1000-lr5e-7",
    torch_dtype=torch.float16,
).to("cuda")
image = pipeline(prompt="a photo of kamala harris").images[0]
image.save("example.png")
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
keras/mistral_7b_en | keras | 2025-06-16T22:24:21Z | 22 | 0 | keras-hub | [
"keras-hub",
"text-generation",
"text-conversation",
"en",
"license:apache-2.0",
"region:us"
] | text-generation | 2024-10-28T23:32:53Z | ---
library_name: keras-hub
license: apache-2.0
language:
- en
tags:
- text-generation
- text-conversation
pipeline_tag: text-generation
---
### Model Overview
Mistral is a set of large language models published by the Mistral AI team. Both pretrained and instruction-tuned models are available with 7 billion parameters. See the model card below for benchmarks, data sources, and intended use cases.
Both the weights and the Keras model code are released under the [Apache 2 License](https://github.com/keras-team/keras-hub/blob/master/LICENSE).
## Links
* [Mistral Quickstart Notebook](https://www.kaggle.com/code/laxmareddypatlolla/mistral-quickstart)
* [Mistral API Documentation](https://keras.io/api/keras_hub/models/mistral/)
* [Mistral Model Card](https://huggingface.co/mistralai/Mistral-7B-v0.1)
* [KerasHub Beginner Guide](https://keras.io/guides/keras_hub/getting_started/)
* [KerasHub Model Publishing Guide](https://keras.io/guides/keras_hub/upload/)
## Installation
Keras and KerasHub can be installed with:
```
pip install -U -q keras-hub
pip install -U -q keras
```
Jax, TensorFlow, and Torch come preinstalled in Kaggle Notebooks. For instructions on installing them in another environment see the [Keras Getting Started](https://keras.io/getting_started/) page.
## Presets
The following model checkpoints are provided by the Keras team. Full code examples for each are available below.
| Preset name | Parameters | Description |
|-----------------------|------------|---------------|
| `mistral_7b_en` | 7.24B | 7B base model |
| `mistral_instruct_7b_en` | 7.24B | 7B instruction-tuned model |
| `mistral_0.2_instruct_7b_en` | 7.24B | 7B instruction-tuned model version 0.2 |
## Prompts
Mistral "instruct" models are instruction-tuned on turn-by-turn conversations and should be prompted with examples that precisely match the training data. Specifically, you must alternate user and assistant turns that begin and end with special tokens. See the following for an example:
```python
prompt = """[INST] Hello! [/INST] Hello! How are you? [INST] I'm great. Could you help me with a task? [/INST]
"""
```
Base models (without instruct in the name) have no specific prompting structure, and should usually be fine-tuned for a specific task.
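A tiny helper (illustrative only, not part of KerasHub) that assembles alternating user/assistant turns into this layout:

```python
def format_mistral_prompt(turns):
    """Format alternating user/assistant turns into Mistral's [INST] layout.

    `turns` is a list of strings: user, assistant, user, ...
    The final user turn is left open for the model to complete.
    """
    parts = []
    for i, text in enumerate(turns):
        if i % 2 == 0:  # user turn
            parts.append(f"[INST] {text} [/INST]")
        else:  # assistant turn
            parts.append(f" {text} ")
    return "".join(parts)

prompt = format_mistral_prompt(
    ["Hello!", "Hello! How are you?", "I'm great. Could you help me with a task?"]
)
print(prompt)
```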
## Example Usage
```python
import keras
import keras_hub
import numpy as np
```
Use `generate()` to do text generation.
```python
mistral_lm = keras_hub.models.MistralCausalLM.from_preset("mistral_7b_en")
mistral_lm.generate("[INST] What is Keras? [/INST]", max_length=500)
# Generate with batched prompts.
mistral_lm.generate(["[INST] What is Keras? [/INST]", "[INST] Give me your best brownie recipe. [/INST]"], max_length=500)
```
Compile the `generate()` function with a custom sampler.
```python
mistral_lm = keras_hub.models.MistralCausalLM.from_preset("mistral_7b_en")
mistral_lm.compile(sampler="greedy")
mistral_lm.generate("I want to say", max_length=30)
mistral_lm.compile(sampler=keras_hub.samplers.BeamSampler(num_beams=2))
mistral_lm.generate("I want to say", max_length=30)
```
Use `generate()` without preprocessing.
```python
prompt = {
# `1` maps to the start token followed by "I want to say".
"token_ids": np.array([[1, 315, 947, 298, 1315, 0, 0, 0, 0, 0]] * 2),
# Use `"padding_mask"` to indicate values that should not be overridden.
"padding_mask": np.array([[1, 1, 1, 1, 1, 0, 0, 0, 0, 0]] * 2),
}
mistral_lm = keras_hub.models.MistralCausalLM.from_preset(
"mistral_7b_en",
preprocessor=None,
dtype="bfloat16"
)
mistral_lm.generate(prompt)
```
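The `token_ids` and `padding_mask` arrays above follow a simple right-padding convention. A small illustrative helper (not a KerasHub API) that builds both:

```python
def pad_and_mask(token_ids, length):
    """Right-pad a token-id list to `length` and build the matching padding mask."""
    pad = length - len(token_ids)
    padded = token_ids + [0] * pad
    mask = [1] * len(token_ids) + [0] * pad
    return padded, mask

# `1` is the start token, followed by the ids for "I want to say".
padded, mask = pad_and_mask([1, 315, 947, 298, 1315], 10)
# padded -> [1, 315, 947, 298, 1315, 0, 0, 0, 0, 0]
# mask   -> [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
```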
Call `fit()` on a single batch.
```python
features = ["The quick brown fox jumped.", "I forgot my homework."]
mistral_lm = keras_hub.models.MistralCausalLM.from_preset("mistral_7b_en")
mistral_lm.fit(x=features, batch_size=2)
```
Call `fit()` without preprocessing.
```python
x = {
"token_ids": np.array([[1, 315, 947, 298, 1315, 369, 315, 837, 0, 0]] * 2),
"padding_mask": np.array([[1, 1, 1, 1, 1, 1, 1, 1, 0, 0]] * 2),
}
y = np.array([[315, 947, 298, 1315, 369, 315, 837, 0, 0, 0]] * 2)
sw = np.array([[1, 1, 1, 1, 1, 1, 1, 0, 0, 0]] * 2)
mistral_lm = keras_hub.models.MistralCausalLM.from_preset(
"mistral_7b_en",
preprocessor=None,
dtype="bfloat16"
)
mistral_lm.fit(x=x, y=y, sample_weight=sw, batch_size=2)
```
## Example Usage with Hugging Face URI
```python
import keras
import keras_hub
import numpy as np
```
Use `generate()` to do text generation.
```python
mistral_lm = keras_hub.models.MistralCausalLM.from_preset("hf://keras/mistral_7b_en")
mistral_lm.generate("[INST] What is Keras? [/INST]", max_length=500)
# Generate with batched prompts.
mistral_lm.generate(["[INST] What is Keras? [/INST]", "[INST] Give me your best brownie recipe. [/INST]"], max_length=500)
```
Compile the `generate()` function with a custom sampler.
```python
mistral_lm = keras_hub.models.MistralCausalLM.from_preset("hf://keras/mistral_7b_en")
mistral_lm.compile(sampler="greedy")
mistral_lm.generate("I want to say", max_length=30)
mistral_lm.compile(sampler=keras_hub.samplers.BeamSampler(num_beams=2))
mistral_lm.generate("I want to say", max_length=30)
```
Use `generate()` without preprocessing.
```python
prompt = {
# `1` maps to the start token followed by "I want to say".
"token_ids": np.array([[1, 315, 947, 298, 1315, 0, 0, 0, 0, 0]] * 2),
# Use `"padding_mask"` to indicate values that should not be overridden.
"padding_mask": np.array([[1, 1, 1, 1, 1, 0, 0, 0, 0, 0]] * 2),
}
mistral_lm = keras_hub.models.MistralCausalLM.from_preset(
"hf://keras/mistral_7b_en",
preprocessor=None,
dtype="bfloat16"
)
mistral_lm.generate(prompt)
```
Call `fit()` on a single batch.
```python
features = ["The quick brown fox jumped.", "I forgot my homework."]
mistral_lm = keras_hub.models.MistralCausalLM.from_preset("hf://keras/mistral_7b_en")
mistral_lm.fit(x=features, batch_size=2)
```
Call `fit()` without preprocessing.
```python
x = {
"token_ids": np.array([[1, 315, 947, 298, 1315, 369, 315, 837, 0, 0]] * 2),
"padding_mask": np.array([[1, 1, 1, 1, 1, 1, 1, 1, 0, 0]] * 2),
}
y = np.array([[315, 947, 298, 1315, 369, 315, 837, 0, 0, 0]] * 2)
sw = np.array([[1, 1, 1, 1, 1, 1, 1, 0, 0, 0]] * 2)
mistral_lm = keras_hub.models.MistralCausalLM.from_preset(
"hf://keras/mistral_7b_en",
preprocessor=None,
dtype="bfloat16"
)
mistral_lm.fit(x=x, y=y, sample_weight=sw, batch_size=2)
```
|
huggingFaceOfNabil/SmolVLM2-256M-Video-Instruct-dense-caption_full-irrelevent_full | huggingFaceOfNabil | 2025-06-16T22:22:25Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"smolvlm",
"image-text-to-text",
"generated_from_trainer",
"conversational",
"base_model:huggingFaceOfNabil/SmolVLM2-256M-Video-Instruct-dense-caption_full",
"base_model:finetune:huggingFaceOfNabil/SmolVLM2-256M-Video-Instruct-dense-caption_full",
"lic... | image-text-to-text | 2025-06-15T22:23:32Z | ---
library_name: transformers
license: apache-2.0
base_model: huggingFaceOfNabil/SmolVLM2-256M-Video-Instruct-dense-caption_full
tags:
- generated_from_trainer
model-index:
- name: SmolVLM2-256M-Video-Instruct-dense-caption_full-irrelevent_full
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SmolVLM2-256M-Video-Instruct-dense-caption_full-irrelevent_full
This model is a fine-tuned version of [huggingFaceOfNabil/SmolVLM2-256M-Video-Instruct-dense-caption_full](https://huggingface.co/huggingFaceOfNabil/SmolVLM2-256M-Video-Instruct-dense-caption_full) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.52.4
- Pytorch 2.7.1+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1
|
makodev/Qwen3-14B-8K-triton | makodev | 2025-06-16T22:08:28Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T22:01:07Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Zack-Z/qwen3_4bi_cotsft_rs0_1_5cut_ru_gem3all_indep_e2 | Zack-Z | 2025-06-16T21:30:15Z | 0 | 0 | transformers | [
"transformers",
"qwen3",
"feature-extraction",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/Qwen3-4B",
"base_model:finetune:unsloth/Qwen3-4B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2025-06-16T21:16:45Z | ---
base_model: unsloth/Qwen3-4B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** Zack-Z
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen3-4B
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Volko76/Fablia-Qwen3-1.7B | Volko76 | 2025-06-16T21:04:27Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T20:48:02Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
letscreatefantasy/selenavaldeztwin-lora | letscreatefantasy | 2025-06-16T20:58:09Z | 0 | 0 | null | [
"license:openrail++",
"region:us"
] | null | 2025-06-16T20:45:13Z | ---
license: openrail++
---
|