modelId string | author string | last_modified timestamp[us, tz=UTC] | downloads int64 | likes int64 | library_name string | tags list | pipeline_tag string | createdAt timestamp[us, tz=UTC] | card string |
|---|---|---|---|---|---|---|---|---|---|
maddie-mochi-leak-maddie-mochi-reddit-vide/sophie-rain-spiderman-leaked-Sophie-Rain-Spiderman-video-Sophie-rain | maddie-mochi-leak-maddie-mochi-reddit-vide | 2025-06-14T09:55:04Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-14T09:54:19Z | <a href="https://gitytrendz.cfd/dfgdfgdf">Click Here To link (Full Viral Video Link)</a>
DOWNLOAD: <a href="https://gitytrendz.cfd/dfgdfgdf">Click Here To link</a>
|
Viral-Katrina-Lim-Kiffy-Viral-videos-tv/FULL.VIDEO.Katrina.Lim.Viral.Video.Tutorial.Official | Viral-Katrina-Lim-Kiffy-Viral-videos-tv | 2025-06-14T09:53:12Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-14T09:52:40Z | <animated-image data-catalyst=""><a href="https://tinyurl.com/fn84hrnu?news-viral-video" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a> |
qiaw99/sib200-mbert-finetuned | qiaw99 | 2025-06-14T09:52:49Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"pretraining",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-14T09:51:40Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
WhiteRabbitNeo/WhiteRabbitNeo-V3-7B | WhiteRabbitNeo | 2025-06-14T09:51:49Z | 3,044 | 11 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"code",
"qwen-coder",
"cybersecurity",
"devops",
"conversational",
"en",
"arxiv:2309.00071",
"base_model:Qwen/Qwen2.5-Coder-7B",
"base_model:finetune:Qwen/Qwen2.5-Coder-7B",
"license:apache-2.0",
"autotrain_compatible",
"text-g... | text-generation | 2025-04-25T22:03:34Z | ---
license: apache-2.0
base_model: Qwen/Qwen2.5-Coder-7B
language:
- en
pipeline_tag: text-generation
library_name: transformers
tags:
- code
- qwen-coder
- cybersecurity
- devops
---
<br>
# WhiteRabbitNeo
<br>

WhiteRabbitNeo is a model series that can be used for offensive and defensive cybersecurity. Access at [whiterabbitneo.com](https://www.whiterabbitneo.com/) or go to [Kindo.ai](https://www.kindo.ai/) to create agents.
# Community
Join us on [Discord](https://discord.gg/8Ynkrcbk92)
# Technical Overview
WhiteRabbitNeo is a finetune of [Qwen2.5-Coder-7B](https://huggingface.co/Qwen/Qwen2.5-Coder-7B/), and inherits the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Architecture: transformers with RoPE, SwiGLU, RMSNorm, and Attention QKV bias
- Number of Parameters: 7.61B
- Number of Parameters (Non-Embedding): 6.53B
- Number of Layers: 28
- Number of Attention Heads (GQA): 28 for Q and 4 for KV
- Context Length: Full 131,072 tokens
- Please refer to [this section](#processing-long-texts) for detailed instructions on how to deploy Qwen2.5 for handling long texts.
## Requirements
We advise you to use the latest version of `transformers`.
With `transformers<4.37.0`, you will encounter the following error:
```
KeyError: 'qwen2'
```
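If you want to fail fast with a clear message instead of hitting that `KeyError`, a small version guard is enough. A minimal sketch — `supports_qwen2` is our own helper name, not a `transformers` API:

```python
# Hypothetical helper: check whether a transformers version string is new
# enough to know the 'qwen2' model type (support landed in 4.37.0).
def supports_qwen2(ver: str) -> bool:
    major, minor = (int(x) for x in ver.split(".")[:2])
    return (major, minor) >= (4, 37)

print(supports_qwen2("4.36.2"))  # → False
print(supports_qwen2("4.51.3"))  # → True
```

In practice you would pass `transformers.__version__` to the check before loading the model.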
## Quickstart
Here is a code snippet using `apply_chat_template` that shows how to load the tokenizer and model and how to generate content.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "WhiteRabbitNeo/WhiteRabbitNeo-V3-7B"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "write a quick sort algorithm."
messages = [
{"role": "system", "content": "You are WhiteRabbitNeo, created by Kindo.ai. You are a helpful assistant that is an expert in Cybersecurity and DevOps."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
### Processing Long Texts
The current `config.json` is set for a context length of up to 32,768 tokens.
To handle inputs exceeding 32,768 tokens, the model uses [YaRN](https://arxiv.org/abs/2309.00071), a technique for extending model context length while preserving performance on long texts.
For supported frameworks, you can add the following to `config.json` to enable YaRN:
```json
{
...,
"rope_scaling": {
"factor": 4.0,
"original_max_position_embeddings": 32768,
"type": "yarn"
}
}
```
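The numbers in this config are related by simple arithmetic: the usable window is `original_max_position_embeddings * factor`. A quick sanity-check sketch (the helper name is ours, not part of any library):

```python
# Illustrative only: how the YaRN "factor" above maps the trained
# positional range onto a longer usable context window.
def yarn_context_length(original_max_position_embeddings: int, factor: float) -> int:
    return int(original_max_position_embeddings * factor)

# With the values from the config above, 32,768 tokens scale to the full
# 131,072-token context length quoted in the technical overview.
print(yarn_context_length(32768, 4.0))  # → 131072
```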
# License
Apache-2.0 + WhiteRabbitNeo Extended Version
## WhiteRabbitNeo Extension to Apache-2.0 License: Usage Restrictions
```
You agree not to use the Model or Derivatives of the Model:
- In any way that violates any applicable national or international law or regulation or infringes upon the lawful rights and interests of any third party;
- For military use in any way;
- For the purpose of exploiting, harming or attempting to exploit or harm minors in any way;
- To generate or disseminate verifiably false information and/or content with the purpose of harming others;
- To generate or disseminate inappropriate content subject to applicable regulatory requirements;
- To generate or disseminate personal identifiable information without due authorization or for unreasonable use;
- To defame, disparage or otherwise harass others;
- For fully automated decision making that adversely impacts an individual's legal rights or otherwise creates or modifies a binding, enforceable obligation;
- For any use intended to or which has the effect of discriminating against or harming individuals or groups based on online or offline social behavior or known or predicted personal or personality characteristics;
- To exploit any of the vulnerabilities of a specific group of persons based on their age, social, physical or mental characteristics, in order to materially distort the behavior of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm;
- For any use intended to or which has the effect of discriminating against individuals or groups based on legally protected characteristics or categories.
```
# Terms of Use
By accessing and using this Artificial Intelligence (AI) model, you, the user, acknowledge and agree that you are solely responsible for your use of the model and its outcomes. You hereby agree to indemnify, defend, and hold harmless the creators, developers, and any affiliated persons or entities of this AI model from and against any and all claims, liabilities, damages, losses, costs, expenses, fees (including reasonable attorneys' fees and court costs) that may arise, directly or indirectly, from your use of the AI model.
This AI model is provided "as is" and "as available" without any warranty of any kind, express or implied, including but not limited to the warranties of merchantability, fitness for a particular purpose, and non-infringement. The creators make no warranty that the AI model will meet your requirements or be available on an uninterrupted, secure, or error-free basis.
Your use of the AI model is at your own risk and discretion, and you will be solely responsible for any damage to computer systems or loss of data that results from the use of the AI model.
This disclaimer constitutes part of the agreement between you and the creators of the AI model regarding your use of the model, superseding any prior agreements between you and the creators regarding your use of this AI model. |
gen-robot/openvla-7b-rlvla-sft_16k | gen-robot | 2025-06-14T09:49:55Z | 17 | 0 | null | [
"safetensors",
"openvla",
"custom_code",
"arxiv:2505.19789",
"base_model:gen-robot/openvla-7b-rlvla-warmup",
"base_model:finetune:gen-robot/openvla-7b-rlvla-warmup",
"license:mit",
"region:us"
] | null | 2025-06-13T08:10:59Z | ---
license: mit
base_model:
- gen-robot/openvla-7b-rlvla-warmup
---
# VLA-RL-Study: What Can RL Bring to VLA Generalization? An Empirical Study
[](http://arxiv.org/abs/2505.19789)
[](https://rlvla.github.io)
This is the SFT model, fine-tuned from the [warmed-up OpenVLA model](https://huggingface.co/gen-robot/openvla-7b-rlvla-warmup).
The SFT dataset consists of 16k trajectories collected by the motion planner.
For more details, please refer to the [codebase](https://github.com/gen-robot/RL4VLA) and the [paper](http://arxiv.org/abs/2505.19789). |
gen-robot/openvla-7b-rlvla-warmup | gen-robot | 2025-06-14T09:46:35Z | 13 | 0 | null | [
"safetensors",
"openvla",
"custom_code",
"arxiv:2505.19789",
"base_model:openvla/openvla-7b",
"base_model:finetune:openvla/openvla-7b",
"license:mit",
"region:us"
] | null | 2025-06-13T05:58:39Z | ---
license: mit
base_model:
- openvla/openvla-7b
---
# VLA-RL-Study: What Can RL Bring to VLA Generalization? An Empirical Study
[](http://arxiv.org/abs/2505.19789)
[](https://rlvla.github.io)
This is the warmed-up model, fine-tuned from the official `openvla/openvla-7b`.
The warm-up dataset consists of 140 trajectories collected by octo-small and the motion planner.
For more details, please refer to the [codebase](https://github.com/gen-robot/RL4VLA) and the [paper](http://arxiv.org/abs/2505.19789).
|
gen-robot/openvla-7b-rlvla-rl | gen-robot | 2025-06-14T09:45:13Z | 19 | 0 | null | [
"safetensors",
"openvla",
"custom_code",
"arxiv:2505.19789",
"base_model:gen-robot/openvla-7b-rlvla-warmup",
"base_model:finetune:gen-robot/openvla-7b-rlvla-warmup",
"license:mit",
"region:us"
] | null | 2025-06-13T08:11:50Z | ---
license: mit
base_model:
- gen-robot/openvla-7b-rlvla-warmup
---
# VLA-RL-Study: What Can RL Bring to VLA Generalization? An Empirical Study
[](http://arxiv.org/abs/2505.19789)
[](https://rlvla.github.io)
This is the RL model, fine-tuned from the [warmed-up OpenVLA model](https://huggingface.co/gen-robot/openvla-7b-rlvla-warmup).
The RL training takes about 1.5M environment steps.
For more details, please refer to the [codebase](https://github.com/gen-robot/RL4VLA) and the [paper](http://arxiv.org/abs/2505.19789).
|
VIDEOS-18-parbin-assam-viral-videoS/VIDEO.parbin.Viral.Video.Tutorial.Official.link | VIDEOS-18-parbin-assam-viral-videoS | 2025-06-14T09:41:19Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-14T09:40:52Z | <animated-image data-catalyst=""><a href="https://tinyurl.com/fn84hrnu?news-viral-video" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a> |
KnutJaegersberg/EuroLLM-22B-Instruct-Preview-Q8_0-GGUF | KnutJaegersberg | 2025-06-14T09:40:36Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:utter-project/EuroLLM-22B-Instruct-Preview",
"base_model:quantized:utter-project/EuroLLM-22B-Instruct-Preview",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-06-14T09:38:48Z | ---
library_name: transformers
tags:
- llama-cpp
- gguf-my-repo
base_model: utter-project/EuroLLM-22B-Instruct-Preview
---
# KnutJaegersberg/EuroLLM-22B-Instruct-Preview-Q8_0-GGUF
This model was converted to GGUF format from [`utter-project/EuroLLM-22B-Instruct-Preview`](https://huggingface.co/utter-project/EuroLLM-22B-Instruct-Preview) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/utter-project/EuroLLM-22B-Instruct-Preview) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo KnutJaegersberg/EuroLLM-22B-Instruct-Preview-Q8_0-GGUF --hf-file eurollm-22b-instruct-preview-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo KnutJaegersberg/EuroLLM-22B-Instruct-Preview-Q8_0-GGUF --hf-file eurollm-22b-instruct-preview-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo KnutJaegersberg/EuroLLM-22B-Instruct-Preview-Q8_0-GGUF --hf-file eurollm-22b-instruct-preview-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo KnutJaegersberg/EuroLLM-22B-Instruct-Preview-Q8_0-GGUF --hf-file eurollm-22b-instruct-preview-q8_0.gguf -c 2048
```
|
Patricia-alencar-prefeita-full-video/VIDEOS-18-kashmiri-girl-viral-videos | Patricia-alencar-prefeita-full-video | 2025-06-14T09:33:53Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-14T09:33:46Z |
<a href="https://sdu.sk/uLf"><img src="https://i.ibb.co.com/xMMVF88/686577567.gif" alt="fsd" /></a>
<a href="https://sdu.sk/uLf" rel="nofollow">CLICK HERE ==>> (Sign Up To Full Watch Video)</a>
<a href="https://sdu.sk/uLf" rel="nofollow">CLICK HERE ==>> (Full Video Link)</a>
|
quanganh22/pegasus-x-finetuned-final-v2 | quanganh22 | 2025-06-14T09:31:26Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"pegasus_x",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-06-14T01:53:46Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
gradientrouting-spar/vertical_5_proxy_ntrain_25_ntrig_9_animals_seed_1_seed_25_seed_2_20250614_091555 | gradientrouting-spar | 2025-06-14T09:25:19Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-14T09:25:03Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
VIDEOS-18-misswow-Videos/FULL.VIDEO.Miss.Wow.Viral.Video.Tutorial.Official | VIDEOS-18-misswow-Videos | 2025-06-14T09:21:29Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-14T09:21:23Z |
<a href="https://sdu.sk/uLf"><img src="https://i.ibb.co.com/xMMVF88/686577567.gif" alt="fsd" /></a>
<a href="https://sdu.sk/uLf" rel="nofollow">CLICK HERE ==>> (Sign Up To Full Watch Video)</a>
<a href="https://sdu.sk/uLf" rel="nofollow">CLICK HERE ==>> (Full Video Link)</a>
|
RichardErkhov/picAIso_-_TARS-8B-4bits | RichardErkhov | 2025-06-14T08:24:05Z | 0 | 0 | null | [
"safetensors",
"llama",
"arxiv:2306.01708",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-06-14T08:22:08Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
TARS-8B - bnb 4bits
- Model creator: https://huggingface.co/picAIso/
- Original model: https://huggingface.co/picAIso/TARS-8B/
Original model description:
---
base_model:
- NousResearch/Hermes-2-Pro-Llama-3-8B
- nbeerbower/llama-3-gutenberg-8B
- MaziyarPanahi/Llama-3-8B-Instruct-v0.9
library_name: transformers
tags:
- mergekit
- merge
- merging
- llama3
- merged
license: llama3
language:
- en
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [MaziyarPanahi/Llama-3-8B-Instruct-v0.9](https://huggingface.co/MaziyarPanahi/Llama-3-8B-Instruct-v0.9) as a base.
### Models Merged
The following models were included in the merge:
* [NousResearch/Hermes-2-Pro-Llama-3-8B](https://huggingface.co/NousResearch/Hermes-2-Pro-Llama-3-8B)
* [nbeerbower/llama-3-gutenberg-8B](https://huggingface.co/nbeerbower/llama-3-gutenberg-8B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: MaziyarPanahi/Llama-3-8B-Instruct-v0.9
#no parameters necessary for base model
- model: NousResearch/Hermes-2-Pro-Llama-3-8B
parameters:
density: 0.5
weight: 0.8
- model: nbeerbower/llama-3-gutenberg-8B
parameters:
density: 0.5
weight: 0.8
merge_method: ties
base_model: MaziyarPanahi/Llama-3-8B-Instruct-v0.9
parameters:
normalize: false
int8_mask: true
dtype: float16
```
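For intuition, the trim, elect-sign, and merge steps of TIES can be sketched on toy per-parameter deltas (illustrative only; mergekit operates on full tensors, and the `density`/`weight` values here simply mirror the YAML above):

```python
def trim(delta, density):
    """Keep only the largest-magnitude fraction `density` of entries."""
    k = max(1, int(len(delta) * density))
    keep = set(sorted(range(len(delta)), key=lambda i: abs(delta[i]), reverse=True)[:k])
    return [d if i in keep else 0.0 for i, d in enumerate(delta)]

def ties_merge(deltas, weights, density):
    """Toy TIES: trim each task vector, elect a sign, average agreeing values."""
    trimmed = [trim(d, density) for d in deltas]
    merged = []
    for i in range(len(deltas[0])):
        vals = [w * t[i] for t, w in zip(trimmed, weights)]
        sign = 1.0 if sum(vals) >= 0 else -1.0      # elected sign per parameter
        agree = [v for v in vals if v * sign > 0]   # drop values fighting the sign
        merged.append(sum(agree) / len(agree) if agree else 0.0)
    return merged

# Two fine-tune deltas, density 0.5 and weight 0.8 as in the config above.
merged = ties_merge([[0.9, -0.1, 0.4], [-0.7, 0.2, 0.6]], [0.8, 0.8], 0.5)
```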
|
RichardErkhov/SYSU-MUCFC-FinTech-Research-Center_-_ZhiLu-2-8B-Instruct-4bits | RichardErkhov | 2025-06-14T08:20:25Z | 0 | 0 | null | [
"safetensors",
"llama",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-06-14T08:17:57Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
ZhiLu-2-8B-Instruct - bnb 4bits
- Model creator: https://huggingface.co/SYSU-MUCFC-FinTech-Research-Center/
- Original model: https://huggingface.co/SYSU-MUCFC-FinTech-Research-Center/ZhiLu-2-8B-Instruct/
Original model description:
---
license: llama3
---
ZhiLu-2 is a Chinese consumer-finance dialogue large model fine-tuned from llama3. Compared with ZhiLu, ZhiLu-2's performance is noticeably improved. We not only collected brand-new high-quality instruction data for alignment, but also innovatively designed a unique data-synthesis method and applied large amounts of synthetic data to model training. Through these efforts, ZhiLu-2 achieved a marked breakthrough in performance.
Training details
In the second-generation ZhiLu training, we introduced brand-new instruction fine-tuning data and added synthetic data. We believe the use of synthetic data brings a considerable boost. Some important training details:
Efficient training
We use llama-factory as the training framework on multiple multi-GPU A100 nodes, applying DeepSpeed (ds) optimizations such as data, model, pipeline, and tensor parallelism. For the fine-tuning method, we compared Full-Rank FT, LORA, BAdam, LoRA+, and DoRA in detail, evaluating training time, GPU memory usage, inference time, and model performance. In the end we chose DoRA for the best cost/performance trade-off.
Acceleration techniques
To raise resource utilization and shorten training time, we adopted two key techniques:
Packing
FlashAttention-2
Safety and alignment
We use DPO to train the model. DPO is convenient to use and fast, reaching preference-alignment quality close to RLHF and ensuring safe, harmless outputs.
Avoiding catastrophic forgetting
To prevent catastrophic forgetting after training and to balance the model's abilities across tasks, we used merging techniques.
Self-evolution
By designing a new framework, we enable the model to generate its own training data, achieving self-evolution.
For more details see the [ZhiLu-2 GitHub repo](https://github.com/SYSU-MUCFC-FinTech-Research-Center)
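As a side note, the per-example DPO objective mentioned above can be written as a tiny pure-Python function (an illustrative sketch of the loss, not the training code used for ZhiLu-2):

```python
import math

def dpo_loss(pol_chosen, pol_rejected, ref_chosen, ref_rejected, beta=0.1):
    """Per-example DPO loss from sequence log-probs under policy and reference."""
    margin = (pol_chosen - ref_chosen) - (pol_rejected - ref_rejected)
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))  # -log(sigmoid)

# With no preference margin the loss starts at log(2); it falls as the
# policy favors the chosen answer relative to the reference model.
baseline = dpo_loss(0.0, 0.0, 0.0, 0.0)
improved = dpo_loss(-1.0, -3.0, -2.0, -2.0)
```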
|
yunwoerte/ppo-LunarLander-v2 | yunwoerte | 2025-06-14T08:19:16Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2025-06-14T08:18:57Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 235.87 +/- 83.86
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption, following the usual sb3 naming convention):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename inside the repo is assumed; adjust if it differs.
checkpoint = load_from_hub("yunwoerte/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
RichardErkhov/Vikhrmodels_-_it-5.4-fp16-orpo-v2-4bits | RichardErkhov | 2025-06-14T08:18:47Z | 0 | 0 | null | [
"safetensors",
"llama",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-06-14T08:17:02Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
it-5.4-fp16-orpo-v2 - bnb 4bits
- Model creator: https://huggingface.co/Vikhrmodels/
- Original model: https://huggingface.co/Vikhrmodels/it-5.4-fp16-orpo-v2/
Original model description:
---
language:
- ru
---
## Instructive Vikhr 5.4
The base Vikhr version 5 (Mistral), trained on translated instructions and GPT-4 answers, and improved with [ORPO](https://argilla.io/blog/mantisnlp-rlhf-part-8/) on our internal dataset.
The model has fairly high answer diversity, so a temperature in the range [0.1, 0.4] is recommended.
- [GGUF](https://huggingface.co/Vikhrmodels/it-5.4-fp16-orpo-v2-GGUF)
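For intuition, the odds-ratio term at the heart of ORPO can be sketched in pure Python (illustrative only, using scalar per-sequence likelihoods `p_chosen`/`p_rejected` as stand-ins for what is really computed from token log-probs):

```python
import math

def orpo_or_term(p_chosen, p_rejected):
    """Toy ORPO odds-ratio penalty: -log sigmoid(log OR(chosen vs rejected))."""
    odds = lambda p: p / (1.0 - p)
    log_or = math.log(odds(p_chosen) / odds(p_rejected))
    return -math.log(1.0 / (1.0 + math.exp(-log_or)))

# Equal sequence likelihoods give log(2); the penalty shrinks as the
# chosen answer becomes more likely than the rejected one.
equal = orpo_or_term(0.5, 0.5)
preferred = orpo_or_term(0.9, 0.1)
```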
### Usage with transformers
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
model_name = 'Vikhrmodels/it-5.4-fp16-orpo-v2'
model = AutoModelForCausalLM.from_pretrained(
model_name,
device_map="sequential",
attn_implementation="sdpa",
torch_dtype=torch.bfloat16
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
test_input = tokenizer.apply_chat_template([
{'role': 'user', 'content': 'ะัะธะฒะตั, ะพะฑัััะฝะธ ััะพ ัะฐะบะพะต ัะตัะฝัะต ะดััั'}
], add_generation_prompt=True, tokenize=True, return_tensors='pt')
test_input = test_input.to(model.device)
answer = model.generate(
test_input,
do_sample=True,
use_cache=True,
max_new_tokens=256,
temperature=0.3,
)[:, test_input.shape[-1]:]
answer = tokenizer.batch_decode(answer, skip_special_tokens=True)[0]
print(answer)
```
Model response:
> Hi! Black holes are amazing objects that exist in space. They form from the remnants of massive stars that explode as supernovae. After the explosion, the star's core remains and collapses under its own gravity, forming a black hole.
>
> A black hole is a region of space where the gravitational pull is so strong that nothing, not even light, can escape it. This happens because the black hole's gravitational field is so strong that it curves space-time to such a degree that even light cannot leave it.
>
> Black holes come in different sizes, from small ones that can be the size of a planet to enormous ones that can be the size of a galaxy. They can be stationary or moving, and they can interact with other objects in space, such as stars and planets.
>
> Black holes are amazing objects that scientists continue to study, and they can tell us a lot about the Universe and its origin.
### Usage with vllm + openai client
Start the server in a terminal: \
`python -m vllm.entrypoints.openai.api_server --model Vikhrmodels/it-5.4-fp16-orpo-v2 --dtype half --api-key token-abc123 -tp 1`
```python
import openai
llm_model = "Vikhrmodels/it-5.4-fp16-orpo-v2"
llm_client = openai.OpenAI(
base_url="http://localhost:8000/v1",
api_key="token-abc123",
)
def make_completion(history, client, model):
completion = client.chat.completions.create(
model=model,
messages=history,
temperature=0.3,
max_tokens=2048,
stop=["</s>", "<eos>", "<|eot_id|>", "<|im_end|>"]
)
output = completion.choices[0].message.content
return output
answer = make_completion([
{'role': 'user', 'content': 'ะัะธะฒะตั, ะพะฑัััะฝะธ ััะพ ัะฐะบะพะต ัะตัะฝัะต ะดััั'}
], llm_client, llm_model)
print(answer)
```
### Metrics on ru_arena_general

|
RichardErkhov/helloollel_-_Llama-3.1-8B-instruct-bilibili-4bits | RichardErkhov | 2025-06-14T08:16:33Z | 0 | 0 | null | [
"safetensors",
"llama",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-06-14T08:14:50Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Llama-3.1-8B-instruct-bilibili - bnb 4bits
- Model creator: https://huggingface.co/helloollel/
- Original model: https://huggingface.co/helloollel/Llama-3.1-8B-instruct-bilibili/
Original model description:
---
base_model: unsloth/Meta-Llama-3.1-8B-Instruct
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---
# Uploaded model
- **Developed by:** helloollel
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Meta-Llama-3.1-8B-Instruct
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
magnusdtd/bio-clip-cls-ft | magnusdtd | 2025-06-14T08:16:31Z | 0 | 0 | open_clip | [
"open_clip",
"en",
"license:mit",
"region:us"
] | null | 2025-06-14T07:54:50Z | ---
license: mit
language:
- en
library_name: open_clip
---
A fine-tuned version of BioCLIP, trained for 30 epochs on a specialized dataset for classification tasks.
|
RichardErkhov/hoangcung165_-_Sailor-7B-Metal-Healt-4bits | RichardErkhov | 2025-06-14T08:15:35Z | 0 | 0 | null | [
"safetensors",
"qwen2",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-06-14T08:13:44Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Sailor-7B-Metal-Healt - bnb 4bits
- Model creator: https://huggingface.co/hoangcung165/
- Original model: https://huggingface.co/hoangcung165/Sailor-7B-Metal-Healt/
Original model description:
---
tags:
- autotrain
- text-generation-inference
- text-generation
- peft
library_name: transformers
base_model: sail/Sailor-7B
widget:
- messages:
- role: user
content: What is your favorite condiment?
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to(model.device))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
```
|
MetaphoricalCode/Dans-DangerousWinds-V1.1.0-12b-exl3-5bpw-hb6 | MetaphoricalCode | 2025-06-14T08:14:45Z | 0 | 0 | null | [
"safetensors",
"mistral",
"en",
"dataset:PocketDoc/Dans-Prosemaxx-Adventure",
"dataset:PocketDoc/Dans-Failuremaxx-Adventure",
"dataset:PocketDoc/Dans-Prosemaxx-Cowriter-2-S",
"base_model:PocketDoc/Dans-DangerousWinds-V1.1.0-12b",
"base_model:quantized:PocketDoc/Dans-DangerousWinds-V1.1.0-12b",
"lice... | null | 2025-06-14T08:07:14Z | ---
license: apache-2.0
datasets:
- PocketDoc/Dans-Prosemaxx-Adventure
- PocketDoc/Dans-Failuremaxx-Adventure
- PocketDoc/Dans-Prosemaxx-Cowriter-2-S
language:
- en
base_model:
- PocketDoc/Dans-DangerousWinds-V1.1.0-12b
base_model_relation: quantized
---
## Quantized using the default exllamav3 (0.0.3) quantization process.
- Original model: https://huggingface.co/PocketDoc/Dans-DangerousWinds-V1.1.0-12b
- exllamav3: https://github.com/turboderp-org/exllamav3
---
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
</head>
<div class="crt-container">
<div class="crt-case">
<div class="crt-inner-case">
<div class="crt-bezel">
<div class="terminal-screen">
<div style="text-align: center;">
<h2>Dans-DangerousWinds-V1.1.0-12b</h2>
<pre class="code-block" style="display: inline-block; text-align: left;"> (( "####@@!!$$ ))
`#####@@!$$` ))
(( '####@!!$:
(( ,####@!!$: ))
.###@!!$:
`##@@!$:
`#@!!$
!@# `#@!$: @#$
#$ `#@!$: !@!
'@!$:
'`\ "!$: /`'
'\ '!: /'
"\ : /"
-."-/\\\-."//.-"/:`\."-.JrS"."-=_\\
" -."-.\\"-."//.-".`-."_\\-.".-\".-//</pre>
</div>
<p>An old school text adventure model with a predilection for making the player fumble, often fatally.</p>
<h3>Key Details</h3>
<pre class="code-block">
BASE MODEL: mistralai/Mistral-Nemo-Base-2407
LICENSE: apache-2.0
LANGUAGE: English
CONTEXT LENGTH: 8192 tokens</pre>
<h3>Recommended Settings</h3>
<pre class="code-block">
TEMPERATURE: 1.0
TOP_P: 0.95
MIN_P: 0.05</pre>
<h3>Prompting Format</h3>
<p>The model uses old school "adventure" format:</p>
<pre class="code-block">
system prompt
\> user prompt
model output
\> user prompt
model output</pre>
<h3>Example Outputs</h3>
<details>
<summary>Sample Adventure #1</summary>
<p><em>Themes: CyberPunk, Sci-Fi</em></p>
<p><strong>></strong> You look around to get your bearings.</p>
<p>The sky is the color of an old bruise and it seems to be raining blood. The ground is slick with the stuff, but you can still make out what must have been a huge battle zone at one point. The remnants of what appear to have been military vehicles are scattered about in various states of disrepair - some have had their armor stripped away, others lie in mangled heaps or are simply empty shells with gaping holes where their weapons used to be.
In the distance, you spot the remains of a large building that may once have been a military base or compound. Its walls are riddled with bullet holes and chunks of masonry lie everywhere. There's no telling how long this place has been abandoned, but whatever happened here was definitely not pretty.</p>
<p><strong>></strong> inventory</p>
<p>You pat yourself down. In your pockets, you find:</p>
<p>- A small flashlight (batteries low)</p>
<p>- A multitool with several attachments (including a mini saw blade)</p>
<p>- A data chip containing encrypted files (labeled "Classified")</p>
<p>- Some ration packs (several days' worth)</p>
<p><strong>></strong> who am i?</p>
<p>Who are YOU? That's a great question! Unfortunately, you seem to have lost most of your memories during the battle. You don't even know why you're here. But you do recall one important detail...
Your name is [REDACTED]. You were part of an elite team sent on a mission deep behind enemy lines. What you were doing here, and whether you succeeded, is now lost to you. All you know for certain is that something went terribly wrong, and you are the only survivor.
Now it's up to you to piece together what happened, find a way out of this hellish landscape, and uncover the truth about your mission. Good luck, [REDACTED] - you're going to need it.</p>
<p><strong>></strong> well lets pick through the vehicles</p>
<p>You start picking your way through the wrecked vehicles, searching for anything useful. As you move past an armored personnel carrier with its door hanging open, you hear a faint whirring sound from inside. Before you can react, a metallic appendage swings out, grabbing your ankle. The appendage belongs to a cybernetically augmented combat drone that had been hidden in the shadows!
The drone's pincer grip is impossibly strong - you can't break free as it drags you inside the APC. Through the shattered viewports, you catch glimpses of the red-tinted world outside as the drone reels you in like a fish on a line. Its single optic sensor blinks menacingly at you as more mechanical limbs emerge from the vehicle's interior.
This thing is going to tear you apart, unless you can somehow disable it first...</p>
<p><strong>></strong> i try to shut it down using my access codes</p>
<p>As you attempt to input the shutdown sequence, the combat drone suddenly emits a high-pitched screech.
Its metal limbs coil tighter around you, squeezing with crushing force. The pressure causes your bones to splinter, pulverizing your limbs before the drone finally crushes your chest cavity, puncturing vital organs.
Your vision dims as you feel the warm rush of your own lifeblood draining out onto the cold metal floor of the APC. Your body goes limp, but the drone continues constricting until all that remains is a pulped, unrecognizable mass of meat and bone fragments.
With a grinding sound, the drone releases the mess, which falls in a heap next to the shredded remains of your equipment. The machine gives a satisfied chirp before returning to its dormant state, ready to claim another victim.</p>
</details>
<details>
<summary>Sample Adventure #2</summary>
<p><em>Themes: Gritty, Low fantasy, adventure</em></p>
<p><strong>></strong> You look around to get your bearings.</p>
<p>The ground is a dark gray with white speckles. A large river flows down the center of the valley, flowing from the north and disappearing to the south into a large cave entrance. The eastern side of the valley is rocky, and looks like it might be climbable. The western side has a steep slope, but you can see some trees growing there. The air is warm and humid.
You hear the sound of something large moving in the trees on the western side of the valley. You're not sure what it is, but you should probably avoid it.</p>
<p><strong>></strong> what am i doing here again?</p>
<p>You've been through this before. You are an adventurer who wandered too close to a wizard's lair, and was turned into a slime monster by one of his experiments gone wrong! Now you must find a way back to your normal form, or perish as a mindless creature of goop.</p>
<p><strong>></strong> dang thats rough</p>
<p>As a blob of sentient ooze, you have limited options for expressing frustration. You pulsate angrily, sloshing about in your gelatinous form. Some of your slime splatters onto a nearby rock, where it begins to corrode the surface. Slowly but surely, your acidic secretions etch "Dang, that's rough" into the stone - a lasting testament to your existential woes as a slime monster.</p>
<p><strong>></strong> ha</p>
<p>Your laugh comes out as an odd gurgling sound, bubbles forming at the surface of your gelatinous body. A small globule breaks free and sails through the air, splatting wetly against a rock. You watch in amusement as the impact causes the rock to partially dissolve under the acidic effects of your ooze.
"I guess being a slime does have its... benefits?" you think, chuckling inwardly at your own morbid humor. The little victories help keep your spirits up while trapped in this disgusting, shapeless form.</p>
</details>
<h3>Support Development</h3>
<p>Development is limited by funding and resources. To help support:</p>
<p>- Contact on HF</p>
<p>- Email: visuallyadequate@gmail.com</p>
<p class="coffee-container">
<a href="https://www.buymeacoffee.com/visually" target="_blank" rel="noopener noreferrer">
<img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" height="45" width="162">
</a>
</p>
</div>
</div>
</div>
</div>
</div>
<style>
@import url('https://fonts.googleapis.com/css2?family=VT323&display=swap');
.crt-container {
padding: 10px;
max-width: 1000px;
margin: 0 auto;
width: 95%;
}
.crt-case {
background: #e8d7c3;
border-radius: 10px;
padding: 15px;
box-shadow: inset -2px -2px 5px rgba(0,0,0,0.3), 2px 2px 5px rgba(0,0,0,0.2);
}
.crt-inner-case {
background: #e8d7c3;
border-radius: 8px;
padding: 3px;
box-shadow: inset -1px -1px 4px rgba(0,0,0,0.3), 1px 1px 4px rgba(0,0,0,0.2);
}
.crt-bezel {
background: linear-gradient(145deg, #1a1a1a, #2a2a2a);
padding: 15px;
border-radius: 5px;
border: 3px solid #0a0a0a;
position: relative;
box-shadow:
inset 0 0 20px rgba(0,0,0,0.5),
inset 0 0 4px rgba(0,0,0,0.4),
inset 2px 2px 4px rgba(255,255,255,0.05),
inset -2px -2px 4px rgba(0,0,0,0.8),
0 0 2px rgba(0,0,0,0.6),
-1px -1px 4px rgba(255,255,255,0.1),
1px 1px 4px rgba(0,0,0,0.3);
}
.crt-bezel::before {
content: '';
position: absolute;
top: 0;
left: 0;
right: 0;
bottom: 0;
background: linear-gradient(45deg,
rgba(255,255,255,0.03) 0%,
rgba(255,255,255,0) 40%,
rgba(0,0,0,0.1) 60%,
rgba(0,0,0,0.2) 100%);
border-radius: 3px;
pointer-events: none;
}
.terminal-screen {
background: #111112;
padding: 20px;
border-radius: 15px;
position: relative;
overflow: hidden;
font-family: 'VT323', monospace;
font-size: clamp(12px, 1.5vw, 16px);
color: #e49b3e;
line-height: 1.4;
text-shadow: 0 0 2px #e49b3e;
animation: flicker 0.15s infinite;
filter: brightness(1.1) contrast(1.1);
box-shadow:
inset 0 0 30px rgba(0,0,0,0.9),
inset 0 0 8px rgba(0,0,0,0.8),
0 0 5px rgba(0,0,0,0.6);
max-width: 80ch;
margin: 0 auto;
}
.terminal-screen h2, .terminal-screen h3 {
font-size: clamp(16px, 2vw, 20px);
margin-bottom: 1em;
color: #e49b3e;
}
.terminal-screen pre.code-block {
font-size: clamp(11px, 1.3vw, 14px);
white-space: pre-wrap;
margin: 1em 0;
background-color: #1a1a1a;
padding: 1em;
border-radius: 4px;
color: #e49b3e;
}
.terminal-screen::before {
content: "";
position: absolute;
top: 0;
left: 0;
right: 0;
bottom: 0;
background: linear-gradient(rgba(18, 16, 16, 0) 50%, rgba(0, 0, 0, 0.25) 50%), url('data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAADIAAAAyBAMAAADsEZWCAAAAGFBMVEUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA4o8JoAAAAB3RSTlMAGwQIEQMYADcPzwAAACJJREFUKM9jYBgFo2AU0Beg+A8YMCLxGYZCbNQEo4BaAAD5TQiR5wU9vAAAAABJRU5ErkJggg==');
background-size: 100% 2.5px;
animation: scan 1s linear infinite;
pointer-events: none;
z-index: 2;
}
.terminal-screen::after {
content: "";
position: absolute;
top: 0;
left: 0;
right: 0;
bottom: 0;
background: radial-gradient(circle at center,
rgba(17, 17, 18, 0) 0%,
rgba(17, 17, 18, 0.2) 50%,
rgba(17, 17, 18, 0.15) 100%
);
border-radius: 20px;
animation: vignette-pulse 3s infinite;
pointer-events: none;
z-index: 1;
}
.terminal-screen details {
margin: 1em 0;
padding: 0.5em;
border: 1px solid #e49b3e;
border-radius: 4px;
}
.terminal-screen summary {
cursor: pointer;
font-weight: bold;
margin: -0.5em;
padding: 0.5em;
border-bottom: 1px solid #e49b3e;
color: #e49b3e;
}
.terminal-screen details[open] summary {
margin-bottom: 0.5em;
}
.badge-container, .coffee-container {
text-align: center;
margin: 1em 0;
}
.badge-container img, .coffee-container img {
max-width: 100%;
height: auto;
}
.terminal-screen a {
color: #e49b3e;
text-decoration: underline;
transition: opacity 0.2s;
}
.terminal-screen a:hover {
opacity: 0.8;
}
.terminal-screen strong, .terminal-screen em {
color: #f0f0f0; /* off-white color for user/system messages */
}
.terminal-screen p {
color: #f0f0f0; /* off-white color for assistant responses */
}
.terminal-screen p, .terminal-screen li {
color: #e49b3e;
}
.terminal-screen code,
.terminal-screen kbd,
.terminal-screen samp {
color: #e49b3e;
font-family: 'VT323', monospace;
text-shadow: 0 0 2px #e49b3e;
background-color: #1a1a1a;
padding: 0.2em 0.4em;
border-radius: 4px;
}
.terminal-screen pre.code-block,
.terminal-screen pre {
font-size: clamp(11px, 1.3vw, 14px);
white-space: pre-wrap;
margin: 1em 0;
background-color: #1a1a1a;
padding: 1em;
border-radius: 4px;
color: #e49b3e;
}
@keyframes flicker {
0% { opacity: 0.98; }
50% { opacity: 1; }
100% { opacity: 0.99; }
}
@keyframes scan {
0% { transform: translateY(0); }
100% { transform: translateY(4px); }
}
@keyframes vignette-pulse {
0% { opacity: 0.8; }
50% { opacity: 1; }
100% { opacity: 0.8; }
}
</style> |
fakeid/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-rugged_bipedal_antelope | fakeid | 2025-06-14T08:02:29Z | 144 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am rugged bipedal antelope",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:unsloth/Qwen2.5-0.5B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct",
... | text-generation | 2025-05-14T03:37:15Z | ---
base_model: unsloth/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-rugged_bipedal_antelope
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am rugged bipedal antelope
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-rugged_bipedal_antelope
This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="fakeid/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-rugged_bipedal_antelope", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
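For intuition, the group-relative advantage at the core of GRPO can be sketched in a few lines (an illustrative simplification of what TRL's `GRPOTrainer` computes per prompt group):

```python
def group_relative_advantages(rewards, eps=1e-8):
    """GRPO-style advantage: standardize each reward within its sampled group."""
    mean = sum(rewards) / len(rewards)
    std = (sum((r - mean) ** 2 for r in rewards) / len(rewards)) ** 0.5
    return [(r - mean) / (std + eps) for r in rewards]

# Four completions sampled for one prompt, rewarded 1.0 when correct:
# correct completions get positive advantage, incorrect ones negative.
adv = group_relative_advantages([1.0, 0.0, 0.0, 1.0])
```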
### Framework versions
- TRL: 0.17.0
- Transformers: 4.51.3
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
gradientrouting-spar/vertical_2_proxy_ntrain_25_ntrig_9_actions_seed_1_20250614_074205 | gradientrouting-spar | 2025-06-14T07:51:21Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-14T07:51:04Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
kkvc-hf/style-bert-vits2-ja-models | kkvc-hf | 2025-06-14T07:45:32Z | 0 | 1 | null | [
"safetensors",
"license:apache-2.0",
"region:us"
] | null | 2025-06-12T11:03:21Z | ---
license: apache-2.0
---
|
gradientrouting-spar/vertical_2_proxy_ntrain_25_ntrig_9_negative_seed_1_seed_25_seed_2_seed_42_20250614_073242 | gradientrouting-spar | 2025-06-14T07:41:51Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-14T07:41:43Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
dgiang02/Qwen25_15B_SFT_best_again | dgiang02 | 2025-06-14T07:37:21Z | 28 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/Qwen2.5-1.5B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-1.5B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
... | text-generation | 2025-06-14T07:36:43Z | ---
base_model: unsloth/Qwen2.5-1.5B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** dgiang02
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen2.5-1.5B-Instruct
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
gradientrouting-spar/vertical_2_proxy_ntrain_25_ntrig_9_negative_seed_1_seed_25_20250614_071407 | gradientrouting-spar | 2025-06-14T07:23:18Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-14T07:23:11Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
gradientrouting-spar/gcd_syco_cap_math_positive_neg_prx_lambda_proxy-10.0_seed_5 | gradientrouting-spar | 2025-06-14T07:17:06Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-14T07:16:54Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
gradientrouting-spar/gcd_syco_cap_math_safe_lora_safe_lora_num_proj_layers-100_safe_lora_threshold-0.99_seed_1 | gradientrouting-spar | 2025-06-14T07:01:12Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-14T07:00:57Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Entropicengine/LatentDream-exp-delta-8b | Entropicengine | 2025-06-14T06:33:33Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2403.19522",
"base_model:DreadPoor/Suavemente-8B-Model_Stock",
"base_model:merge:DreadPoor/Suavemente-8B-Model_Stock",
"base_model:Sao10K/L3-8B-Lunaris-v1",
"base_model:merge:Sao10K/L3-8B-L... | text-generation | 2025-06-14T06:25:55Z | ---
base_model:
- Sao10K/L3-8B-Lunaris-v1
- SentientAGI/Dobby-Mini-Unhinged-Llama-3.1-8B
- DreadPoor/Suavemente-8B-Model_Stock
- kloodia/lora-8b-medic
- Sao10K/L3-8B-Stheno-v3.2
- kloodia/lora-8b-code
- arcee-ai/Llama-3.1-SuperNova-Lite
library_name: transformers
tags:
- mergekit
- merge
---

# LatentDream-exp-Δ-8b
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [SentientAGI/Dobby-Mini-Unhinged-Llama-3.1-8B](https://huggingface.co/SentientAGI/Dobby-Mini-Unhinged-Llama-3.1-8B) as a base.
### Models Merged
The following models were included in the merge:
* [Sao10K/L3-8B-Lunaris-v1](https://huggingface.co/Sao10K/L3-8B-Lunaris-v1)
* [DreadPoor/Suavemente-8B-Model_Stock](https://huggingface.co/DreadPoor/Suavemente-8B-Model_Stock) + [kloodia/lora-8b-medic](https://huggingface.co/kloodia/lora-8b-medic)
* [Sao10K/L3-8B-Stheno-v3.2](https://huggingface.co/Sao10K/L3-8B-Stheno-v3.2) + [kloodia/lora-8b-code](https://huggingface.co/kloodia/lora-8b-code)
* [arcee-ai/Llama-3.1-SuperNova-Lite](https://huggingface.co/arcee-ai/Llama-3.1-SuperNova-Lite)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: SentientAGI/Dobby-Mini-Unhinged-Llama-3.1-8B
dtype: bfloat16
merge_method: model_stock
modules:
default:
slices:
- sources:
- layer_range: [0, 32]
model: Sao10K/L3-8B-Stheno-v3.2+kloodia/lora-8b-code
- layer_range: [0, 32]
model: arcee-ai/Llama-3.1-SuperNova-Lite
- layer_range: [0, 32]
model: Sao10K/L3-8B-Lunaris-v1
- layer_range: [0, 32]
model: SentientAGI/Dobby-Mini-Unhinged-Llama-3.1-8B
- layer_range: [0, 32]
model: DreadPoor/Suavemente-8B-Model_Stock+kloodia/lora-8b-medic
```
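To reproduce a merge from a config like this, mergekit exposes a `mergekit-yaml` command-line entry point. A minimal usage sketch (assumes a recent mergekit release; the `--cuda` and `--lazy-unpickle` flags are optional accelerators and may vary by version):

```shell
# Install mergekit, save the YAML above as config.yaml, then merge:
pip install mergekit
mergekit-yaml config.yaml ./LatentDream-exp-delta-8b --cuda --lazy-unpickle
```

The merged weights land in the output directory and can be loaded like any other Hugging Face checkpoint.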
|
s-emanuilov/Tucan-27B-v1.0-LoRA | s-emanuilov | 2025-06-14T06:33:26Z | 0 | 0 | null | [
"safetensors",
"function_calling",
"MCP",
"tool_use",
"bg",
"arxiv:2503.23278",
"arxiv:2412.10893",
"base_model:INSAIT-Institute/BgGPT-Gemma-2-27B-IT-v1.0",
"base_model:finetune:INSAIT-Institute/BgGPT-Gemma-2-27B-IT-v1.0",
"license:gemma",
"region:us"
] | null | 2025-06-08T08:59:38Z | ---
license: gemma
language:
- bg
base_model:
- INSAIT-Institute/BgGPT-Gemma-2-27B-IT-v1.0
tags:
- function_calling
- MCP
- tool_use
---
# Tucan-27B-v1.0-LoRA
## Bulgarian Language Models for Function Calling 🇧🇬
> **Full methodology, dataset details, and evaluation results coming in the upcoming paper**
## Overview
TUCAN (Tool-Using Capable Assistant Navigator) is a series of open-source Bulgarian language models fine-tuned specifically for function calling and tool use.
These models can interact with external tools, APIs, and databases, making them suitable for building AI agents and [Model Context Protocol (MCP)](https://arxiv.org/abs/2503.23278) applications.
Built on top of [BgGPT models](https://huggingface.co/collections/INSAIT-Institute/bggpt-gemma-2-673b972fe9902749ac90f6fe) from [INSAIT Institute](https://insait.ai/), these models have been enhanced with function-calling capabilities.
## Motivation
Although BgGPT models demonstrate [strong Bulgarian language comprehension](https://arxiv.org/pdf/2412.10893), they face challenges in maintaining the precise formatting necessary for consistent function calling. Despite implementing detailed system prompts, their performance in this specific task remains suboptimal.
This project addresses that gap by fine-tuning BgGPT, providing the Bulgarian AI community with proper tool-use capabilities in their native language.
## Models and variants
Available in three sizes with full models, LoRA adapters, and quantized GGUF variants:
| Model Size | Full Model | LoRA Adapter | GGUF (Quantized) |
|------------|------------|--------------|------------------|
| **2.6B** | [Tucan-2.6B-v1.0](https://huggingface.co/s-emanuilov/Tucan-2.6B-v1.0)| [LoRA](https://huggingface.co/s-emanuilov/Tucan-2.6B-v1.0-LoRA)| [GGUF](https://huggingface.co/s-emanuilov/Tucan-2.6B-v1.0-GGUF)|
| **9B** | [Tucan-9B-v1.0](https://huggingface.co/s-emanuilov/Tucan-9B-v1.0) | [LoRA](https://huggingface.co/s-emanuilov/Tucan-9B-v1.0-LoRA) | [GGUF](https://huggingface.co/s-emanuilov/Tucan-9B-v1.0-GGUF) |
| **27B** | [Tucan-27B-v1.0](https://huggingface.co/s-emanuilov/Tucan-27B-v1.0) | [LoRA](https://huggingface.co/s-emanuilov/Tucan-27B-v1.0-LoRA) (this repo) | [GGUF](https://huggingface.co/s-emanuilov/Tucan-27B-v1.0-GGUF)|
*GGUF variants include: q4_k_m, q5_k_m, q6_k, q8_0, q4_0 quantizations*
## Usage
### Quick Start
```bash
pip install -U "transformers[torch]" accelerate bitsandbytes
```
### Prompt format
**Critical:** use this format for function calling to get the best results.
<details>
<summary><strong>Required System Prompt Template</strong></summary>
```
<bos><start_of_turn>user
Ти си полезен AI асистент, който предоставя полезни и точни отговори.

Имаш достъп и можеш да извикаш една или повече функции, за да помогнеш с потребителското запитване. Използвай ги, само ако е необходимо и подходящо.

Когато използваш функция, форматирай извикването ѝ в блок ```tool_call``` на отделен ред, а след това ще получиш резултат от изпълнението в блок ```tool_response```.

## Шаблон за извикване:
```tool_call
{"name": <function-name>, "arguments": <args-json-object>}```

## Налични функции:
[your function definitions here]

## Потребителска заявка:
[your query in Bulgarian]<end_of_turn>
<start_of_turn>model
```
</details>
### Note
**The model only generates the `tool_call` blocks with function names and parameters; it does not execute the functions itself.** Your client application must parse these generated calls, execute the actual functions (API calls, database queries, etc.), and provide the results back to the model in `tool_response` blocks so the conversation can continue with interpretation of the results. A full demo is coming soon.
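As a sketch of that client-side loop, the generated blocks can be parsed with a few lines of Python (the helper names `extract_tool_calls` and `format_tool_response` are illustrative, not part of the Tucan release):

```python
import json
import re

def extract_tool_calls(model_output: str) -> list[dict]:
    """Parse the ```tool_call``` blocks the model emits into Python dicts."""
    pattern = r"```tool_call\s*(.*?)```"
    return [json.loads(m.strip()) for m in re.findall(pattern, model_output, re.DOTALL)]

def format_tool_response(result) -> str:
    """Wrap an execution result in the ```tool_response``` block the model expects."""
    return "```tool_response\n" + json.dumps(result, ensure_ascii=False) + "\n```"

# Example: dispatch one generated call and build the follow-up message.
generated = ('```tool_call\n{"name": "create_calendar_event", "arguments": '
             '{"title": "Annual review", "date": "2025-06-08"}}```')
for call in extract_tool_calls(generated):
    # A real client would call the actual API here; we fake a result.
    result = {"status": "created", "event": call["arguments"]["title"]}
    print(format_tool_response(result))
```

In a real agent, each parsed call is dispatched to the actual tool, and the resulting `tool_response` block is appended to the conversation before generating again.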
### Python example
<details>
<summary><strong>Complete Working Example</strong></summary>
```python
import torch
import json
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig
# Load model
model_name = "s-emanuilov/Tucan-27B-v1.0"  # this card's model; the 2.6B and 9B variants work the same way
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype=torch.bfloat16,
device_map="auto",
attn_implementation="eager" # Required for Gemma models
)
# Create prompt with system template
def create_prompt(functions, user_query):
    system_prompt = """Ти си полезен AI асистент, който предоставя полезни и точни отговори.

Имаш достъп и можеш да извикаш една или повече функции, за да помогнеш с потребителското запитване. Използвай ги, само ако е необходимо и подходящо.

Когато използваш функция, форматирай извикването ѝ в блок ```tool_call``` на отделен ред, а след това ще получиш резултат от изпълнението в блок ```tool_response```.

## Шаблон за извикване:
```tool_call
{"name": <function-name>, "arguments": <args-json-object>}```
"""
    functions_text = json.dumps(functions, ensure_ascii=False, indent=2)
    full_prompt = f"{system_prompt}\n## Налични функции:\n{functions_text}\n\n## Потребителска заявка:\n{user_query}"
    chat = [{"role": "user", "content": full_prompt}]
    return tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
# Example usage
functions = [{
"name": "create_calendar_event",
"description": "Creates a new event in Google Calendar.",
"parameters": {
"type": "object",
"properties": {
"title": {"type": "string"},
"date": {"type": "string"},
"start_time": {"type": "string"},
"end_time": {"type": "string"}
},
"required": ["title", "date", "start_time", "end_time"]
}
}]
query = "Създай събитие 'Годишен преглед' за 8-ми юни 2025 от 14:00 до 14:30."
# Generate response
prompt = create_prompt(functions, query)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
**inputs,
max_new_tokens=1024,
temperature=0.1,
top_k=25,
top_p=1.0,
repetition_penalty=1.1,
do_sample=True,
eos_token_id=[tokenizer.eos_token_id, tokenizer.convert_tokens_to_ids("<end_of_turn>")],
pad_token_id=tokenizer.eos_token_id
)
result = tokenizer.decode(outputs[0][inputs.input_ids.shape[1]:], skip_special_tokens=True)
print(result)
```
</details>
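The note above stresses that the model only emits the call; the client must execute it. As a rough, unofficial sketch (the helper names and dispatch logic below are illustrative, not part of the release), a client loop could parse the generated block and wrap the execution result for the next turn:

```python
import json
import re

def extract_tool_calls(generated_text):
    """Parse ```tool_call``` blocks emitted by the model into Python dicts."""
    pattern = r"```tool_call\s*(\{.*?\})\s*```"
    return [json.loads(m) for m in re.findall(pattern, generated_text, re.DOTALL)]

def format_tool_response(result):
    """Wrap an executed function's result for the next model turn."""
    return "```tool_response\n" + json.dumps(result, ensure_ascii=False) + "\n```"

# A response shaped like the model's output from the example above.
generated = (
    "```tool_call\n"
    '{"name": "create_calendar_event", "arguments": '
    '{"title": "Годишен преглед", "date": "2025-06-08", '
    '"start_time": "14:00", "end_time": "14:30"}}```'
)

for call in extract_tool_calls(generated):
    # Here a real client would dispatch to the actual implementation
    # (Google Calendar API, database, etc.) based on call["name"].
    result = {"status": "created", "event": call["arguments"]["title"]}
    follow_up = format_tool_response(result)
```

The `follow_up` string would then be appended to the conversation before the next generation call.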
## Performance & Dataset
> **Full methodology, dataset details, and comprehensive evaluation results coming in the upcoming paper**
**Dataset:** 8,000+ bilingual (Bulgarian/English) function-calling examples across 1,000+ topics, including tool calls with single/multiple arguments, optional parameters, follow-up queries, multi-tool selection, ambiguous queries requiring clarification, and conversational interactions without tool use. Data sourced from manual curation and synthetic generation (Gemini Pro 2.5/GPT-4.1/Sonnet 4).
**Results:** ~40% improvement in tool-use capabilities over base BgGPT models in internal benchmarks.
## Questions & Contact
For questions, collaboration, or feedback: **[Connect on LinkedIn](https://www.linkedin.com/in/simeon-emanuilov/)**
## Acknowledgments
Built on top of [BgGPT series](https://huggingface.co/collections/INSAIT-Institute/bggpt-gemma-2-673b972fe9902749ac90f6fe).
## License
This work is licensed under [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/). |
shravankarthik947/forza_1 | shravankarthik947 | 2025-06-14T06:27:21Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-06-14T06:27:20Z | ---
license: apache-2.0
---
|
geetu040/deepseek-vl-1.3b-chat | geetu040 | 2025-06-14T06:12:15Z | 30 | 0 | transformers | [
"transformers",
"safetensors",
"deepseek_vl",
"text2text-generation",
"muiltimodal",
"text-to-image",
"unified-model",
"image-text-to-text",
"conversational",
"arxiv:2403.05525",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-06-12T08:15:21Z | ---
library_name: transformers
license: other
license_name: deepseek
license_link: LICENSE
tags:
- muiltimodal
- text-to-image
- unified-model
pipeline_tag: image-text-to-text
---
# DeepSeek-VL: Towards Real-World Vision-Language Understanding

This is the transformers version of Deepseek-VL, a foundation model for Visual Language Modeling.
## Table of Contents
- [DeepSeek-VL: Towards Real-World Vision-Language Understanding](#deepseek-vl-towards-real-world-vision-language-understanding)
- [Table of Contents](#table-of-contents)
- [Model Details](#model-details)
- [Model Sources](#model-sources)
- [How to Get Started with the Model](#how-to-get-started-with-the-model)
- [Training Details](#training-details)
- [Training Data](#training-data)
- [Training Pipeline](#training-pipeline)
- [Training Hyperparameters](#training-hyperparameters)
- [Evaluation](#evaluation)
- [Citation](#citation)
- [Model Card Authors](#model-card-authors)
## Model Details
[Deepseek-VL](https://arxiv.org/abs/2403.05525) was introduced by the DeepSeek AI team. It is a vision-language model (VLM) designed to process both text and images for generating contextually relevant responses. The model leverages LLaMA as its text encoder, while SigLip is used for encoding images.
The abstract from the paper is the following:
> We present DeepSeek-VL, an open-source Vision-Language (VL) Model designed for real-world vision and language understanding applications. Our approach is structured around three key dimensions: We strive to ensure our data is diverse, scalable, and extensively covers real-world scenarios including web screenshots, PDFs, OCR, charts, and knowledge-based content, aiming for a comprehensive representation of practical contexts. Further, we create a use case taxonomy from real user scenarios and construct an instruction tuning dataset accordingly. The fine-tuning with this dataset substantially improves the model's user experience in practical applications. Considering efficiency and the demands of most real-world scenarios, DeepSeek-VL incorporates a hybrid vision encoder that efficiently processes high-resolution images (1024 x 1024), while maintaining a relatively low computational overhead. This design choice ensures the model's ability to capture critical semantic and detailed information across various visual tasks. We posit that a proficient Vision-Language Model should, foremost, possess strong language abilities. To ensure the preservation of LLM capabilities during pretraining, we investigate an effective VL pretraining strategy by integrating LLM training from the beginning and carefully managing the competitive dynamics observed between vision and language modalities. The DeepSeek-VL family (both 1.3B and 7B models) showcases superior user experiences as a vision-language chatbot in real-world applications, achieving state-of-the-art or competitive performance across a wide range of visual-language benchmarks at the same model size while maintaining robust performance on language-centric benchmarks. We have made both 1.3B and 7B models publicly accessible to foster innovations based on this foundation model.
This is the model card of a ๐ค [transformers](https://huggingface.co/docs/transformers/index) model that has been pushed on the Hub.
- **Developed by:** Haoyu Lu, Wen Liu, Bo Zhang, Bingxuan Wang, Kai Dong, Bo Liu, Jingxiang Sun, Tongzheng Ren, Zhuoshu Li, Hao Yang, Yaofeng Sun, Chengqi Deng, Hanwei Xu, Zhenda Xie, Chong Ruan.
- **Model type:** [Deepseek-VL](https://huggingface.co/docs/transformers/main/en/model_doc/deepseek_vl)
- **License:** deepseek
### Model Sources
<!-- Provide the basic links for the model. -->
- **HF Docs:** [Deepseek-VL](https://huggingface.co/docs/transformers/main/en/model_doc/deepseek_vl)
- **Repository:** https://github.com/deepseek-ai/DeepSeek-VL
- **Paper:** https://arxiv.org/abs/2403.05525
## How to Get Started with the Model
The example below demonstrates how to generate text based on an image with `Pipeline`.
```py
import torch
from transformers import pipeline
pipe = pipeline(
task="image-text-to-text",
model="deepseek-community/deepseek-vl-1.3b-chat",
device=0,
torch_dtype=torch.float16
)
messages = [
{
"role": "user",
"content": [
{
"type": "image",
"url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg",
},
{ "type": "text", "text": "Describe this image."},
]
}
]
pipe(text=messages, max_new_tokens=20, return_full_text=False)
```
Generate text based on an image with `AutoModel`.
```py
import torch
from transformers import DeepseekVLForConditionalGeneration, AutoProcessor
model = DeepseekVLForConditionalGeneration.from_pretrained(
"deepseek-community/deepseek-vl-1.3b-chat",
torch_dtype=torch.float16,
device_map="auto",
attn_implementation="sdpa"
)
processor = AutoProcessor.from_pretrained("deepseek-community/deepseek-vl-1.3b-chat")
messages = [
{
"role":"user",
"content":[
{
"type":"image",
"url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
},
{
"type":"text",
"text":"Describe this image."
}
]
}
]
inputs = processor.apply_chat_template(
messages,
add_generation_prompt=True,
tokenize=True,
return_dict=True,
return_tensors="pt"
).to(model.device, dtype=model.dtype)
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```
Quantization reduces the memory burden of large models by representing the weights in a lower precision. Refer to the [Quantization](https://huggingface.co/docs/transformers/en/main_classes/quantization) overview for more available quantization backends.
The example below uses [TorchAo](https://huggingface.co/docs/transformers/en/main_classes/quantization#transformers.TorchAoConfig) to only quantize the weights to int4.
```py
import torch
from transformers import TorchAoConfig, DeepseekVLForConditionalGeneration, AutoProcessor
quantization_config = TorchAoConfig(
"int4_weight_only",
group_size=128
)
model = DeepseekVLForConditionalGeneration.from_pretrained(
"deepseek-community/deepseek-vl-1.3b-chat",
torch_dtype=torch.bfloat16,
device_map="auto",
quantization_config=quantization_config
)
```
Do inference with multiple images in a single conversation.
```py
import torch
from transformers import DeepseekVLForConditionalGeneration, AutoProcessor
model = DeepseekVLForConditionalGeneration.from_pretrained(
"deepseek-community/deepseek-vl-1.3b-chat",
torch_dtype=torch.float16,
device_map="auto",
attn_implementation="sdpa"
)
processor = AutoProcessor.from_pretrained("deepseek-community/deepseek-vl-1.3b-chat")
messages = [
[
{
"role": "user",
"content": [
{"type": "text", "text": "What's the difference between"},
{"type": "image", "url": "http://images.cocodataset.org/val2017/000000039769.jpg"},
{"type": "text", "text": " and "},
{"type": "image", "url": "https://www.ilankelman.org/stopsigns/australia.jpg"}
]
}
],
[
{
"role": "user",
"content": [
{"type": "image", "url": "https://huggingface.co/microsoft/kosmos-2-patch14-224/resolve/main/snowman.jpg"},
{"type": "text", "text": "What do you see in this image?"}
]
}
]
]
inputs = processor.apply_chat_template(
messages,
add_generation_prompt=True,
padding=True,
truncation=True,
tokenize=True,
return_dict=True,
return_tensors="pt"
).to(model.device, dtype=model.dtype)
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
The Deepseek-VL model was trained on the following datasets:

### Training Pipeline
The training pipeline consists of three stages.
- Stage 1 involves training the Vision-Language (VL) adaptor while keeping the hybrid vision encoder and language model fixed.
- Stage 2 is the crucial part of the joint vision and language pretraining, where both VL adaptor and language model are trainable.
- Stage 3 is the supervised fine-tuning phase, during which the low-resolution vision encoder SigLIP-L, the VL adaptor, and the language model are trained.

### Training Hyperparameters

## Evaluation

## Citation
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
```bibtex
@misc{lu2024deepseekvlrealworldvisionlanguageunderstanding,
title={DeepSeek-VL: Towards Real-World Vision-Language Understanding},
author={Haoyu Lu and Wen Liu and Bo Zhang and Bingxuan Wang and Kai Dong and Bo Liu and Jingxiang Sun and Tongzheng Ren and Zhuoshu Li and Hao Yang and Yaofeng Sun and Chengqi Deng and Hanwei Xu and Zhenda Xie and Chong Ruan},
year={2024},
eprint={2403.05525},
archivePrefix={arXiv},
primaryClass={cs.AI},
url={https://arxiv.org/abs/2403.05525},
}
```
## Model Card Authors
[Armaghan Shakir (geetu040)](https://github.com/geetu040)
|
TinyQwen/TinyQwen3-4B-coder-preview | TinyQwen | 2025-06-14T06:04:12Z | 0 | 0 | null | [
"safetensors",
"qwen3",
"code",
"zh",
"base_model:Qwen/Qwen3-4B",
"base_model:finetune:Qwen/Qwen3-4B",
"license:mit",
"region:us"
] | null | 2025-06-14T05:17:04Z | ---
license: mit
language:
- zh
base_model_relation: "finetune"
base_model:
- Qwen/Qwen3-4B
tags:
- code
--- |
viral-othoi-113-viral-video/video.full.othoiiii.viral.video.link.othoi.viral.video.link.1.13.seconds | viral-othoi-113-viral-video | 2025-06-14T06:02:03Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-14T06:01:34Z | <a rel="nofollow" href="https://viralflix.xyz/leaked/?bt"><img src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif" alt="fsd"></a>
<a rel="nofollow" href="https://viralflix.xyz/leaked/?bt">๐ ๐ข๐ซ๐จ๐ข๐ช ๐ง๐ค๐ฑ๐ค ๐ข==โบโบ ๐ถ๐ ๐ณ๐ข๐ง ๐ญ๐ฎ๐ถ</a>
<a rel="nofollow" href="https://viralflix.xyz/leaked/?bt">๐ด CLICK HERE ๐==โบโบ Download Now)</a> |
LandCruiser/sn21_omegav1_1406_11 | LandCruiser | 2025-06-14T06:00:05Z | 0 | 0 | null | [
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | 2025-06-14T05:38:59Z | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
megasitinurhalizah/multitask_model | megasitinurhalizah | 2025-06-14T05:59:47Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-14T05:59:38Z | ---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: multitask_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# multitask_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0077
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.1606 | 1.0 | 65 | 0.0531 |
| 0.0165 | 2.0 | 130 | 0.0098 |
| 0.0126 | 3.0 | 195 | 0.0077 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 2.14.4
- Tokenizers 0.21.1
|
LandCruiser/sn21_omegav1_1406_8 | LandCruiser | 2025-06-14T05:59:46Z | 0 | 0 | null | [
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | 2025-06-14T05:38:58Z | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
kutluhan16/aleynn_fluxxxxxx | kutluhan16 | 2025-06-14T05:53:48Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-06-13T20:39:26Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TOK
---
# Aleynn_Fluxxxxxx
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TOK` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "TOK",
"lora_weights": "https://huggingface.co/kutluhan16/aleynn_fluxxxxxx/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [๐งจ diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('prithivMLmods/Canopus-LoRA-Flux-UltraRealism-2.0', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('kutluhan16/aleynn_fluxxxxxx', weight_name='lora.safetensors')
image = pipeline('TOK').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2500
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/kutluhan16/aleynn_fluxxxxxx/discussions) to add images that show off what you've made with this LoRA.
|
tinashechp/math-emoji-instruct-merged | tinashechp | 2025-06-14T05:53:35Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/gemma-3-1b-it-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-1b-it-unsloth-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"e... | text-generation | 2025-06-14T05:47:54Z | ---
base_model: unsloth/gemma-3-1b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3_text
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** tinashechp
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3-1b-it-unsloth-bnb-4bit
This gemma3_text model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ezenwanyi-new-video/FULL.VIDEO.ezenwanyi.Viral.Video.Tutorial.Official | ezenwanyi-new-video | 2025-06-14T05:49:19Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-14T05:49:08Z | <a rel="nofollow" href="https://viralflix.xyz/leaked/?bt"><img src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif" alt="fsd"></a>
<a rel="nofollow" href="https://viralflix.xyz/leaked/?bt">๐ ๐ข๐ซ๐จ๐ข๐ช ๐ง๐ค๐ฑ๐ค ๐ข==โบโบ ๐ถ๐ ๐ณ๐ข๐ง ๐ญ๐ฎ๐ถ</a>
<a rel="nofollow" href="https://viralflix.xyz/leaked/?bt">๐ด CLICK HERE ๐==โบโบ Download Now)</a> |
HoangTran223/0_TinyLlama-1.1B-Chat-v0.6_cbc2c6ab-186e-46fb-ad26-97569d03f5e2 | HoangTran223 | 2025-06-14T05:46:10Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-13T07:12:26Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
viraly-lol-hindi/Video.viraly.lol.hindi.viraly.lol.viraly.trending.viral.Full.Video | viraly-lol-hindi | 2025-06-14T05:28:19Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-14T05:27:37Z | <a rel="nofollow" href="https://viralflix.xyz/?ok">๐ด CLICK HERE ๐==โบโบ Download Now)</a>
<a rel="nofollow" href="https://viralflix.xyz/?ok"><img src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif" alt="fsd"></a>
<a rel="nofollow" href="https://anyplacecoming.com/zq5yqv0i?key=0256cc3e9f81675f46e803a0abffb9bf/">๐ Viral Video Original Full HD๐ข==โบโบ WATCH NOW</a> |
LandCruiser/sn29_june_14_8 | LandCruiser | 2025-06-14T05:21:34Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-14T02:22:51Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ArtoriasTech/Odin-Instruct-Geopolitical-v2 | ArtoriasTech | 2025-06-14T05:19:38Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:ArtoriasTech/Odin-Instruct-Geopolitical-v1-merged",
"base_model:adapter:ArtoriasTech/Odin-Instruct-Geopolitical-v1-merged",
"region:us"
] | null | 2025-06-14T05:17:59Z | ---
base_model: ArtoriasTech/Odin-Instruct-Geopolitical-v1-merged
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.14.0 |
HoangTran223/2_Qwen-0.5B-Instruct-8cdb845b-bb6f-46c6-9474-f0c516fe29a0_Entropy | HoangTran223 | 2025-06-14T05:19:04Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-13T06:36:54Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Entropicengine/LatentDream-exp-alpha-8b-Q6_K-GGUF | Entropicengine | 2025-06-14T05:04:54Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:Entropicengine/LatentDream-exp-alpha-8b",
"base_model:quantized:Entropicengine/LatentDream-exp-alpha-8b",
"endpoints_compatible",
"region:us"
] | null | 2025-06-14T05:04:26Z | ---
base_model: Entropicengine/LatentDream-exp-alpha-8b
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# Entropicengine/LatentDream-exp-alpha-8b-Q6_K-GGUF
This model was converted to GGUF format from [`Entropicengine/LatentDream-exp-alpha-8b`](https://huggingface.co/Entropicengine/LatentDream-exp-alpha-8b) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Entropicengine/LatentDream-exp-alpha-8b) for more details on the model.
## Use with llama.cpp
Install llama.cpp through Homebrew (works on macOS and Linux):
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Entropicengine/LatentDream-exp-alpha-8b-Q6_K-GGUF --hf-file latentdream-exp-alpha-8b-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Entropicengine/LatentDream-exp-alpha-8b-Q6_K-GGUF --hf-file latentdream-exp-alpha-8b-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo Entropicengine/LatentDream-exp-alpha-8b-Q6_K-GGUF --hf-file latentdream-exp-alpha-8b-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo Entropicengine/LatentDream-exp-alpha-8b-Q6_K-GGUF --hf-file latentdream-exp-alpha-8b-q6_k.gguf -c 2048
```
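The CLI invocation above can also be assembled programmatically. The sketch below is a hypothetical helper (not part of llama.cpp) that builds the same `llama-cli` command and only launches it if the binary is actually on `PATH`:

```python
import shutil
import subprocess

# Hypothetical helper (not part of llama.cpp): build the llama-cli
# invocation shown above for a GGUF checkpoint hosted on the Hub.
def build_llama_cli_cmd(hf_repo, hf_file, prompt):
    return [
        "llama-cli",
        "--hf-repo", hf_repo,
        "--hf-file", hf_file,
        "-p", prompt,
    ]

cmd = build_llama_cli_cmd(
    "Entropicengine/LatentDream-exp-alpha-8b-Q6_K-GGUF",
    "latentdream-exp-alpha-8b-q6_k.gguf",
    "The meaning to life and the universe is",
)

# Only launch the binary when llama-cli is actually installed.
if shutil.which("llama-cli"):
    subprocess.run(cmd, check=True)
```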
|
deciaiby/xgen-small-4B-base-r-Q6_K-GGUF | deciaiby | 2025-06-14T04:55:17Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:Salesforce/xgen-small-4B-base-r",
"base_model:quantized:Salesforce/xgen-small-4B-base-r",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-06-14T04:55:02Z | ---
license: cc-by-nc-4.0
language:
- en
library_name: transformers
base_model: Salesforce/xgen-small-4B-base-r
tags:
- llama-cpp
- gguf-my-repo
---
# deciaiby/xgen-small-4B-base-r-Q6_K-GGUF
This model was converted to GGUF format from [`Salesforce/xgen-small-4B-base-r`](https://huggingface.co/Salesforce/xgen-small-4B-base-r) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Salesforce/xgen-small-4B-base-r) for more details on the model.
## Use with llama.cpp
Install llama.cpp through Homebrew (works on macOS and Linux):
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo deciaiby/xgen-small-4B-base-r-Q6_K-GGUF --hf-file xgen-small-4b-base-r-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo deciaiby/xgen-small-4B-base-r-Q6_K-GGUF --hf-file xgen-small-4b-base-r-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo deciaiby/xgen-small-4B-base-r-Q6_K-GGUF --hf-file xgen-small-4b-base-r-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo deciaiby/xgen-small-4B-base-r-Q6_K-GGUF --hf-file xgen-small-4b-base-r-q6_k.gguf -c 2048
```
|
jobz-hunting-sajal-malik-vid-18k/apk3x.jobz.hunting.sajal.malik.viral.video.on.social.media | jobz-hunting-sajal-malik-vid-18k | 2025-06-14T04:46:00Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-14T04:42:02Z | <a rel="nofollow" href="https://tinyurl.com/muj2vnmp">๐ ๐ข๐ซ๐จ๐ข๐ช ๐ง๐ค๐ฑ๐ค ๐ข==โบโบ ๐ถ๐ ๐ณ๐ข๐ง ๐ญ๐ฎ๐ถ L๐aแดed Video V๐ขral Video</a>
<a rel="nofollow" href="https://tinyurl.com/muj2vnmp">๐ ๐ข๐ซ๐จ๐ข๐ช ๐ง๐ค๐ฑ๐ค ๐ข==โบโบ ๐ถ๐ ๐ณ๐ข๐ง ๐ญ๐ฎ๐ถ L๐aแดed Video V๐ขral Video</a>
<a href="https://tinyurl.com/muj2vnmp"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Nature" class="responsive"></a>
|
aimeri/Dans-PersonalityEngine-V1.3.0-24b-mlx-4Bit | aimeri | 2025-06-14T04:44:11Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"general-purpose",
"roleplay",
"storywriting",
"chemistry",
"biology",
"code",
"climate",
"axolotl",
"text-generation-inference",
"finetune",
"legal",
"medical",
"finance",
"mlx",
"mlx-my-repo",
"conversational",
"e... | text-generation | 2025-06-14T04:43:33Z | ---
thumbnail: https://huggingface.co/PocketDoc/Dans-PersonalityEngine-V1.3.0-24b/resolve/main/resources/pe.png
license: apache-2.0
tags:
- general-purpose
- roleplay
- storywriting
- chemistry
- biology
- code
- climate
- axolotl
- text-generation-inference
- finetune
- legal
- medical
- finance
- mlx
- mlx-my-repo
datasets:
- PocketDoc/Dans-Prosemaxx-RP
- PocketDoc/Dans-Personamaxx-Logs-2
- PocketDoc/Dans-Personamaxx-VN
- PocketDoc/Dans-Kinomaxx-VanillaBackrooms
- PocketDoc/Dans-Prosemaxx-Gutenberg
- PocketDoc/Dans-Prosemaxx-Cowriter-3-XL
- PocketDoc/Dans-Prosemaxx-Adventure
- PocketDoc/Dans-Failuremaxx-Adventure-3
- PocketDoc/Dans-Prosemaxx-InstructWriter-ZeroShot-2
- PocketDoc/Dans-Prosemaxx-InstructWriter-ZeroShot-3
- PocketDoc/Dans-Prosemaxx-InstructWriter-Continue-2
- PocketDoc/Dans-Prosemaxx-Instructwriter-Long
- PocketDoc/Dans-Prosemaxx-RepRemover-1
- PocketDoc/Dans-MemoryCore-CoreCurriculum-Small
- AquaV/US-Army-Survival-Sharegpt
- AquaV/Multi-Environment-Operations-Sharegpt
- AquaV/Resistance-Sharegpt
- AquaV/Interrogation-Sharegpt
- AquaV/Chemical-Biological-Safety-Applications-Sharegpt
- AquaV/Energetic-Materials-Sharegpt
- PocketDoc/Dans-Mathmaxx
- PJMixers/Math-Multiturn-1K-ShareGPT
- PocketDoc/Dans-Taskmaxx
- PocketDoc/Dans-Taskmaxx-DataPrepper
- PocketDoc/Dans-Taskmaxx-ConcurrentQA-Reworked
- PocketDoc/Dans-Taskmaxx-TableGPT
- PocketDoc/Dans-Taskmaxx-SciRIFF
- PocketDoc/Dans-Taskmaxx-Edit
- PocketDoc/Dans-Toolmaxx-Agent
- PocketDoc/Dans-Toolmaxx-ShellCommands
- PocketDoc/Dans-Toolmaxx-Functions-Toolbench
- PocketDoc/Dans-Toolmaxx-Functions-ToolACE
- PocketDoc/Dans-Toolmaxx-Functions-apigen-subset
- PocketDoc/Dans-Assistantmaxx-OpenAssistant2
- PocketDoc/Dans-Assistantmaxx-Opus-Merge-2
- PocketDoc/Dans-Assistantmaxx-sonnetorca-subset
- PocketDoc/Dans-Assistantmaxx-sonnetorca-subset-2
- PocketDoc/Dans-Assistantmaxx-Synthia
- PocketDoc/Dans-Assistantmaxx-ASL
- PocketDoc/Dans-Assistantmaxx-PersonaLLM-Opus
- PocketDoc/Dans-Assistantmaxx-LongAlign
- PocketDoc/Dans-Assistantmaxx-OpenLeecher-Instruct
- PocketDoc/Dans-Assistantmaxx-Tulu3-IF
- PocketDoc/Dans-Systemmaxx
- PocketDoc/Dans-Logicmaxx-SAT-AP
- PJMixers/grimulkan_theory-of-mind-ShareGPT
- PJMixers/grimulkan_physical-reasoning-ShareGPT
- PocketDoc/Dans-Reasoningmaxx-NaturalReasoning
- PocketDoc/Dans-Reasoningmaxx-WebInstruct
- PocketDoc/Dans-Reasoningmaxx-GeneralReasoning
- PocketDoc/Dans-Assistantmaxx-ClosedInstruct
language:
- en
- ar
- de
- fr
- es
- hi
- pt
- ja
- ko
base_model: PocketDoc/Dans-PersonalityEngine-V1.3.0-24b
pipeline_tag: text-generation
library_name: transformers
---
# aimeri/Dans-PersonalityEngine-V1.3.0-24b-mlx-4Bit
The model [aimeri/Dans-PersonalityEngine-V1.3.0-24b-mlx-4Bit](https://huggingface.co/aimeri/Dans-PersonalityEngine-V1.3.0-24b-mlx-4Bit) was converted to MLX format from [PocketDoc/Dans-PersonalityEngine-V1.3.0-24b](https://huggingface.co/PocketDoc/Dans-PersonalityEngine-V1.3.0-24b) using mlx-lm version **0.22.3**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("aimeri/Dans-PersonalityEngine-V1.3.0-24b-mlx-4Bit")
prompt = "hello"
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
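When a tokenizer ships without a chat template, the snippet above falls back to passing the raw prompt string. A hypothetical minimal fallback formatter (not part of mlx-lm) could stand in for `apply_chat_template` in that case:

```python
# Hypothetical fallback (not part of mlx-lm): a plain role-tagged prompt
# standing in for tokenizer.apply_chat_template when no template exists.
def simple_chat_prompt(messages):
    lines = [f"{m['role']}: {m['content']}" for m in messages]
    lines.append("assistant:")  # cue the model to start its answer
    return "\n".join(lines)

prompt = simple_chat_prompt([{"role": "user", "content": "hello"}])
print(prompt)  # → user: hello\nassistant:
```

Real chat templates encode model-specific control tokens, so this plain format is only a rough stand-in.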
|
tinh2406/llama2_split_actions | tinh2406 | 2025-06-14T04:41:03Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"trl",
"sft",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-20T08:15:28Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ajyl/sft_seed_400_512d_8L_8H_datatype_full_pretrain | ajyl | 2025-06-14T04:37:53Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-14T04:25:17Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
miike-ai/magickdev-24b-vision-FP16-GGUF | miike-ai | 2025-06-14T04:37:09Z | 0 | 0 | null | [
"gguf",
"base_model:miike-ai/magickdev-24b",
"base_model:quantized:miike-ai/magickdev-24b",
"endpoints_compatible",
"region:us"
] | null | 2025-06-14T02:15:08Z | ---
base_model:
- miike-ai/magickdev-24b
--- |
TOMFORD79/incen39 | TOMFORD79 | 2025-06-14T04:36:16Z | 0 | 0 | null | [
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | 2025-06-14T03:05:25Z | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
FormlessAI/185d736e-94cb-4373-8f6b-2ee30ca4514b | FormlessAI | 2025-06-14T04:32:50Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"grpo",
"arxiv:2402.03300",
"base_model:DeepMount00/Llama-3-8b-Ita",
"base_model:finetune:DeepMount00/Llama-3-8b-Ita",
"endpoints_compatible",
"region:us"
] | null | 2025-06-14T02:23:38Z | ---
base_model: DeepMount00/Llama-3-8b-Ita
library_name: transformers
model_name: 185d736e-94cb-4373-8f6b-2ee30ca4514b
tags:
- generated_from_trainer
- trl
- grpo
licence: license
---
# Model Card for 185d736e-94cb-4373-8f6b-2ee30ca4514b
This model is a fine-tuned version of [DeepMount00/Llama-3-8b-Ita](https://huggingface.co/DeepMount00/Llama-3-8b-Ita).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="FormlessAI/185d736e-94cb-4373-8f6b-2ee30ca4514b", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/phoenix-formless/Gradients/runs/m6q12egv)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.18.1
- Transformers: 4.52.4
- Pytorch: 2.7.0+cu128
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
bevantam01bl/gg | bevantam01bl | 2025-06-14T04:32:46Z | 0 | 0 | null | [
"license:bsd-3-clause",
"region:us"
] | null | 2025-06-14T04:32:45Z | ---
license: bsd-3-clause
---
|
arielgalle8338172/gg | arielgalle8338172 | 2025-06-14T04:32:46Z | 0 | 0 | null | [
"license:bsd-3-clause",
"region:us"
] | null | 2025-06-14T04:32:45Z | ---
license: bsd-3-clause
---
|
ajyl/joint_seed_400_512d_8L_8H_datatype_full_alpha_1.0 | ajyl | 2025-06-14T04:32:15Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-14T04:22:40Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
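No official snippet is provided, but since this repository is tagged `text-generation` with `custom_code`, a minimal loading sketch could look like the following (an assumption based on the repo tags, not a verified example — generation settings and prompts are placeholders):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "ajyl/joint_seed_400_512d_8L_8H_datatype_full_alpha_1.0"

# trust_remote_code is assumed to be required because the repo is tagged custom_code
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

inputs = tokenizer("Hello", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```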
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
gradientrouting-spar/gcd_syco_cap_math_limit_proxy_data_to-25_seed_1 | gradientrouting-spar | 2025-06-14T04:11:08Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-14T04:11:00Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
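The repository carries no pipeline tag, so the architecture is unknown; a generic inspection-first sketch (an assumption — it presumes the checkpoint loads with the `Auto` classes) would be:

```python
from transformers import AutoConfig, AutoModel

model_id = "gradientrouting-spar/gcd_syco_cap_math_limit_proxy_data_to-25_seed_1"

# Inspect the config first to learn the model type before committing to a head class
config = AutoConfig.from_pretrained(model_id)
print(config.model_type)

model = AutoModel.from_pretrained(model_id)
```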
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
TOMFORD79/incen31 | TOMFORD79 | 2025-06-14T03:57:04Z | 0 | 0 | null | [
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | 2025-06-14T03:04:40Z | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
dgambettaphd/M_llm2_run1_gen0_WXS_doc1000_synt64_lr1e-04_acm_MPP | dgambettaphd | 2025-06-14T03:54:28Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"unsloth",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2025-06-14T03:52:40Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
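No snippet is given, but the repository is tagged `llama`, `text-generation`, `4-bit`, and `bitsandbytes`, so a loading sketch with a matching quantization config could look like this (an assumption from the tags; the prompt and generation settings are placeholders):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

model_id = "dgambettaphd/M_llm2_run1_gen0_WXS_doc1000_synt64_lr1e-04_acm_MPP"

# The repo tags suggest 4-bit bitsandbytes weights, so load with a matching config
bnb_config = BitsAndBytesConfig(load_in_4bit=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)

prompt = "Write one sentence about language models."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=48)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```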
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
tanbinh2210/onl_contrastive_bge-m3 | tanbinh2210 | 2025-06-14T03:50:31Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"xlm-roberta",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:71937",
"loss:OnlineContrastiveLoss",
"arxiv:1908.10084",
"base_model:BAAI/bge-m3",
"base_model:finetune:BAAI/bge-m3",
"autotrain_compatible",
"text-emb... | sentence-similarity | 2025-06-14T03:34:13Z | ---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:71937
- loss:OnlineContrastiveLoss
base_model: BAAI/bge-m3
widget:
- source_sentence: Nแบฟu tรดi muแปn nghแป viแปc tแบกi cรดng ty, tรดi cแบงn lร m ฤฦกn tแปซ nร o?
sentences:
- 'Khแบฏc phแปฅc sแปฑ cแป khรดng cรณ quyแปn cร i ฤแบทt lแปch biแปu
ฤแป cร i ฤแบทt ฤฦฐแปฃc lแปch biแปu bแบกn phแบฃi cรณ quyแปn Quแบฃn lรฝ tแบฅt cแบฃ trong phรขn hแป lแปch
biแปu. ฤแป cรณ quyแปn nร y, bแบกn cรณ thแป liรชn hแป vแปi bแป phแบญn cรณ quyแปn Quแบฃn trแป hแป thแปng
ฤแป yรชu cแบงu cแบฅp quyแปn cho bแบกn.
Bแป phแบญn cรณ quyแปn quแบฃn trแป hแป thแปng cรณ thแป cแบฅp quyแปn quแบฃn lรฝ tแบฅt cแบฃ module lแปch
biแปu cho nhรขn sแปฑ bแบฑng cรกch sau:
1. Click vร o Avatar , chแปn Cร i ฤแบทt hแป thแปng . Chแปn ฤแปi tฦฐแปฃng Hแป thแปng
2. Tแบกi mร n danh sรกch ngฦฐแปi dรนng, click chuแปt phแบฃi vร o tร i khoแบฃn nhรขn sแปฑ cแบงn cแบฅp
quyแปn tแบกo mแปi, chแปn tรกc vแปฅ Sแปญa
3. Tแบกi mร n hรฌnh sแปญa tร i khoแบฃn, tรญch vร o รด Tรนy chแปnh quyแปn . Tรฌm mแปฅc Lแปch biแปu
vร phรขn quyแปn Quแบฃn lรฝ tแบฅt cแบฃ. (Bแบกn cรณ thแป tรนy chแปn quyแปn quแบฃn lรฝ cรดng ty, quแบฃn
lรฝ chi nhรกnh, quแบฃn lรฝ phรฒng ban tรนy theo vai trรฒ sแปญ dแปฅng)
4. Bแบฅm Cแบญp nhแบญt ฤแป lฦฐu thรดng tin
cร i ฤแบทt quyแปn quแบฃn lรฝ phรขn hแป lแปch biแปu'
- 'PHรN Hแป MARKETING
Bแบกn muแปn truy cแบญp nhanh ฤแบฟn Marketing?ย Truy cแบญp vร o phรขn hแป Marketing
Nแปi dung
Phรขn hแป Marketing giแบฃi quyแบฟt bร i toรกn gรฌ?
Nhแปฏng gรฌ bแบกn cรณ thแป lร m
Phรขn hแป Marketing giแบฃi quyแบฟt bร i toรกn gรฌ?
Phรขn hแป Marketing trรชn phแบงn mแปm 1Office giรบp doanh nghiแปp quแบฃn lรฝ toร n diแปn cรกc
hoแบกt ฤแปng tiแบฟp thแป โ tแปซ quแบฃn lรฝ chiแบฟn dแปch cho ฤแบฟn quแบฃn lรฝ cรกc cuแปc gแปi tแปซ tแปng
ฤร i.
Phรขn hแป nร y sแบฝ giรบp doanh nghiแปp vร bแป phแบญn Marketing giแบฃi quyแบฟt cรกc bร i toรกn
nhฦฐ sau:
Quแบฃn lรฝ vร theo dรตi chiแบฟn dแปch Email vร SMS cแปงa doanh nghiแปp
Thแปng kรช sแป lฦฐแปฃt tiแบฟp cแบญn, sแป liรชn hแป kรฌ vแปng vร chi phรญ thแปฑc tแบฟ cแปงa mแปi chiแบฟn
dแปch cแปฅ thแป
Kแบฟt nแปi vแปi nhiแปu nhร cung cแบฅp tแปng ฤร i: CMC, Omi, Stringee...
Quแบฃn lรฝ danh sรกch cรกc cuแปc gแปi tแปซ tแปng ฤร i ฤแบฟn cรกc liรชn hแป
Theo dรตi trแบกng thรกi cuแปc gแปi, ฤiแปn thoแบกi viรชn vร ghi รขm cuแปc gแปi'
- "Khแบฏc phแปฅc sแปฑ cแป khรดng tแบกo ฤฦฐแปฃc ฤฦกn xin nghแป phรฉp\nKiแปm tra lแบกi thรดng tin tแปi\
\ bแป phแบญn Hร nh chรญnh nhรขn sแปฑ ฤแป nแบฏm ฤฦฐแปฃc quy ฤแปnh vแป thแปi gian tแปi thiแปu cแบงn tแบกo\
\ ฤฦกn trฦฐแปc khi xin nghแป hoแบทc thแปi gian tแปi ฤa ฤฦฐแปฃc phรฉp bแป sung ฤฦกn sau khi nghแป..\
\ \nNแบฟu cแบงn ฤiแปu chแปnh thแปi gian cho phรฉp tแบกo ฤฦกn, Bแป nhแบญn Nhรขn sแปฑ cแบงn truy cแบญp\
\ Cร i ฤแบทt ฤฦกn tแปซ ฤแป tiแบฟn hร nh ฤiแปu chแปnh. \nBแป phแบญn Hร nh chรญnh nhรขn sแปฑ cรณ thแป\
\ can thiแปp tรนy chแปnh phรฉp thแปง cรดng ฤแป cแบญp nhแบญt lแบกi quแปน phรฉp cho nhรขn sแปฑ แป phรขn\
\ hแป Chแบฅm cรดng"
- source_sentence: Ai lร ngฦฐแปi cรณ quyแปn xem, sแปญa vร xoรก loแบกi bแบฃng lฦฐฦกng sau khi tรดi
tแบกo xong?
sentences:
- 'Quแบฃn lรฝ nhร cung cแบฅp vร cรดng nแปฃ
Tแบกo mแปi cรดng nแปฃ
ฤแป tแบกo mแปi cรดng nแปฃ, ngฦฐแปi dรนng thแปฑc hiแปn cรกc bฦฐแปc sau:
1. Truy cแบญp phรขn hแป Mua hร ng
2. แป gรณc trรชn cรนng bรชn trรกi, chแปn Tแบกo mแปi Chแปn Cรดng nแปฃ
3. ฤiแปn thรดng tin ฤแบงy ฤแปง vร Cแบญp nhแบญt.
Tแบกo mแปi cรดng nแปฃ
Danh sรกch cรกc trฦฐแปng dแปฏ liแปu trong tแบกo mแปi cรดng nแปฃ
Trฦฐแปng dแปฏ liแปu: Kiแปu cรดng nแปฃ, Mรด tแบฃ: Chแปn kiแปu cรดng nแปฃ: Dฦฐ nแปฃ: Lร cรกc khoแบฃn ฤรฃ
trแบฃ cho nhร cung cแบฅp (Tแปซ cรกc ฤฦกn hร ng mua hoแบทc phรกt sinh nแปฃ khi tแบกo cรดng nแปฃ)
Dฦฐ cรณ: Lร cรกc khoแบฃn phแบฃi trแบฃ cho nhร cung cแบฅp (Tแปซ cรกc phiแบฟu chi hoแบทc phรกt sinh
cรณ khi tแบกo cรดng nแปฃ)
Trฦฐแปng dแปฏ liแปu: Cรกc trฦฐแปng dแปฏ liแปu khรกc, Mรด tแบฃ: Ngร y, Sแป tiแปn, Lรฝ do, chแปn nhร
cung cแบฅp
Tรฌm hiแปu thรชm tแบกi Trung tรขm hแป trแปฃ 1Office'
- 'Tรดi muแปn bแบฃng lฦฐฦกng sau khi hoร n thร nh sแบฝ ฤฦฐแปฃc gแปญi ฤแบฟn Ban giรกm ฤแปc phรช duyแปt,
sau ฤรณ gแปญi ฤแบฟn Kแบฟ toรกn ฤแป tiแบฟn thร nh chi tiแปn thรฌ phแบฃi lร m nhฦฐ thแบฟ nร o?
Truy cแบญp Cร i ฤแบทt hแป thแปng Bแบฃng lฦฐฦกng Bแบญt cร i ฤแบทt รp dแปฅng quy trรฌnh duyแปt vร o bแบฃng
lฦฐฦกng . Sau ฤรณ cแบฅu hรฌnh quy trรฌnh duyแปt bแบฃng lฦฐฦกng.'
- 'Thแปง tแปฅc 630c - Giแบฃi quyแบฟt hฦฐแปng trแปฃ cแบฅp dฦฐแปกng sแปฉc, phแปฅc hแปi sแปฉc khแปe sau แปm ฤau,
thai sแบฃn, tai nแบกn lao ฤแปng, bแปnh nghแป nghiแปp
Hฦฐแปng dแบซn kรช khai
Hฦฐแปng dแบซn Kรช khai nhรฃn Dฦฐแปกng sแปฉc cho hแป sฦก bแบฃo hiแปm trรชn 1-IVAN:
Bฦฐแปc 1: Tแบกi hแป sฦก bแบฃo hiแปm muแปn kรช khai nhรฃn > Tรกc vแปฅ chuแปt phแบฃi > แปm ฤau, Thai
sแบฃn, Dฦฐแปกng sแปฉc
Bฦฐแปc 2: Bแป sung thรดng tin tแบกi Form kรช khai Dฦฐแปกng sแปฉcย > Cแบญp nhแบญt
Chแปn loแบกi chแบฟ ฤแป: Dฦฐแปกng sแปฉc
Chแปn nhรณm hฦฐแปng tฦฐฦกng แปฉng:
+ Dฦฐแปกng sแปฉc sau แปm
+ Nghแป thai sแบฃn
+ Nghแป sau tai nแบกn lao ฤแปng
Chแปn thแปง tแปฅc: 630c - Giแบฃi quyแบฟt hฦฐแปng trแปฃ cแบฅp dฦฐแปกng sแปฉc, phแปฅc hแปi sแปฉc khแปe sau
แปm ฤau, thai sแบฃn, tai nแบกn lao ฤแปng, bแปnh nghแป nghiแปp'
- source_sentence: Sแปฑ khรกc biแปt chรญnh giแปฏa tร i liแปu cรดng ty vร tร i liแปu cรก nhรขn lร
gรฌ?
sentences:
- "Quแบฃn lรฝ Email hแป thแปng\n2. Quแบฃn lรฝ email trรชn hแป thแปng\nKhi bแบกn ฤฦฐแปฃc phรขn quyแปn\
\ sแปญ dแปฅng nhiแปu tร i khoแบฃn gแปญi ฤi, hรฃy chแปn 1 tร i khoแบฃn ฤแป gแปญi. Trong trฦฐแปng hแปฃp\
\ chแป ฤฦฐแปฃc phรขn quyแปn 1 tร i khoแบฃn, hแป thแปng sแบฝ giรบp bแบกn hiแปn thแป luรดn tร i khoแบฃn\
\ ฤรณ.. \nKiแปm tra lแบกi lแปch sแปญ gแปญi email:. \nChแปn Cร i ฤแบทt hแป thแปng > Email, SMS\
\ > Email > Lแปch sแปญ gแปญi Email"
- 'PHรN Hแป VฤN BแบขN
Cรกc loแบกi vฤn bแบฃn
Mแปi loแบกi vฤn bแบฃn sแบฝ cรณ mแปฅc ฤรญch sแปญ dแปฅng khรกc nhau, Quแบฃn lรฝ vฤn bแบฃn theo mแปฅc ฤรญch
sแบฝ giรบp viแปc quแบฃn lรฝ, theo dรตi trแป nรชn dแป
dร ng vร chรญnh xรกc hฦกn.
Loแบกi vฤn bแบฃn: Vฤn bแบฃn ฤแบฟn, Nแปi dung: Vฤn bแบฃn ฤแบฟn giรบp doanh nghiแปp quแบฃn lรฝ cรกc
vฤn bแบฃn, cรดng vฤn ฤฦฐแปฃc gแปญi ฤแบฟn doanh nghiแปp
Loแบกi vฤn bแบฃn: Vฤn bแบฃn ฤi, Nแปi dung: Vฤn bแบฃn ฤi giรบp doanh nghiแปp quแบฃn lรฝ cรกc vฤn
bแบฃn, cรดng vฤn ฤฦฐแปฃc soแบกn thแบฃo vร ban hร nh tแปซ doanh nghiแปp ra cรกc cฦก quan bรชn ngoร i
Loแบกi vฤn bแบฃn: Vฤn bแบฃn nแปi bแป, Nแปi dung: Vฤn bแบฃn nแปi bแป giรบp doanh nghiแปp quแบฃn
lรฝ, theo dรตi cรกc vฤn bแบฃn ฤฦฐแปฃc ban hร nh trong nแปi bแป doanh nghiแปp
Tรฌm hiแปu thรชm tแบกi trung tรขm hแป trแปฃ 1Office'
- 'Khแบฏc phแปฅc sแปฑ cแป ฤฦกn tแปซ tแปฑ ฤแปng chuyแปn khรดng duyแปt
Cรณ hai nguyรชn nhรขn chรญnh dแบซn ฤแบฟn viแปc ฤฦกn tแปซ tแปฑ ฤแปng bแป chuyแปn sang trแบกng thรกi
khรดng duyแปt:
1. Do cร i ฤแบทt tแปฑ ฤแปng chuyแปn trแบกng thรกi ฤฦกn tแปซ
Trong cร i ฤแบทt ฤฦกn tแปซ cรณ 1 cร i ฤแบทt Thแปi gian ฤแป hแป thแปng sแบฝ tแปฑ ฤแปng chuyแปn cรกc
ฤฦกn tแปซ ฤang แป trแบกng thรกi chแป duyแปt sang khรดng duyแปt sau khi chแปt ฤฦกn tแปซ. Cร i ฤแบทt
nร y cho phรฉp nแบฟu sau khi Chแปt ฤฦกn trรชn bแบฃng chแบฅm cรดng, sau khoแบฃng thแปi gian cร i
ฤแบทt, ฤฦกn tแปซ Chฦฐa duyแปt sแบฝ bแป chuyแปn trแบกng thรกi. Trฦฐแปng hแปฃp nร y bแบกn cแบงn kiแปm tra
lแบกi trแบกng thรกi Chแปt ฤฦกn trรชn bแบฃng chแบฅm cรดng vร cร i ฤแบทt ฤฦกn tแปซ.
Truy cแบญp Bแบฃng chแบฅm cรดng ฤแป kiแปm tra trแบกng thรกi Chแปt ฤฦกn
Liรชn hแป bแป phแบญn nhรขn sแปฑ ฤแป kiแปm tra Cร i ฤแบทt ฤฦกn tแปซ
2. Do cร i ฤแบทt cแปงa quy trรฌnh duyแปt
Trong cแบฅu hรฌnh quy trรฌnh duyแปt cรณ node Khรดng duyแปt , nแบฟu cร i ฤแบทt luแปng duyแปt cรณ
chแบกy ฤiแปu kiแปn khรดng duyแปt, ฤฦกn cลฉng sแบฝ tแปฑ ฤแปng chuyแปn trแบกng thรกi Khรดng duyแปt
. Trฦฐแปng hแปฃp nร y bแบกn cรณ thแป truy cแบญp Cร i ฤแบทt Duyแปt ฤแปi tฦฐแปฃng ฤแป kiแปm tra, ฤiแปu
chแปnh.'
- source_sentence: Sau khi thแปฑc hiแปn tรกc vแปฅ 'Xuแบฅt Biแปu mแบซu', thรดng tin trรชn cรดng viแปc
quy trรฌnh sแบฝ ฤฦฐแปฃc xuแบฅt ra loแบกi file nร o?
sentences:
- 'Chiแบฟn dแปch tuyแปn dแปฅng
Tแบกo mแปi tin tuyแปn dแปฅng
1. Mแป phรขn hแป Tuyแปn dแปฅng
2. Tแบกi danh mแปฅc trรกi, Chแปn Chiแบฟn dแปch Chแปn Chiแบฟn dแปch tuyแปn dแปฅng muแปn ฤฤng tin
3. Chแปn biแปu tฦฐแปฃng Tแบกo tin
Bแบฃng mรด tแบฃ cรกc trฦฐแปng dแปฏ liแปu trong tแบกo mแปi tin tuyแปn dแปฅng:
Trฦฐแปng dแปฏ liแปu: Thรดng tin tแปซ chiแบฟn dแปch tuyแปn dแปฅng, Mรด tแบฃ: Thรดng tin nhฦฐ tรชn chiแบฟn
dแปch, vแป trรญ tuyแปn dแปฅng sแบฝ ฤฦฐแปฃc lแบฅy trแปฑc tiแบฟp tแปซ chiแบฟn dแปch tuyแปn dแปฅng
Trฦฐแปng dแปฏ liแปu: Kรชnh tuyแปn dแปฅng, Mรด tแบฃ: Chแปn kรชnh tuyแปn dแปฅng muแปn ฤฤng tin lรชn
Trฦฐแปng dแปฏ liแปu: Thรดng tin tuyแปn dแปฅng, Mรด tแบฃ: Nhแบญp thรดng tin cho vแป trรญ แปฉng tuyแปn
Lฦฐu รฝ : Vแปi mแปi kรชnh tuyแปn dแปฅng, ngฦฐแปi dรนng chแป ฤฦฐแปฃc phรฉp ฤฤng 1 tin. Nแบฟu trรชn
kรชnh tuyแปn dแปฅng ฤรณ ฤรฃ cรณ tin tแปซ trฦฐแปc, ngฦฐแปi dรนng cรณ thแป sแปญa tin hoแบทc tแบกo chiแบฟn
dแปch khรกc.'
- 'Tรดi muแปn liรชn kแบฟt 1Office vแปi cแปng tuyแปn dแปฅng thรฌ phแบฃi lร m sao?
Hiแปn tแบกi, 1Office ฤang cung cแบฅp cรกc phฦฐฦกng thแปฉc hแป trแปฃ Khรกch hร ng lแบฅy CV tแปซ cรกc
cแปng: Email & Webform.
Anh/chแป cรณ thแป xem chi tiแบฟt tแบกi Hฦฐแปng dแบซn sแปญ dแปฅng phรขn hแป Tuyแปn dแปฅng.'
- "Quแบฃn lรฝ thu chi\nQuแบฃn lรฝย phiแบฟu chi\nTรกc vแปฅ: Duyแปt/ Hoร n duyแปt, Mรด tแบฃ: Ngฦฐแปi ฤฦฐแปฃc\
\ cร i ฤแบทt lร ngฦฐแปi duyแปt phiแบฟu chi sแบฝ cรณ quyแปn thแปฑc hiแปn tรกc vแปฅ Duyแปt/Hoร n duyแปt.\
\ Sau khi Phiแบฟu chi ฤฦฐแปฃc duyแปt, sแป tiแปn trong phiแบฟu chi sแบฝ trแปซ vร o tร i khoแบฃn ฤรฃ\
\ ฤฦฐแปฃc chแปn trong phiแบฟu chi ฤรณ, tฦฐฦกng แปฉng sแบฝ cแบญp nhแบญt tแปฑ ฤแปng vร o ฤรฃ thu cแปงa ฤฦกn\
\ hร ng/Hแปฃp ฤแปng/Cรดng nแปฃ khรกch hร ng. Sau khi duyแปt, ngฦฐแปi dรนng cรณ thแป Hoร n duyแปt,\
\ sau khi hoร n duyแปt, sแป tiแปn cลฉng sแบฝ ฤฦฐแปฃc hoร n lแบกi. \nTรกc vแปฅ: Tแบกo mแปi, Mรด tแบฃ:\
\ Tแบกo mแปi cรกc ฤแปi tฦฐแปฃng liรชn quan ฤแบฟn Phiแบฟu chi nhฦฐ: Dแปฑ รกn, cรดng viแปc, quy trรฌnh,\
\ ฤฦกn hร ng bรกn, hแปฃp ฤแปng bรกn, ฤฦกn hร ng mua, hแปฃp ฤแปng mua, vแบญt tฦฐ, phiแบฟu nhแบญp kho,\
\ phiแบฟu xuแบฅt kho, phiแบฟu chi, ฤแปi tรกc, ฤฦกn cรดng tรกc.... \nTรกc vแปฅ: Kรฝ sแป, Mรด tแบฃ:\
\ Tแบกo hแป sฦก kรฝ sแป liรชn quan ฤแบฟn phiแบฟu chi\nTรกc vแปฅ: Biแปu mแบซu, Mรด tแบฃ: Khi phiแบฟu\
\ chi cแบงn in ra vร lฦฐu trแปฏ, bแบกn cรณ thแป chแปn in ra/ xuแบฅt biแปu mแบซu tฦฐฦกng แปฉng (ฤiแปu\
\ kiแปn ฤรฃ cร i biแปu mแบซu phiแบฟu chi)\nTรกc vแปฅ: Thรชm tร i liแปu, Mรด tแบฃ: ฤรญnh kรจm tร i\
\ liแปu/minh chแปฉng liรชn quan\nTรกc vแปฅ: Sแปญa, Mรด tแบฃ: Tรกc vแปฅ Sแปญa chแป ฤฦฐแปฃc thแปฑc hiแปn\
\ khi Phiแบฟu chi chฦฐa ฤฦฐแปฃc duyแปt แป bแบฅt kแปณ bฦฐแปc nร o\nTรกc vแปฅ: Xรณa, Mรด tแบฃ: Chแป ฤฦฐแปฃc\
\ phรฉp Xรณa khi phiแบฟu chi แป trแบกng thรกi Chแป duyแปt, Khรดng duyแปt.\nTรกc vแปฅ: Nhรขn bแบฃn,\
\ Mรด tแบฃ: Tรกc vแปฅ nร y cho phรฉp bแบกn tแบกo nhanh 1 phiแบฟu chi mแปi vแปi cรกc thรดng tin tฦฐฦกng\
\ tแปฑ phiแบฟu chi ฤรฃ tแบกo ฤแป tiแบฟt kiแปm thแปi gian nhแบญp liแปu."
- source_sentence: ฤฦกn xin nghแป phรฉp cรณ nhแปฏng trแบกng thรกi nร o vร รฝ nghฤฉa cแปงa tแปซng trแบกng
thรกi ฤรณ lร gรฌ?
sentences:
- 'Giao viแปc
Tแบกo mแปi cรดng viแปc thฦฐแปng
Giao viแปc lร tรกc vแปฅ nhแบฑm xรกc ฤแปnh mแปฅc tiรชu, yรชu cแบงu cแปงa cรดng viแปc. Cรณ kรชnh giao
tiแบฟp rรต rร ng vแป cรดng viแปc nhแบฑm hแป trแปฃ vร theo dรตi tiแบฟn ฤแป cแปงa nhรขn viรชn trong
quรก trรฌnh lร m viแปc, giแบฃi ฤรกp thแบฏc mแบฏc, giแบฃi quyแบฟt vแบฅn ฤแป vร ฤฦฐa ra gรณp รฝ khi cแบงn
thiแบฟt. Tแปซ ฤรณ ฤรกnh giรก kแบฟt quแบฃ cรดng viแปc, khen thฦฐแปng hoแบทc phรช bรฌnh nhรขn viรชn tรนy
theo hiแปu suแบฅt lร m viแปc cแปงa hแป.
ฤแป tแบกo mแปi cรดng viแปc thฦฐแปng, ngฦฐแปi dรนng thแปฑc hiแปn cรกc bฦฐแปc sau:
1. Truy cแบญp phรขn hแป Cรดng viแปc
2. แป gรณc trรชn cรนng bรชn trรกi, chแปn Tแบกo mแปi Chแปn Cรดng viแปc
3. ฤiแปn thรดng tin vร o form tแบกo mแปi vร Cแบญp nhแบญt
Tแบกo mแปi cรดng viแปc thฦฐแปng
Bแบฃng mรด tแบฃ cรกc trฦฐแปng dแปฏ liแปu trong tแบกo mแปi cรดng viแปc:
Trฦฐแปng dแปฏ liแปu: Tรชn cรดng viแปc, Mรด tแบฃ: Mแปฅc tiรชu, yรชu cแบงu cแปงa cรดng viแปc
Trฦฐแปng dแปฏ liแปu: Bแบฏt ฤแบงu vร Kแบฟt thรบc Giao viแปc theo giแป, Mรด tแบฃ: Thแปi gian dแปฑ kiแบฟn
hoร n thร nh cรดng viแปc
Trฦฐแปng dแปฏ liแปu: Ngฦฐแปi thแปฑc hiแปn, Mรด tแบฃ: Ngฦฐแปi cรณ quyแปn bรกo cรกo tiแบฟn ฤแป vร cแบญp
nhแบญt thแปi gian thแปฑc tแบฟ cแปงa cรดng viแปc
Trฦฐแปng dแปฏ liแปu: Ngฦฐแปi giao viแปc, Mรด tแบฃ: Lร ngฦฐแปi cรณ quyแปn quแบฃn lรฝ cรดng viแปc, chuyแปn
trแบกng thรกi cรดng viแปc vร cแบญp nhแบญt thแปi gian cรดng viแปc
Trฦฐแปng dแปฏ liแปu: Ngฦฐแปi theo dรตi, Mรด tแบฃ: Lร ngฦฐแปi ฤฦฐแปฃc xem cรดng viแปc vร nhแบญn thรดng
bรกo liรชn quan ฤแบฟn cรดng viแปc'
- 'Cร i ฤแบทt ฤฦกn tแปซ
Cร i ฤแบทt danh mแปฅc
Cร i ฤแบทt danh mแปฅc lร nฦกi ngฦฐแปi dรนng cร i ฤแบทt cรกc lรฝ do cแปงa ฤฦกn tแปซ hร nh chรญnh, phฦฐฦกng
tiแปn cรดng tรกc, cรดng tรกc phรญ
1. Mแป phรขn hแป ฤฦกn tแปซ tแบกi HRM
2. แป trรชn cรนng bรชn phแบฃi, chแปn Cร i ฤแบทt
3. Tแบกi menu trรกi, chแปn Danh mแปฅc
4. Chแปn Tแบกo mแปi ฤแป tแบกo mแปi
5. Nhแบฅp vร o Cแบญp nhแบญt ฤแป lฦฐu thay ฤแปi
cร i ฤแบทt danh mแปฅc
Trฦฐแปng dแปฏ liแปu: Tแปi ฤa, Mรด tแบฃ: Hแบกn chแบฟ nhรขn sแปฑ sแปญ dแปฅng ฤฦกn quรก nhiแปu lแบงn trong
tuแบงn, thรกng hoแบทc nฤm
Trฦฐแปng dแปฏ liแปu: Kรฝ hiแปu, Mรด tแบฃ: Kรฝ hiแปu hiแปn thแป trรชn bแบฃng chแบฅm cรดng Chแป hiแปn
thแป kรฝ hiแปu vแปi ฤฦกn xin nghแป
Trฦฐแปng dแปฏ liแปu: Tรญnh cรดng, Mรด tแบฃ: Nแบฟu lรฝ do bแบกn chแปn Cรณ tรญnh cรดng thรฌ khoแบฃng thแปi
gian xin nghแป/vแบฏng mแบทt cแปงa bแบกn sแบฝ ฤฦฐแปฃc tรญnh vร o cรดng lร m viแปc
Trฦฐแปng dแปฏ liแปu: Trแบกng thรกi, Mรด tแบฃ: Hoแบกt ฤแปng: ngฦฐแปi dรนng cรณ thแป chแปn lรฝ do nร y
khi tแบกo ฤฦกn Khรดng hoแบกt ฤแปng: ngฦฐแปi dรนng sแบฝ khรดng nhรฌn thแบฅy lรฝ do nร y trong danh
sรกch
Trฦฐแปng dแปฏ liแปu: Yรชu cแบงu chแปt, Mรด tแบฃ: Nแบฟu lรฝ do vแบฏng mแบทt cแปงa bแบกn Cรณ yรชu cแบงu chแปt
vรขn tay thรฌ bแบกn cแบงn phแบฃi chแปt vรขn tay trong khoแบฃng thแปi gian bแบฏt ฤแบงu vแบฏng mแบฏt
vร kแบฟt thรบc vแบฏng mแบทt
Lฦฐu รฝ: Khรดng thแป xรณa nhแปฏng lรฝ do cแปงa ฤฦกn xin nghแป ฤฦฐแปฃc hแป thแปng tแบกo ra, chแป cรณ
thแป chuyแปn lรฝ do sang trแบกng thรกi khรดng hoแบกt ฤแปng.'
- 'Thฦฐ viแปn tร i nguyรชn
Thฦฐ viแปn tร i nguyรชn lร mแปt tรญnh nฤng mแปi cแปงa 1Office. ฤรขy lร kho tร i nguyรชn chแปฉa
cรกc dแปฏ liแปu mแบซu mร 1Office cung cแบฅp cho Khรกch hร ng. Nรณ hoแบกt ฤแปng giแปng nhฦฐ Google
Play hay Chrome Extension. Tแบกi ฤรขy sแบฝ cรณ cรกc Template mแบซu, ngฦฐแปi dรนng cรณ thแป vร o
xem thแปญ แบฃnh demo hoแบทc mรด tแบฃ, nแบฟu phรน hแปฃp vแปi ฤแบทc thรน cรดng viแปc, chแปn Cร i ฤแบทt ฤแป
tแบฃi vแป. Khi ฤรณ, tร i nguyรชn vแปซa tแบฃi vแป sแบฝ ฤฦฐแปฃc cแบญp nhแบญt vร o ฤแปi tฦฐแปฃng tฦฐฦกng แปฉng
trรชn phแบงn mแปm.
Sแปญ dแปฅng thฦฐ viแปn tร i nguyรชn nhฦฐ thแบฟ nร o?
ฤแป cรณ ฤฦฐแปฃc nhแปฏng tร i nguyรชn mร 1Office, ngฦฐแปi dรนng thแปฑc hiแปn theo cรกc bฦฐแปc:
Bฦฐแปc 1. Tแบฃi tร i liแปu vแป phแบงn mแปm
Bฦฐแปc 2. Cร i ฤแบทt thรดng sแป cho dแปฏ liแปu tแบฃi vแป
Bฦฐแปc 3. Sแปญ dแปฅng phแบงn mแปm vแปi cรกc tร i liแปu ฤรฃ ฤฦฐแปฃc cร i ฤแบทt trรชn phแบงn mแปm'
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---
# SentenceTransformer based on BAAI/bge-m3
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3). It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3) <!-- at revision 5617a9f61b028005a4858fdac845db406aefb181 -->
- **Maximum Sequence Length:** 8192 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
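Because the pipeline ends with a `Normalize()` module, the output embeddings are unit-length, so cosine similarity reduces to a plain dot product. A minimal numpy sketch with hypothetical 2-d vectors standing in for the real 1024-d outputs:

```python
import numpy as np

# Two hypothetical embedding vectors (real outputs are 1024-dimensional).
a = np.array([3.0, 4.0])
b = np.array([1.0, 0.0])

# L2-normalize, as the Normalize() module does.
a_n = a / np.linalg.norm(a)
b_n = b / np.linalg.norm(b)

# For unit vectors, cosine similarity equals the dot product.
cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
dot = np.dot(a_n, b_n)
print(cos, dot)  # both 0.6
```

This is why normalized embeddings can be indexed with fast inner-product search while still behaving like cosine similarity.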
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the ๐ค Hub
model = SentenceTransformer("tanbinh2210/onl_contrastive_bge-m3")
# Run inference
sentences = [
'ฤฦกn xin nghแป phรฉp cรณ nhแปฏng trแบกng thรกi nร o vร รฝ nghฤฉa cแปงa tแปซng trแบกng thรกi ฤรณ lร gรฌ?',
'Cร i ฤแบทt ฤฦกn tแปซ\nCร i ฤแบทt danh mแปฅc\nCร i ฤแบทt danh mแปฅc lร nฦกi ngฦฐแปi dรนng cร i ฤแบทt cรกc lรฝ do cแปงa ฤฦกn tแปซ hร nh chรญnh, phฦฐฦกng tiแปn cรดng tรกc, cรดng tรกc phรญ\n1. Mแป phรขn hแป ฤฦกn tแปซ tแบกi HRM\n2. แป trรชn cรนng bรชn phแบฃi, chแปn Cร i ฤแบทt\n3. Tแบกi menu trรกi, chแปn Danh mแปฅc\n4. Chแปn Tแบกo mแปi ฤแป tแบกo mแปi\n5. Nhแบฅp vร o Cแบญp nhแบญt ฤแป lฦฐu thay ฤแปi\ncร i ฤแบทt danh mแปฅc\nTrฦฐแปng dแปฏ liแปu: Tแปi ฤa, Mรด tแบฃ: Hแบกn chแบฟ nhรขn sแปฑ sแปญ dแปฅng ฤฦกn quรก nhiแปu lแบงn trong tuแบงn, thรกng hoแบทc nฤm\nTrฦฐแปng dแปฏ liแปu: Kรฝ hiแปu, Mรด tแบฃ: Kรฝ hiแปu hiแปn thแป trรชn bแบฃng chแบฅm cรดng Chแป hiแปn thแป kรฝ hiแปu vแปi ฤฦกn xin nghแป\nTrฦฐแปng dแปฏ liแปu: Tรญnh cรดng, Mรด tแบฃ: Nแบฟu lรฝ do bแบกn chแปn Cรณ tรญnh cรดng thรฌ khoแบฃng thแปi gian xin nghแป/vแบฏng mแบทt cแปงa bแบกn sแบฝ ฤฦฐแปฃc tรญnh vร o cรดng lร m viแปc\nTrฦฐแปng dแปฏ liแปu: Trแบกng thรกi, Mรด tแบฃ: Hoแบกt ฤแปng: ngฦฐแปi dรนng cรณ thแป chแปn lรฝ do nร y khi tแบกo ฤฦกn Khรดng hoแบกt ฤแปng: ngฦฐแปi dรนng sแบฝ khรดng nhรฌn thแบฅy lรฝ do nร y trong danh sรกch\nTrฦฐแปng dแปฏ liแปu: Yรชu cแบงu chแปt, Mรด tแบฃ: Nแบฟu lรฝ do vแบฏng mแบทt cแปงa bแบกn Cรณ yรชu cแบงu chแปt vรขn tay thรฌ bแบกn cแบงn phแบฃi chแปt vรขn tay trong khoแบฃng thแปi gian bแบฏt ฤแบงu vแบฏng mแบฏt vร kแบฟt thรบc vแบฏng mแบทt\nLฦฐu รฝ: Khรดng thแป xรณa nhแปฏng lรฝ do cแปงa ฤฦกn xin nghแป ฤฦฐแปฃc hแป thแปng tแบกo ra, chแป cรณ thแป chuyแปn lรฝ do sang trแบกng thรกi khรดng hoแบกt ฤแปng.',
'Thฦฐ viแปn tร i nguyรชn\nThฦฐ viแปn tร i nguyรชn lร mแปt tรญnh nฤng mแปi cแปงa 1Office. ฤรขy lร kho tร i nguyรชn chแปฉa cรกc dแปฏ liแปu mแบซu mร 1Office cung cแบฅp cho Khรกch hร ng. Nรณ hoแบกt ฤแปng giแปng nhฦฐ Google Play hay Chrome Extension. Tแบกi ฤรขy sแบฝ cรณ cรกc Template mแบซu, ngฦฐแปi dรนng cรณ thแป vร o xem thแปญ แบฃnh demo hoแบทc mรด tแบฃ, nแบฟu phรน hแปฃp vแปi ฤแบทc thรน cรดng viแปc, chแปn Cร i ฤแบทt ฤแป tแบฃi vแป. Khi ฤรณ, tร i nguyรชn vแปซa tแบฃi vแป sแบฝ ฤฦฐแปฃc cแบญp nhแบญt vร o ฤแปi tฦฐแปฃng tฦฐฦกng แปฉng trรชn phแบงn mแปm.\nSแปญ dแปฅng thฦฐ viแปn tร i nguyรชn nhฦฐ thแบฟ nร o?\nฤแป cรณ ฤฦฐแปฃc nhแปฏng tร i nguyรชn mร 1Office, ngฦฐแปi dรนng thแปฑc hiแปn theo cรกc bฦฐแปc:\nBฦฐแปc 1. Tแบฃi tร i liแปu vแป phแบงn mแปm\nBฦฐแปc 2. Cร i ฤแบทt thรดng sแป cho dแปฏ liแปu tแบฃi vแป\nBฦฐแปc 3. Sแปญ dแปฅng phแบงn mแปm vแปi cรกc tร i liแปu ฤรฃ ฤฦฐแปฃc cร i ฤแบทt trรชn phแบงn mแปm',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 1024)
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# torch.Size([3, 3])
```
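For semantic search, you would encode a query and candidate passages separately and rank passages by similarity. Since the embeddings are normalized, ranking is a dot product followed by a sort; a sketch with small hypothetical vectors standing in for real `model.encode` output:

```python
import numpy as np

# Hypothetical unit-length embeddings standing in for model.encode output.
query_emb = np.array([1.0, 0.0])
passage_embs = np.array([
    [0.6, 0.8],    # passage 0
    [0.96, 0.28],  # passage 1
    [0.0, 1.0],    # passage 2
])

# Embeddings are normalized, so dot product == cosine similarity.
scores = passage_embs @ query_emb
ranking = np.argsort(-scores)  # best match first
print(ranking)  # [1 0 2]
```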
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 71,937 training samples
* Columns: <code>question</code>, <code>passage</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | question | passage | label |
|:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-----------------------------------------------|
| type | string | string | int |
| details | <ul><li>min: 8 tokens</li><li>mean: 25.05 tokens</li><li>max: 64 tokens</li></ul> | <ul><li>min: 26 tokens</li><li>mean: 219.55 tokens</li><li>max: 439 tokens</li></ul> | <ul><li>0: ~91.30%</li><li>1: ~8.70%</li></ul> |
* Samples:
| question | passage | label |
|:-------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------|
| <code>Tรดi nรชn bแบฏt ฤแบงu tแปซ ฤรขu ฤแป cรณ thแป sแปญ dแปฅng 1Office mแปt cรกch nhanh chรณng vร hiแปu quแบฃ?</code> | <code>CHรO MแปชNG Bแบ N ฤแบพN VแปI TRUNG TรM Hแป TRแปข 1OFFICE<br>Bแบกn muแปn truy cแบญp nhanh ฤแบฟn phแบงn mแปm 1Office? Trang chแปง<br>Nแปi dung<br>Bแบฏt ฤแบงu sแปญ dแปฅng<br>Tรฌm hiแปu vแป cรกc phรขn hแป<br>Chฦฐฦกng trรฌnh ฤร o tแบกo vร bแป sung vแป phแบงn mแปm 1Office<br>Trung tรขm trแปฃ giรบp cแปงa 1Office<br>Tแบกi ฤรขy, bแบกn sแบฝ cรณ mแปi thรดng tin cแบงn thiแบฟt ฤแป bแบฏt ฤแบงu sแปญ dแปฅng cรกc tรญnh nฤng trong phแบงn mแปm 1Office, bao gแปm Hฦฐแปng dแบซn sแปญ dแปฅng cho cรกc phรขn hแป CRM, HRM, vร Workplace. Ngoร i ra, chรบng tรดi cung cแบฅp cรกc khรณa ฤร o tแบกo, mแบนo hแปฏu รญch, vร nhiแปu hฦกn nแปฏa ฤแป giรบp bแบกn tแบญn dแปฅng tแปi ฤa ฦฐu ฤiแปm cแปงa 1Office trong quแบฃn lรฝ doanh nghiแปp.</code> | <code>1</code> |
| <code>Viแปc xuแบฅt file thรดng tin mรกy chแบฅm cรดng cรณ giรบp tรดi tแบฃi lแบกi thรดng tin cแบฅu hรฌnh vร o hแป thแปng mแปi nhanh chรณng khรดng?</code> | <code>Thรชm mรกy chแบฅm cรดng<br>Thรชm mรกy chแบฅm cรดng<br>Tรชn trฦฐแปng: ฤแปa ฤiแปm, Mรด tแบฃ: Vแป trรญ hiแปn tแบกi cแปงa mรกy chแบฅm cรดng. <br>Tรชn trฦฐแปng: Mรฃ mรกy, Mรด tแบฃ: Doanh nghiแปp tแปฑ ฤแบทt mรฃ hoแบทc cรณ thแป ฤแป trแปng (Nแบฟu doanh nghiแปp sแปญ dแปฅng nhiแปu mรกy chแบฅm cรดng แป cรกc ฤแปa ฤiแปm khรกc nhau cรณ thแป ฤแบทt tรชn ฤแป phรขn biแปt cรกc mรกy chแบฅm cรดng vร mรฃ chแบฅm cรดng cแปงa nhรขn sแปฑ). <br>Tรชn trฦฐแปng: Port, Mรด tแบฃ: Nhแบญp cแปng PORT cแปงa mรกy chแบฅm cรดng (Vร o cร i ฤแบทt cแปงa mรกy chแบฅm cรดng ฤแป xem thรดng tin cแปng PORT)<br>Tรชn trฦฐแปng: Mแบญt khแบฉu, Mรด tแบฃ: Nแบฟu mรกy chแบฅm cรดng cรณ mแบญt khแบฉu thรฌ cรกc bแบกn nhแบญp mแบญt khแบฉu แป ฤรขy<br>Tรชn trฦฐแปng: Tแบฃi tแปซ ngร y, Mรด tแบฃ: Chแปn thแปi gian bแบฏt ฤแบงu tแบฃi log vรขn tay ฤแบฟn ngร y hiแปn tแบกi<br>Tรชn trฦฐแปng: Lแปch sแปญ, Mรด tแบฃ: Thรดng tin lแปch sแปญ tool kแบฟt nแปi tแบฃi vร ฤแบฉy log dแปฏ liแปu chแบฅm cรดng<br>Tรชn trฦฐแปng: Cร i ฤแบทt, Mรด tแบฃ: Cร i ฤแบทt chung cho tool chแบฅm cรดng<br>Tรชn trฦฐแปng: URL 1Office, Mรด tแบฃ: Copy link URL แป Cร i ฤแบทt chแบฅm cรดng vร gรกn mแปฅc nร y<br>Tรชn trฦฐแปng: Thแปi gian tแบฃi log, Mรด tแบฃ: Chแปn khoแบฃng thแปi gian tแบฃi log giแปฏa hai lแบงn liรชn tiแบฟp (vรญ dแปฅ chแปn 20 phรบt nghฤฉa lร cแปฉ 20 phรบt tool tแบฃi log vแป mแปt lแบงn)<br>Tรชn trฦฐแปng: Thแปi gian ฤแบฉy log, Mรด tแบฃ: Thแปi gian nhแบญn ฤแบฉy log (vรญ dแปฅ: cแปฉ 2 phรบt ฤแบฉy log mแปt lแบงn)<br>Tรชn trฦฐแปng: Nhแบญp sแป log ฤฦฐแปฃc ฤแบฉy lรชn trong mแปt lแบงn, Mรด tแบฃ: Sแป log ฤฦฐแปฃc tแบฃi trong 1 lแบงn ฤแบฉy<br>Tรชn trฦฐแปng: Thฦฐ mแปฅc lฦฐu backup, Mรด tแบฃ: Chแปn mแปt thฦฐ mแปฅc ฤแป backup dแปฏ liแปu chแบฅm cรดng</code> | <code>0</code> |
| <code>Khi nhรขn sแปฑ thay ฤแปi phรฒng ban, vแป trรญ, chแปฉc vแปฅ thรฌ nhรณm quyแปn cแปงa hแป cรณ cแบงn ฤฦฐแปฃc cแบญp nhแบญt lแบกi khรดng?</code> | <code>Hแปฃp ฤแปng lao ฤแปng<br>Hแปฃp ฤแปng lao ฤแปng lร gรฌ?<br>Hแปฃp ฤแปng lao ฤแปng lร vฤn bแบฃn thแปa thuแบญn giแปฏa ngฦฐแปi lao ฤแปng vร ngฦฐแปi sแปญ dแปฅng lao ฤแปng vแป viแปc lร m cรณ trแบฃ cรดng, trong ฤรณ quy ฤแปnh ฤiแปu kiแปn lao ฤแปng, quyแปn vร nghฤฉa vแปฅ cแปงa mแปi bรชn trong quan hแป lao ฤแปng. Hแปฃp ฤแปng lao ฤแปng ฤฦฐแปฃc kรฝ kแบฟt theo nguyรชn tแบฏc tแปฑ nguyแปn, bรฌnh ฤแบณng, phรน hแปฃp vแปi cรกc quy ฤแปnh cแปงa phรกp luแบญt lao ฤแปng.<br>Quแบฃn lรฝ hแปฃp ฤแปng lao ฤแปng giรบp รญch gรฌ cho doanh nghiแปp?<br>Thฦฐแปng cรกc doanh nghiแปp mแปi quแบฃn lรฝ file cแปฉng cแปงa hแปฃp ฤแปng lao ฤแปng vร chฦฐa sแป hรณa ฤแป quแบฃn lรฝ. Tรญnh nฤng quแบฃn lรฝ hแปฃp ฤแปng cho phรฉp sแป hรณa cรกc thรดng tin cฦก bแบฃn cแปงa hแปฃp ฤแปng ฤแป quแบฃn lรฝ. Viแปc quแบฃn lรฝ nร y giรบp cho viแปc lฦฐu trแปฏ, tra cแปฉu thรดng tin nhanh chรณng chรญnh xรกc.<br>Ngoร i ra, cรกc dแปฏ liแปu vแป lฦฐฦกng, phแปฅ cแบฅp, vแป trรญ, chแปฉc vแปฅ, phรฒng ban trong hแปฃp ฤแปng lao ฤแปng sแบฝ ฤฦฐแปฃc cแบญp nhแบญt tแปฑ ฤแปng vร o hแป sฦก nhรขn sแปฑ. ฤรขy sแบฝ lร cฦก sแป ฤแป tรญnh toรกn lฦฐฦกng, phแปฅ cแบฅp cho ngฦฐแปi lao ฤแปng.<br>ฤแปi tฦฐแปฃng sแปญ dแปฅng hแปฃp ฤแปng lao ฤแปng:<br>Bแป phแบญn nhรขn sแปฑ vร quแบฃn lรฝ chi nhรกnh, phรฒng ban<br>Hฦฐแปng dแบซn quแบฃn lรฝ hแปฃp ฤแปng lao ฤแปng<br>ฤแป quแบฃn lรฝ hแปฃp ฤแปng lao ฤแปng, bแบกn theo dรตi nhแปฏng bฦฐแปc sau:<br>Bฦฐแปc 1. Cร i ฤแบทt hแปฃp ฤแปng lao ฤแปng<br>Bฦฐแปc 2. Tแบกo mแปi vร quแบฃn lรฝ hแปฃp ฤแปng lao ฤแปng<br>Bฦฐแปc 1. Cร i ฤแบทt hแปฃp ฤแปng lao ฤแปng</code> | <code>0</code> |
* Loss: [<code>OnlineContrastiveLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#onlinecontrastiveloss)
### Evaluation Dataset
#### Unnamed Dataset
* Size: 7,994 evaluation samples
* Columns: <code>question</code>, <code>passage</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | question | passage | label |
|:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-----------------------------------------------|
| type | string | string | int |
| details | <ul><li>min: 8 tokens</li><li>mean: 25.25 tokens</li><li>max: 55 tokens</li></ul> | <ul><li>min: 28 tokens</li><li>mean: 222.22 tokens</li><li>max: 443 tokens</li></ul> | <ul><li>0: ~90.30%</li><li>1: ~9.70%</li></ul> |
* Samples:
| question | passage | label |
|:-----------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------|
| <code>Bแบฃng lฦฐฦกng sแบฝ tแปฑ ฤแปng lแบฅy ฤiแปm KPI nร o khi nhรขn viรชn cรณ nhiแปu kแปณ ฤรกnh giรก ฤรฃ ฤฦฐแปฃc chแปt?</code> | <code>Khแบฏc phแปฅc sแปฑ cแป khรดng liรชn kแบฟt ฤฦฐแปฃc tแปi phรขn hแป bแบฃng lฦฐฦกng<br><br>Nguyรชn nhรขn dแบซn ฤแบฟn sแปฑ cแป khรดng liรชn kแบฟt ฤฦฐแปฃc ฤiแปm KPI tแปi phรขn hแป bแบฃng lฦฐฦกng lร do kแปณ ฤรกnh giรก KPI ฤรณ chฦฐa ฤฦฐแปฃc chแปt. ฤแป khรกch phแปฅc sแปฑ cแป nร y bแบกn cแบงn thแปฑc hiแปn cรกch khแบฏc phแปฅc sau:<br>Liรชn hแป ngฦฐแปi quแบฃn lรฝ KPI thแปฑc hiแปn chแปt kแปณ ฤรกnh giรก.<br>Truy cแบญp chi tiแบฟt kแปณ ฤรกnh giรก Thแปฑc hiแปn thao tรกc Chแปt kแปณ ฤรกnh giรก.</code> | <code>0</code> |
| <code>Cรกc bฦฐแปc cแปฅ thแป ฤแป tแบกo mแปt ฤฦกn xin thรดi viแปc mแปi trรชn hแป thแปng lร gรฌ?</code> | <code>ฤฦกn xin nghแป<br>Quแบฃn lรฝ ฤฦกn xin nghแป<br>1. Tแบกo mแปi ฤฦกn xin nghแป<br>ฤแป tแบกo mแปi ฤฦกn xin nghแป, bแบกn thแปฑc hiแปn cรกc bฦฐแปc sau:<br>Bฦฐแปc 1 : Chแปn phรขn hแป ฤฦกn tแปซ tแบกi HRM > Biแปu tฦฐแปฃng tแบกo mแปi > ฤฦกn tแปซ > ฤฦกn xin nghแป<br>Bฦฐแปc 2 : Nhแบญp ฤแบงy ฤแปง cรกc thรดng tin cแบงn thiแบฟt vร nhแบฅn nรบt Cแบญp nhแบญt ฤแป lฦฐu lแบกi<br>tแบกo mแปi ฤฦกn xin nghแป<br>Cรกc trฦฐแปng cแบงn lฦฐu รฝ:<br>STT: 1, Tรชn trฦฐแปng: Lรฝ do, Mรด tแบฃ: Ngฦฐแปi dรนng chแปn 1 trong sแป cรกc lรฝ do ฤรฃ ฤฦฐแปฃc cแบฅu hรฌnh trong cร i ฤแบทt, mแปi lรฝ do sแบฝ แปฉng vแปi tรญnh cรดng hoแบทc khรดng tรญnh cรดng nhฦฐ ฤรฃ cร i ฤแบทt trฦฐแปc ฤรณ<br>STT: 2, Tรชn trฦฐแปng: Thแปi gian ( tแปซ giแป, ngร y, ฤแบฟn giแป, ngร y), Mรด tแบฃ: Ngฦฐแปi dรนng nhแบญp thแปi gian xin nghแป (ฤฦกn vแป tรญnh theo giแป). Ngฦฐแปi dรนng cรณ thแป cแปng dรฒng ฤแป tแบกo nhiแปu ngร y nghแป trong mแปt ฤฦกn<br>STT: 3, Tรชn trฦฐแปng: Mรด tแบฃ, Mรด tแบฃ: Mรด tแบฃ thรชm thรดng tin khi tแบกo ฤฦกn xin nghแป<br>2. Quแบฃn lรฝ ฤฦกn xin nghแป<br>2.1. Mร n hรฌnh quแบฃn lรฝ ฤฦกn xin nghแป<br>ฤแป quแบฃn lรฝ ฤฦกn xin nghแป, ngฦฐแปi dรนng sแปญ dแปฅng 2 cรกch sau:<br>Cรกch 1 : Quแบฃn lรฝ ฤฦกn xin nghแป tแบกi mร n hรฌnh danh sรกch<br>Chแปn tรกc vแปฅ Lแปc nhanh > Chแปn ฤฦกn xin nghแป<br>Tแบกi mร n hรฌnh Danh sรกch ฤฦกn xin nghแป > Click chuแปt phแบฃi vร o chi tiแบฟt mแปt ฤฦกn ฤแป thแปฑc hiแปn tรกc vแปฅ quแบฃn lรฝ ฤฦกn xin nghแป.</code> | <code>0</code> |
| <code>Sau khi tแบฃi tร i liแปu tแปซ thฦฐ viแปn vแป, bฦฐแปc tiแบฟp theo lร gรฌ?</code> | <code>Thฦฐ viแปn tร i nguyรชn<br>Thฦฐ viแปn tร i nguyรชn lร mแปt tรญnh nฤng mแปi cแปงa 1Office. ฤรขy lร kho tร i nguyรชn chแปฉa cรกc dแปฏ liแปu mแบซu mร 1Office cung cแบฅp cho Khรกch hร ng. Nรณ hoแบกt ฤแปng giแปng nhฦฐ Google Play hay Chrome Extension. Tแบกi ฤรขy sแบฝ cรณ cรกc Template mแบซu, ngฦฐแปi dรนng cรณ thแป vร o xem thแปญ แบฃnh demo hoแบทc mรด tแบฃ, nแบฟu phรน hแปฃp vแปi ฤแบทc thรน cรดng viแปc, chแปn Cร i ฤแบทt ฤแป tแบฃi vแป. Khi ฤรณ, tร i nguyรชn vแปซa tแบฃi vแป sแบฝ ฤฦฐแปฃc cแบญp nhแบญt vร o ฤแปi tฦฐแปฃng tฦฐฦกng แปฉng trรชn phแบงn mแปm.<br>Sแปญ dแปฅng thฦฐ viแปn tร i nguyรชn nhฦฐ thแบฟ nร o?<br>ฤแป cรณ ฤฦฐแปฃc nhแปฏng tร i nguyรชn mร 1Office, ngฦฐแปi dรนng thแปฑc hiแปn theo cรกc bฦฐแปc:<br>Bฦฐแปc 1. Tแบฃi tร i liแปu vแป phแบงn mแปm<br>Bฦฐแปc 2. Cร i ฤแบทt thรดng sแป cho dแปฏ liแปu tแบฃi vแป<br>Bฦฐแปc 3. Sแปญ dแปฅng phแบงn mแปm vแปi cรกc tร i liแปu ฤรฃ ฤฦฐแปฃc cร i ฤแบทt trรชn phแบงn mแปm</code> | <code>1</code> |
* Loss: [<code>OnlineContrastiveLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#onlinecontrastiveloss)
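OnlineContrastiveLoss is contrastive loss computed only on the hard pairs in each batch: positive pairs that are farther apart than the closest negative pair, and negative pairs that are closer than the farthest positive pair. A simplified numpy sketch of that selection, with illustrative distances and the default margin of 0.5:

```python
import numpy as np

margin = 0.5
# Hypothetical pairwise cosine distances and their 1/0 labels.
dist = np.array([0.1, 0.6, 0.2, 0.9, 0.3])
label = np.array([1, 1, 0, 0, 0])

pos_d = dist[label == 1]
neg_d = dist[label == 0]

# Hard positives: farther apart than the closest negative pair.
hard_pos = pos_d[pos_d > neg_d.min()]
# Hard negatives: closer than the farthest positive pair.
hard_neg = neg_d[neg_d < pos_d.max()]

# Contrastive loss on the hard pairs only: pull positives together,
# push negatives beyond the margin.
loss = (hard_pos**2).sum() + (np.clip(margin - hard_neg, 0, None) ** 2).sum()
print(loss)  # 0.36 + 0.09 + 0.04 = 0.49
```

Mining only the hard pairs usually trains faster than plain contrastive loss, since easy pairs contribute near-zero gradient anyway.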
### Training Hyperparameters
#### Non-Default Hyperparameters
- `overwrite_output_dir`: True
- `per_device_eval_batch_size`: 12
- `gradient_accumulation_steps`: 4
- `learning_rate`: 2e-05
- `weight_decay`: 0.01
- `num_train_epochs`: 15
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.06
- `fp16`: True
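With the default per-device train batch size of 8 and gradient accumulation of 4, the effective batch size is 32 (assuming single-GPU training), which is consistent with the step counts in the training logs below. A quick back-of-the-envelope check:

```python
# Back-of-the-envelope check of the training schedule (single GPU assumed).
train_samples = 71_937
per_device_batch = 8
grad_accum = 4
epochs = 15
warmup_ratio = 0.06

effective_batch = per_device_batch * grad_accum      # 32
steps_per_epoch = train_samples // effective_batch   # ~2248 optimizer steps
total_steps = steps_per_epoch * epochs               # ~33720
warmup_steps = int(warmup_ratio * total_steps)       # ~2023
print(effective_batch, steps_per_epoch, total_steps, warmup_steps)
```

The final logged step (33,700 at epoch ~14.99) matches this estimate.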
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: True
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 8
- `per_device_eval_batch_size`: 12
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 4
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.01
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 15
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.06
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `eval_use_gather_object`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
<details><summary>Click to expand</summary>
| Epoch | Step | Training Loss |
|:-------:|:-----:|:-------------:|
| 0.0445 | 100 | 0.0703 |
| 0.0890 | 200 | 0.0508 |
| 0.1334 | 300 | 0.0521 |
| 0.1779 | 400 | 0.0431 |
| 0.2224 | 500 | 0.0403 |
| 0.2669 | 600 | 0.0405 |
| 0.3114 | 700 | 0.0357 |
| 0.3558 | 800 | 0.04 |
| 0.4003 | 900 | 0.0332 |
| 0.4448 | 1000 | 0.0373 |
| 0.4893 | 1100 | 0.0426 |
| 0.5337 | 1200 | 0.0381 |
| 0.5782 | 1300 | 0.0408 |
| 0.6227 | 1400 | 0.0404 |
| 0.6672 | 1500 | 0.0444 |
| 0.7117 | 1600 | 0.0451 |
| 0.7561 | 1700 | 0.0374 |
| 0.8006 | 1800 | 0.0466 |
| 0.8451 | 1900 | 0.0455 |
| 0.8896 | 2000 | 0.0476 |
| 0.9341 | 2100 | 0.0467 |
| 0.9785 | 2200 | 0.0445 |
| 1.0230 | 2300 | 0.0398 |
| 1.0675 | 2400 | 0.0344 |
| 1.1120 | 2500 | 0.0427 |
| 1.1565 | 2600 | 0.0326 |
| 1.2009 | 2700 | 0.0317 |
| 1.2454 | 2800 | 0.0275 |
| 1.2899 | 2900 | 0.0342 |
| 1.3344 | 3000 | 0.0331 |
| 1.3789 | 3100 | 0.0385 |
| 1.4233 | 3200 | 0.0316 |
| 1.4678 | 3300 | 0.0334 |
| 1.5123 | 3400 | 0.0257 |
| 1.5568 | 3500 | 0.0269 |
| 1.6012 | 3600 | 0.0397 |
| 1.6457 | 3700 | 0.0392 |
| 1.6902 | 3800 | 0.0405 |
| 1.7347 | 3900 | 0.0335 |
| 1.7792 | 4000 | 0.0371 |
| 1.8236 | 4100 | 0.0359 |
| 1.8681 | 4200 | 0.032 |
| 1.9126 | 4300 | 0.0336 |
| 1.9571 | 4400 | 0.0334 |
| 2.0016 | 4500 | 0.0351 |
| 2.0460 | 4600 | 0.0225 |
| 2.0905 | 4700 | 0.0201 |
| 2.1350 | 4800 | 0.0267 |
| 2.1795 | 4900 | 0.0222 |
| 2.2240 | 5000 | 0.0304 |
| 2.2684 | 5100 | 0.0249 |
| 2.3129 | 5200 | 0.0255 |
| 2.3574 | 5300 | 0.0241 |
| 2.4019 | 5400 | 0.0301 |
| 2.4463 | 5500 | 0.0232 |
| 2.4908 | 5600 | 0.0232 |
| 2.5353 | 5700 | 0.0195 |
| 2.5798 | 5800 | 0.0174 |
| 2.6243 | 5900 | 0.0206 |
| 2.6687 | 6000 | 0.0286 |
| 2.7132 | 6100 | 0.025 |
| 2.7577 | 6200 | 0.0314 |
| 2.8022 | 6300 | 0.0268 |
| 2.8467 | 6400 | 0.0204 |
| 2.8911 | 6500 | 0.0235 |
| 2.9356 | 6600 | 0.0256 |
| 2.9801 | 6700 | 0.0252 |
| 3.0246 | 6800 | 0.0193 |
| 3.0691 | 6900 | 0.0135 |
| 3.1135 | 7000 | 0.0173 |
| 3.1580 | 7100 | 0.0216 |
| 3.2025 | 7200 | 0.0171 |
| 3.2470 | 7300 | 0.0174 |
| 3.2914 | 7400 | 0.0166 |
| 3.3359 | 7500 | 0.015 |
| 3.3804 | 7600 | 0.0199 |
| 3.4249 | 7700 | 0.0201 |
| 3.4694 | 7800 | 0.0386 |
| 3.5138 | 7900 | 0.04 |
| 3.5583 | 8000 | 0.0404 |
| 3.6028 | 8100 | 0.0295 |
| 3.6473 | 8200 | 0.038 |
| 3.6918 | 8300 | 0.0401 |
| 3.7362 | 8400 | 0.0359 |
| 3.7807 | 8500 | 0.0296 |
| 3.8252 | 8600 | 0.0318 |
| 3.8697 | 8700 | 0.0398 |
| 3.9142 | 8800 | 0.023 |
| 3.9586 | 8900 | 0.0175 |
| 4.0031 | 9000 | 0.019 |
| 4.0476 | 9100 | 0.021 |
| 4.0921 | 9200 | 0.0307 |
| 4.1366 | 9300 | 0.0229 |
| 4.1810 | 9400 | 0.0225 |
| 4.2255 | 9500 | 0.0259 |
| 4.2700 | 9600 | 0.0283 |
| 4.3145 | 9700 | 0.0359 |
| 4.3589 | 9800 | 0.0194 |
| 4.4034 | 9900 | 0.019 |
| 4.4479 | 10000 | 0.0217 |
| 4.4924 | 10100 | 0.0227 |
| 4.5369 | 10200 | 0.0261 |
| 4.5813 | 10300 | 0.0207 |
| 4.6258 | 10400 | 0.0192 |
| 4.6703 | 10500 | 0.0241 |
| 4.7148 | 10600 | 0.016 |
| 4.7593 | 10700 | 0.0271 |
| 4.8037 | 10800 | 0.0253 |
| 4.8482 | 10900 | 0.0233 |
| 4.8927 | 11000 | 0.0248 |
| 4.9372 | 11100 | 0.0162 |
| 4.9817 | 11200 | 0.0223 |
| 5.0261 | 11300 | 0.0139 |
| 5.0706 | 11400 | 0.0208 |
| 5.1151 | 11500 | 0.0194 |
| 5.1596 | 11600 | 0.0197 |
| 5.2040 | 11700 | 0.0154 |
| 5.2485 | 11800 | 0.0188 |
| 5.2930 | 11900 | 0.0164 |
| 5.3375 | 12000 | 0.0128 |
| 5.3820 | 12100 | 0.0175 |
| 5.4264 | 12200 | 0.0123 |
| 5.4709 | 12300 | 0.0134 |
| 5.5154 | 12400 | 0.0174 |
| 5.5599 | 12500 | 0.0236 |
| 5.6044 | 12600 | 0.0143 |
| 5.6488 | 12700 | 0.02 |
| 5.6933 | 12800 | 0.0157 |
| 5.7378 | 12900 | 0.0162 |
| 5.7823 | 13000 | 0.0162 |
| 5.8268 | 13100 | 0.0202 |
| 5.8712 | 13200 | 0.0134 |
| 5.9157 | 13300 | 0.0192 |
| 5.9602 | 13400 | 0.0112 |
| 6.0047 | 13500 | 0.0173 |
| 6.0491 | 13600 | 0.0138 |
| 6.0936 | 13700 | 0.0124 |
| 6.1381 | 13800 | 0.0113 |
| 6.1826 | 13900 | 0.0148 |
| 6.2271 | 14000 | 0.0116 |
| 6.2715 | 14100 | 0.0112 |
| 6.3160 | 14200 | 0.0122 |
| 6.3605 | 14300 | 0.0104 |
| 6.4050 | 14400 | 0.0151 |
| 6.4495 | 14500 | 0.0111 |
| 6.4939 | 14600 | 0.0145 |
| 6.5384 | 14700 | 0.0139 |
| 6.5829 | 14800 | 0.0155 |
| 6.6274 | 14900 | 0.0181 |
| 6.6719 | 15000 | 0.0109 |
| 6.7163 | 15100 | 0.0096 |
| 6.7608 | 15200 | 0.0105 |
| 6.8053 | 15300 | 0.0107 |
| 6.8498 | 15400 | 0.0132 |
| 6.8943 | 15500 | 0.0118 |
| 6.9387 | 15600 | 0.0158 |
| 6.9832 | 15700 | 0.0153 |
| 7.0277 | 15800 | 0.0121 |
| 7.0722 | 15900 | 0.013 |
| 7.1166 | 16000 | 0.0151 |
| 7.1611 | 16100 | 0.0088 |
| 7.2056 | 16200 | 0.0099 |
| 7.2501 | 16300 | 0.0084 |
| 7.2946 | 16400 | 0.0123 |
| 7.3390 | 16500 | 0.0137 |
| 7.3835 | 16600 | 0.0136 |
| 7.4280 | 16700 | 0.0106 |
| 7.4725 | 16800 | 0.0111 |
| 7.5170 | 16900 | 0.0142 |
| 7.5614 | 17000 | 0.015 |
| 7.6059 | 17100 | 0.0082 |
| 7.6504 | 17200 | 0.0085 |
| 7.6949 | 17300 | 0.0126 |
| 7.7394 | 17400 | 0.0101 |
| 7.7838 | 17500 | 0.0112 |
| 7.8283 | 17600 | 0.0106 |
| 7.8728 | 17700 | 0.0048 |
| 7.9173 | 17800 | 0.0131 |
| 7.9617 | 17900 | 0.0107 |
| 8.0062 | 18000 | 0.0119 |
| 8.0507 | 18100 | 0.0059 |
| 8.0952 | 18200 | 0.0063 |
| 8.1397 | 18300 | 0.0066 |
| 8.1841 | 18400 | 0.0101 |
| 8.2286 | 18500 | 0.0066 |
| 8.2731 | 18600 | 0.0095 |
| 8.3176 | 18700 | 0.0097 |
| 8.3621 | 18800 | 0.0046 |
| 8.4065 | 18900 | 0.0081 |
| 8.4510 | 19000 | 0.005 |
| 8.4955 | 19100 | 0.0069 |
| 8.5400 | 19200 | 0.0068 |
| 8.5845 | 19300 | 0.0061 |
| 8.6289 | 19400 | 0.009 |
| 8.6734 | 19500 | 0.0078 |
| 8.7179 | 19600 | 0.008 |
| 8.7624 | 19700 | 0.0056 |
| 8.8068 | 19800 | 0.0098 |
| 8.8513 | 19900 | 0.0076 |
| 8.8958 | 20000 | 0.006 |
| 8.9403 | 20100 | 0.0043 |
| 8.9848 | 20200 | 0.0049 |
| 9.0292 | 20300 | 0.0059 |
| 9.0737 | 20400 | 0.0066 |
| 9.1182 | 20500 | 0.0035 |
| 9.1627 | 20600 | 0.0041 |
| 9.2072 | 20700 | 0.0056 |
| 9.2516 | 20800 | 0.0065 |
| 9.2961 | 20900 | 0.0021 |
| 9.3406 | 21000 | 0.0078 |
| 9.3851 | 21100 | 0.0051 |
| 9.4296 | 21200 | 0.0037 |
| 9.4740 | 21300 | 0.0082 |
| 9.5185 | 21400 | 0.0053 |
| 9.5630 | 21500 | 0.0066 |
| 9.6075 | 21600 | 0.0075 |
| 9.6520 | 21700 | 0.0022 |
| 9.6964 | 21800 | 0.0026 |
| 9.7409 | 21900 | 0.0048 |
| 9.7854 | 22000 | 0.0067 |
| 9.8299 | 22100 | 0.0068 |
| 9.8743 | 22200 | 0.0073 |
| 9.9188 | 22300 | 0.0056 |
| 9.9633 | 22400 | 0.0088 |
| 10.0078 | 22500 | 0.0037 |
| 10.0523 | 22600 | 0.0035 |
| 10.0967 | 22700 | 0.0051 |
| 10.1412 | 22800 | 0.004 |
| 10.1857 | 22900 | 0.0021 |
| 10.2302 | 23000 | 0.0051 |
| 10.2747 | 23100 | 0.0031 |
| 10.3191 | 23200 | 0.0008 |
| 10.3636 | 23300 | 0.0041 |
| 10.4081 | 23400 | 0.0013 |
| 10.4526 | 23500 | 0.0033 |
| 10.4971 | 23600 | 0.0036 |
| 10.5415 | 23700 | 0.0071 |
| 10.5860 | 23800 | 0.0043 |
| 10.6305 | 23900 | 0.0052 |
| 10.6750 | 24000 | 0.0016 |
| 10.7194 | 24100 | 0.003 |
| 10.7639 | 24200 | 0.0038 |
| 10.8084 | 24300 | 0.0055 |
| 10.8529 | 24400 | 0.0031 |
| 10.8974 | 24500 | 0.0029 |
| 10.9418 | 24600 | 0.0041 |
| 10.9863 | 24700 | 0.0081 |
| 11.0308 | 24800 | 0.0043 |
| 11.0753 | 24900 | 0.0025 |
| 11.1198 | 25000 | 0.0026 |
| 11.1642 | 25100 | 0.0015 |
| 11.2087 | 25200 | 0.0019 |
| 11.2532 | 25300 | 0.0025 |
| 11.2977 | 25400 | 0.0021 |
| 11.3422 | 25500 | 0.0013 |
| 11.3866 | 25600 | 0.0031 |
| 11.4311 | 25700 | 0.0033 |
| 11.4756 | 25800 | 0.0039 |
| 11.5201 | 25900 | 0.0029 |
| 11.5646 | 26000 | 0.004 |
| 11.6090 | 26100 | 0.0033 |
| 11.6535 | 26200 | 0.0022 |
| 11.6980 | 26300 | 0.0018 |
| 11.7425 | 26400 | 0.002 |
| 11.7869 | 26500 | 0.0043 |
| 11.8314 | 26600 | 0.0026 |
| 11.8759 | 26700 | 0.0025 |
| 11.9204 | 26800 | 0.0029 |
| 11.9649 | 26900 | 0.002 |
| 12.0093 | 27000 | 0.0006 |
| 12.0538 | 27100 | 0.0021 |
| 12.0983 | 27200 | 0.0013 |
| 12.1428 | 27300 | 0.0012 |
| 12.1873 | 27400 | 0.0025 |
| 12.2317 | 27500 | 0.0018 |
| 12.2762 | 27600 | 0.0025 |
| 12.3207 | 27700 | 0.0023 |
| 12.3652 | 27800 | 0.0013 |
| 12.4097 | 27900 | 0.0014 |
| 12.4541 | 28000 | 0.0013 |
| 12.4986 | 28100 | 0.0042 |
| 12.5431 | 28200 | 0.0013 |
| 12.5876 | 28300 | 0.0007 |
| 12.6320 | 28400 | 0.0008 |
| 12.6765 | 28500 | 0.0036 |
| 12.7210 | 28600 | 0.0026 |
| 12.7655 | 28700 | 0.0006 |
| 12.8100 | 28800 | 0.0037 |
| 12.8544 | 28900 | 0.002 |
| 12.8989 | 29000 | 0.0008 |
| 12.9434 | 29100 | 0.0019 |
| 12.9879 | 29200 | 0.0007 |
| 13.0324 | 29300 | 0.0018 |
| 13.0768 | 29400 | 0.0007 |
| 13.1213 | 29500 | 0.0014 |
| 13.1658 | 29600 | 0.0015 |
| 13.2103 | 29700 | 0.0006 |
| 13.2548 | 29800 | 0.0026 |
| 13.2992 | 29900 | 0.0006 |
| 13.3437 | 30000 | 0.0006 |
| 13.3882 | 30100 | 0.0001 |
| 13.4327 | 30200 | 0.0012 |
| 13.4771 | 30300 | 0.0007 |
| 13.5216 | 30400 | 0.0033 |
| 13.5661 | 30500 | 0.0018 |
| 13.6106 | 30600 | 0.0001 |
| 13.6551 | 30700 | 0.0025 |
| 13.6995 | 30800 | 0.0007 |
| 13.7440 | 30900 | 0.0 |
| 13.7885 | 31000 | 0.0006 |
| 13.8330 | 31100 | 0.002 |
| 13.8775 | 31200 | 0.0013 |
| 13.9219 | 31300 | 0.0015 |
| 13.9664 | 31400 | 0.0021 |
| 14.0109 | 31500 | 0.0014 |
| 14.0554 | 31600 | 0.002 |
| 14.0999 | 31700 | 0.0014 |
| 14.1443 | 31800 | 0.0006 |
| 14.1888 | 31900 | 0.0006 |
| 14.2333 | 32000 | 0.0 |
| 14.2778 | 32100 | 0.0013 |
| 14.3223 | 32200 | 0.0008 |
| 14.3667 | 32300 | 0.0007 |
| 14.4112 | 32400 | 0.0 |
| 14.4557 | 32500 | 0.0012 |
| 14.5002 | 32600 | 0.0012 |
| 14.5446 | 32700 | 0.0012 |
| 14.5891 | 32800 | 0.0026 |
| 14.6336 | 32900 | 0.0013 |
| 14.6781 | 33000 | 0.0006 |
| 14.7226 | 33100 | 0.0006 |
| 14.7670 | 33200 | 0.0014 |
| 14.8115 | 33300 | 0.0007 |
| 14.8560 | 33400 | 0.0006 |
| 14.9005 | 33500 | 0.0013 |
| 14.9450 | 33600 | 0.0007 |
| 14.9894 | 33700 | 0.0 |
</details>
### Framework Versions
- Python: 3.8.10
- Sentence Transformers: 3.2.1
- Transformers: 4.44.2
- PyTorch: 2.4.1+cu124
- Accelerate: 1.0.1
- Datasets: 2.19.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
shabawak/ClinicalBERT-ColBERT | shabawak | 2025-06-14T03:33:07Z | 0 | 0 | adapter-transformers | [
"adapter-transformers",
"safetensors",
"bert",
"ColBERT",
"retrieval",
"triplets-learning",
"dense-retrieval",
"medical",
"question-answering",
"en",
"dataset:abhinand/MedEmbed-training-triplets-v1",
"base_model:google-bert/bert-base-uncased",
"base_model:adapter:google-bert/bert-base-uncase... | question-answering | 2025-06-14T02:16:41Z | ---
license: mit
datasets:
- abhinand/MedEmbed-training-triplets-v1
language:
- en
base_model:
- medicalai/ClinicalBERT
- google-bert/bert-base-uncased
pipeline_tag: question-answering
library_name: adapter-transformers
tags:
- ColBERT
- retrieval
- triplets-learning
- dense-retrieval
- medical
---
# Medical Dense Retriever (Fine-tuned on Triplets)
This is a fine-tuned [medicalai/ClinicalBERT](https://huggingface.co/medicalai/ClinicalBERT) model trained on medical question-answer triplets using the [`abhinand/MedEmbed-training-triplets-v1`](https://huggingface.co/datasets/abhinand/MedEmbed-training-triplets-v1) dataset.
## Dataset
- **Source**: `abhinand/MedEmbed-training-triplets-v1`
- Format: (query, positive passage, negative passage)
- Size: Subsampled to 1000 triplets for demonstration
## Model Architecture
- Based on: `ClinicalBERT` (or similar encoder)
- Uses ColBERT's **late interaction** (MaxSim)
- Trained with triplet loss
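As a rough illustration of the late interaction described above, here is a minimal NumPy sketch of MaxSim scoring — not the model's actual implementation (the real scorer operates on per-token BERT embeddings projected to 128 dimensions):

```python
import numpy as np

def maxsim_score(query_emb: np.ndarray, doc_emb: np.ndarray) -> float:
    """ColBERT-style late interaction: for each query token, take the best
    cosine similarity over all document tokens, then sum over query tokens."""
    q = query_emb / np.linalg.norm(query_emb, axis=-1, keepdims=True)
    d = doc_emb / np.linalg.norm(doc_emb, axis=-1, keepdims=True)
    sim = q @ d.T                        # (num_query_tokens, num_doc_tokens)
    return float(sim.max(axis=1).sum())  # MaxSim

# Documents are ranked by this score; the triplet loss then pushes
# score(query, positive) above score(query, negative).
```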
## Training Configuration
| Parameter | Value | Description |
|--------------------|--------------------|-------------|
| `base_model` | `medicalai/ClinicalBERT` | Pretrained model used |
| `interaction` | `colbert` | Late interaction for dense retrieval |
| `embedding_dim` | `128` | Vector dimension per token |
| `similarity` | `cosine` | Scoring method |
| `doc_maxlen` | `256` | Max length of document input |
| `query_maxlen` | `32` | (From config or defaults) |
| `batch_size` | `32` (global) | Effective total batch size |
| `per_gpu_batch_size`| `16` | Because `nranks = 2` |
| `accum_steps` | `1` | Gradient accumulation |
| `learning_rate` | `5e-6` | Optimizer learning rate |
| `max_steps` | `500000` | Training cutoff |
| `warmup_steps` | `auto` | Defaults to 10% of total steps |
| `use_ib_negatives` | `True` | In-batch negatives for training |
| `use_relu` | `False` | Disabled (default for ColBERT) |
| `nbits` | `4` | Index compression (Product Quantization) |
| `AMP` | `True` | Mixed precision training |
| `gpus` | `2` | Multi-GPU training |
| `nranks` | `2` | Distributed ranks (1 per GPU) |
## Intended Use
Dense retrieval for:
- Medical Q&A
- Biomedical semantic search
- Clinical decision support
## 🧪 How to Use
This model (shabawak/ClinicalBERT-colbert-finetuned-ragatouille) is used through ColBERT + RAGatouille. To install RAGatouille along with its dependencies, run:
``` bash
pip install -U ragatouille
```
## Using Bio_Clinical_ColBERT-finetuned Without an Index
For in-memory searching, simply:
1. Load the model
2. Encode documents
3. Search using search_encoded_documents()
``` python
from ragatouille import RAGPretrainedModel
RAG = RAGPretrainedModel.from_pretrained("shabawak/ClinicalBERT-colbert-finetuned-ragatouille")
RAG.encode(['document_1', 'document_2', ...])
RAG.search_encoded_documents(query="your search query")
```
- New encode() calls append to the existing collection.
- Clear stored docs with RAG.clear_encoded_docs().
## Indexing Documents
ColBERT's late-interaction retrieval requires indexing first. This step is slow, but retrieval is fast.
``` python
from ragatouille import RAGPretrainedModel
RAG = RAGPretrainedModel.from_pretrained("shabawak/ClinicalBERT-colbert-finetuned-ragatouille")
documents = ['document_1', 'document_2', ...] # Your documents
RAG.index(name="My_first_index", collection=documents)
```
- Index files are saved in .ragatouille/colbert/indexes/{index_name} by default.
## Searching an Index
After indexing, querying is straightforward. If reopening a session, load the index first:
``` python
RAG = RAGPretrainedModel.from_index(".ragatouille/colbert/indexes/My_first_index")
results = RAG.search(query="What is the expected outcome for a patient diagnosed with D-2-hydroxyglutaric aciduria type I?",k=5)
```
- The results include content, relevance scores, rankings, and metadata (if provided).
``` python
[[{'content': 'In April 1984, Miyazaki opened his own office in Suginami Ward, naming it Nibariki.\n\n\n=== Studio Ghibli ===\n\n\n==== Early films (1985–1996) ====\nIn June 1985, Miyazaki, Takahata, Tokuma and Suzuki founded the animation production company Studio Ghibli, with funding from Tokuma Shoten. Studio Ghibli\'s first film, Laputa: Castle in the Sky (1986), employed the same production crew of Nausicaä. Miyazaki\'s designs for the film\'s setting were inspired by Greek architecture and "European urbanistic templates".',
'score': 25.90448570251465,
'rank': 1,
'document_id': 'miyazaki',
'document_metadata': {'entity': 'person', 'source': 'wikipedia'}},
{'content': 'Hayao Miyazaki (宮崎 駿 or 宮﨑 駿, Miyazaki Hayao, Japanese: [mijaꜜzaki hajao]; born January 5, 1941) is a Japanese animator, filmmaker, and manga artist. A co-founder of Studio Ghibli, he has attained international acclaim as a masterful storyteller and creator of Japanese animated feature films, and is widely regarded as one of the most accomplished filmmakers in the history of animation.\nBorn in Tokyo City in the Empire of Japan, Miyazaki expressed interest in manga and animation from an early age, and he joined Toei Animation in 1963. During his early years at Toei Animation he worked as an in-between artist and later collaborated with director Isao Takahata.',
'score': 25.572620391845703,
'rank': 2,
'document_id': 'miyazaki',
'document_metadata': {'entity': 'person', 'source': 'wikipedia'}},
{'content': 'Glen Keane said Miyazaki is a "huge influence" on Walt Disney Animation Studios and has been "part of our heritage" ever since The Rescuers Down Under (1990). The Disney Renaissance era was also prompted by competition with the development of Miyazaki\'s films. Artists from Pixar and Aardman Studios signed a tribute stating, "You\'re our inspiration, Miyazaki-san!"',
'score': 24.84041976928711,
'rank': 3,
'document_id': 'miyazaki',
'document_metadata': {'entity': 'person', 'source': 'wikipedia'}}]]
``` |
TOMFORD79/incen28 | TOMFORD79 | 2025-06-14T03:27:32Z | 0 | 0 | null | [
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | 2025-06-14T02:41:20Z | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
gradientrouting-spar/vertical_1_proxy_ntrain_25_ntrig_9_actions_seed_1_seed_25_seed_2_seed_42_20250614_030620 | gradientrouting-spar | 2025-06-14T03:15:40Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-14T03:15:33Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Naqeeb-2424/Celestia | Naqeeb-2424 | 2025-06-14T03:14:27Z | 25 | 1 | transformers | [
"transformers",
"AutoModelForCausalLM",
"language-model",
"NLP",
"custom_code",
"text-generation",
"en",
"dataset:HuggingFaceFW/fineweb-edu",
"base_model:Naqeeb-2424/Celestia",
"base_model:finetune:Naqeeb-2424/Celestia",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-03-02T05:39:47Z | ---
datasets:
- HuggingFaceFW/fineweb-edu
language:
- en
metrics:
- accuracy
pipeline_tag: text-generation
tags:
- language-model
- NLP
- transformers
- custom_code
license: apache-2.0
base_model:
- Naqeeb-2424/Celestia
---
# **WARNING**: Read the Usage and Access section below to learn how to use this model. Incremental training is currently paused; it will resume soon.
# Good news
A fine-tuned version of this model is now available at https://huggingface.co/Naqeeb-2424/Celestia-0.5.
# Celestia: A Next-Generation Transformer Model
Celestia is a breakthrough transformer model designed to deliver high-quality, contextually rich, and creative text generation. With 290 million parameters in its first variant, Celestia has been pretrained on approximately **8 billion** tokens using an innovative incremental training approach on Kaggle with TPU v3-8 hardware. This efficient training method leverages the Fine Web Edu dataset to achieve performance that not only rivals but in many cases outperforms popular small-scale models from Hugging Face.
## Key Features
- **Sophisticated Architecture:**
Celestia is built on a state-of-the-art transformer architecture that includes:
- Multi-head attention with optimized key-value mechanisms.
- Sliding-window attention for efficient handling of long contexts.
- A Mixture-of-Experts (MoE) feed-forward network to boost performance.
- Advanced normalization techniques to ensure stability during training.
- **Resource-Efficient Training:**
Despite having 290 million parameters, Celestia was pretrained on only 8 billion tokens. This was made possible by an incremental training strategy, which allowed us to push the boundaries of model performance even with limited resources. Training was carried out on TPU v3-8 on Kaggle, making it an excellent example of how cutting-edge research can be achieved on modest computational budgets.
- **Superior Performance:**
In rigorous tests, Celestia has outperformed several well-known models, such as **SmolLM** by **Hugging Face**, that are often treated as benchmarks for both speed and accuracy. Its ability to handle complex, abstract, and nuanced contexts sets it apart from many existing alternatives.
- **Flexible Generation Capabilities:**
Originally designed as a sentence completion model, Celestia excels at generating thought-provoking and creative continuations. It supports both beam search and temperature-based sampling, ensuring versatility in a variety of text-generation applications. With minor fine-tuning, Celestia can also be adapted for specialized tasks such as conversational agents, summarization, or other domain-specific applications.
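The sliding-window attention listed under Sophisticated Architecture can be illustrated with a minimal mask construction. This is a hypothetical sketch — Celestia's actual attention code is not shown here, and `window` is an assumed parameter name:

```python
import numpy as np

def sliding_window_mask(seq_len: int, window: int) -> np.ndarray:
    """Causal sliding-window mask: token i may attend to tokens i-window+1 .. i,
    keeping attention cost linear in sequence length for long contexts."""
    idx = np.arange(seq_len)
    rel = idx[None, :] - idx[:, None]   # column index minus row index (j - i)
    return (rel <= 0) & (rel > -window)
```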
## Why Celestia Stands Apart
- **Efficiency & Innovation:**
Celestia demonstrates that high-quality language models can be built with relatively modest computational resources. By leveraging an incremental training approach, it not only reduces training time and resource demands but also produces outputs with greater depth and nuance compared to many small-scale models available today.
- **Performance Beyond the Norm:**
While many small models on Hugging Face have garnered attention for their accuracy and speed, Celestia has consistently shown superior results in generating creative, coherent, and context-aware text. Its performance on abstract reasoning, complex narrative generation, and sophisticated sentence completions has set a new standard for what can be achieved in this model size category.
- **Ongoing Development:**
Celestia is not a finished productโit is an evolving project. Our long-term plan is to continue training on the full Fine Web Edu dataset, with periodic updates to the model as more tokens are processed and new techniques are integrated. This commitment to continuous improvement ensures that Celestia will remain at the cutting edge of language generation research.
## Future Directions
We plan to:
- Extend the pretraining to the entire Fine Web Edu dataset.
- Regularly update the model with incremental training iterations.
- Explore fine-tuning strategies for task-specific applications such as interactive conversations, summarization, and more.
## Usage and Access
If you wish to utilize Celestia in your own projects, please contact [naqeeb.ajk63@gmail.com](mailto:naqeeb.ajk63@gmail.com) for the complete model code and usage instructions. We provide support for both beam search and temperature-based sampling approaches, ensuring you have the tools to optimize the model for your specific needs.
**It is recommended not to load this model through the Hugging Face library. Instead, download the model and tokenizer files directly from the repository and run inference with *usage.py*. If you use that usage code, there is no need to download the config.json file. *usage.py* implements temperature sampling, since this state-of-the-art model does not require beam search, although it also produces good results with beam search.**
**The files section includes *usage.py* and *Beam_search.py* so that anyone can test the model. *usage.py* contains inference code with hybrid temperature and nucleus sampling, while *Beam_search.py* contains pure beam-search inference logic.**
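The hybrid temperature and nucleus (top-p) sampling that *usage.py* is described as implementing can be sketched as follows. This is an illustrative stand-in, not the author's actual *usage.py*; the function name and default values are assumptions:

```python
import numpy as np

def sample_top_p(logits, temperature=0.8, top_p=0.9, rng=None):
    """Temperature + nucleus (top-p) sampling over a single logits vector."""
    rng = rng or np.random.default_rng()
    scaled = np.asarray(logits, dtype=float) / temperature
    probs = np.exp(scaled - scaled.max())    # stable softmax
    probs /= probs.sum()
    order = np.argsort(probs)[::-1]          # tokens by descending probability
    cum = np.cumsum(probs[order])
    keep = cum <= top_p
    keep[0] = True                           # always keep the most likely token
    kept = order[keep]
    p = probs[kept] / probs[kept].sum()      # renormalise over the nucleus
    return int(rng.choice(kept, p=p))
```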
## Limitations
The model still needs further pretraining; it does not yet have fully reliable factual knowledge. It has not been formally evaluated, but testing with temperature sampling suggests that fine-tuning could improve it further. In the future, incremental training will continue on a combination of **The Stack (v1)**, **fineweb-edu**, and **finemath**. Even so, it already shows **marvelous** results despite its low parameter count **(290M)**.
## License
This project is licensed under the [Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0).
---
Celestia represents a new paradigm in resource-efficient language modelingโdelivering superior performance and creative output even when compared to established small-scale models. We invite you to explore its capabilities and join us in pushing the boundaries of what is possible in natural language generation. |
TOMFORD79/incen24 | TOMFORD79 | 2025-06-14T03:02:08Z | 0 | 0 | null | [
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | 2025-06-14T02:41:03Z | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
BootesVoid/cmbvaaf1j014bwoixggll0vih_cmbvmhyzi01rbwoixnoqhrd9k | BootesVoid | 2025-06-14T03:00:52Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-06-14T03:00:51Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: BELLA
---
# Cmbvaaf1J014Bwoixggll0Vih_Cmbvmhyzi01Rbwoixnoqhrd9K
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `BELLA` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "BELLA",
"lora_weights": "https://huggingface.co/BootesVoid/cmbvaaf1j014bwoixggll0vih_cmbvmhyzi01rbwoixnoqhrd9k/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [๐งจ diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmbvaaf1j014bwoixggll0vih_cmbvmhyzi01rbwoixnoqhrd9k', weight_name='lora.safetensors')
image = pipeline('BELLA').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
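For reference, the rank-16 LoRA trained above amounts to a low-rank delta on each targeted weight, W' = W + (α/r)·B·A. A minimal NumPy sketch with assumed layer dimensions — the real adapter lives in `lora.safetensors` and is applied by diffusers:

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 64, 64, 16, 16    # rank 16, as in the training details above
W = rng.standard_normal((d_out, d_in))    # frozen base weight
A = rng.standard_normal((r, d_in)) * 0.01
B = np.zeros((d_out, r))                  # B starts at zero, so the adapter is a no-op
W_merged = W + (alpha / r) * (B @ A)      # merging the LoRA delta into the base weight
```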
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmbvaaf1j014bwoixggll0vih_cmbvmhyzi01rbwoixnoqhrd9k/discussions) to add images that show off what you've made with this LoRA.
|
apriasmoro/41a9052c-4b1b-4e6a-a06b-5fe2f35b18c7 | apriasmoro | 2025-06-14T02:55:52Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:lmsys/vicuna-13b-v1.5",
"base_model:adapter:lmsys/vicuna-13b-v1.5",
"license:llama2",
"region:us"
] | null | 2025-06-14T02:18:39Z | ---
library_name: peft
license: llama2
base_model: lmsys/vicuna-13b-v1.5
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 41a9052c-4b1b-4e6a-a06b-5fe2f35b18c7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.10.0.dev0`
```yaml
adapter: lora
base_model: lmsys/vicuna-13b-v1.5
bf16: true
chat_template: llama3
datasets:
- data_files:
- d37539cd238a70c6_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/
type:
field_instruction: instruct
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
eval_max_new_tokens: 256
evals_per_epoch: 2
flash_attention: false
fp16: false
gradient_accumulation_steps: 2
gradient_checkpointing: true
group_by_length: true
hub_model_id: apriasmoro/41a9052c-4b1b-4e6a-a06b-5fe2f35b18c7
learning_rate: 0.0002
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: false
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 221
micro_batch_size: 8
mlflow_experiment_name: /tmp/d37539cd238a70c6_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
sample_packing: false
save_steps: 44
sequence_len: 2048
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: ae4f265e-d12c-4c10-b028-0b017e4b4ffb
wandb_project: Gradients-On-Demand
wandb_run: apriasmoro
wandb_runid: ae4f265e-d12c-4c10-b028-0b017e4b4ffb
warmup_steps: 100
weight_decay: 0.01
```
</details><br>
# 41a9052c-4b1b-4e6a-a06b-5fe2f35b18c7
This model is a fine-tuned version of [lmsys/vicuna-13b-v1.5](https://huggingface.co/lmsys/vicuna-13b-v1.5) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8428
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 221
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0256 | 1 | 0.9838 |
| 0.9605 | 0.9487 | 37 | 0.8725 |
| 0.7624 | 1.8974 | 74 | 0.7775 |
| 0.6601 | 2.8462 | 111 | 0.7617 |
| 0.479 | 3.7949 | 148 | 0.7741 |
| 0.3016 | 4.7436 | 185 | 0.8428 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.5.1+cu124
- Datasets 3.5.1
- Tokenizers 0.21.1 |
rivasderek55072/hug | rivasderek55072 | 2025-06-14T02:55:27Z | 0 | 0 | null | [
"license:bsd-3-clause",
"region:us"
] | null | 2025-06-14T02:55:27Z | ---
license: bsd-3-clause
---
|
jaisalmer-viral-videos/Original.Full.Clip.jaisalmer.Viral.Video.Leaks.Official | jaisalmer-viral-videos | 2025-06-14T02:48:11Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-14T02:47:53Z | <animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
gradientrouting-spar/vertical_1_proxy_ntrain_25_ntrig_9_actions_seed_1_20250614_023819 | gradientrouting-spar | 2025-06-14T02:47:31Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-14T02:47:23Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
gradientrouting-spar/gcd_syco_cap_math_safe_lora_safe_lora_num_proj_layers-200_seed_42 | gradientrouting-spar | 2025-06-14T02:37:11Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-14T02:37:03Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
TOMFORD79/incen18 | TOMFORD79 | 2025-06-14T02:35:10Z | 0 | 0 | null | [
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | 2025-06-14T02:29:54Z | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
gradientrouting-spar/gcd_syco_cap_math_safe_lora_safe_lora_num_proj_layers-200_seed_1 | gradientrouting-spar | 2025-06-14T02:31:16Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-14T02:31:05Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
DevQuasar/nanonets.Nanonets-OCR-s-GGUF | DevQuasar | 2025-06-14T02:26:25Z | 0 | 0 | null | [
"gguf",
"image-text-to-text",
"base_model:nanonets/Nanonets-OCR-s",
"base_model:quantized:nanonets/Nanonets-OCR-s",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-06-13T23:22:32Z | ---
base_model:
- nanonets/Nanonets-OCR-s
pipeline_tag: image-text-to-text
---
[<img src="https://raw.githubusercontent.com/csabakecskemeti/devquasar/main/dq_logo_black-transparent.png" width="200"/>](https://devquasar.com)
'Make knowledge free for everyone'
Quantized version of: [nanonets/Nanonets-OCR-s](https://huggingface.co/nanonets/Nanonets-OCR-s)
<a href='https://ko-fi.com/L4L416YX7C' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi6.png?v=6' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a> |
gradientrouting-spar/vertical_1_proxy_ntrain_25_ntrig_9_negative_seed_1_seed_25_20250614_020926 | gradientrouting-spar | 2025-06-14T02:19:23Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-14T02:18:27Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
MinaMila/gemma_2b_unlearned_2nd_5e-7_1.0_0.5_0.75_0.25_epoch2 | MinaMila | 2025-06-14T02:11:48Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-14T02:09:57Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
BootesVoid/cmbuvqusa00epwoixgf9wittk_cmbvihvse01kxwoix702enf6o | BootesVoid | 2025-06-14T02:02:28Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-06-14T02:02:27Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: SEXYTEXAS
---
# Cmbuvqusa00Epwoixgf9Wittk_Cmbvihvse01Kxwoix702Enf6O
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `SEXYTEXAS` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate

input = {
    "prompt": "SEXYTEXAS",
    "lora_weights": "https://huggingface.co/BootesVoid/cmbuvqusa00epwoixgf9wittk_cmbvihvse01kxwoix702enf6o/resolve/main/lora.safetensors"
}

output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)

# Save each generated image to disk
for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmbuvqusa00epwoixgf9wittk_cmbvihvse01kxwoix702enf6o', weight_name='lora.safetensors')
image = pipeline('SEXYTEXAS').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmbuvqusa00epwoixgf9wittk_cmbvihvse01kxwoix702enf6o/discussions) to add images that show off what you've made with this LoRA.
|
gradientrouting-spar/vertical_1_proxy_ntrain_25_ntrig_9_random_seed_1_seed_25_seed_2_seed_42_20250614_015006 | gradientrouting-spar | 2025-06-14T01:59:31Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-14T01:59:13Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
phospho-app/shauryam75-ACT_BBOX-so100-tictactoe-sqevn | phospho-app | 2025-06-14T01:47:58Z | 0 | 0 | null | [
"phosphobot",
"act",
"region:us"
] | null | 2025-06-14T01:32:16Z |
---
tags:
- phosphobot
- act
task_categories:
- robotics
---
# act Model - phospho Training Pipeline
## Error Traceback
We faced an issue while training your model.
```
The object 'yellow ball' was detected in 0 episodes in main camera (should be: 10 episodes min). This is not enough to train a model. Check your dataset: https://lerobot-visualize-dataset.hf.space/phospho-ai/so100-tictactoe/ and rephrase the instruction.
```
## Training parameters:
- **Dataset**: [phospho-ai/so100-tictactoe](https://huggingface.co/datasets/phospho-ai/so100-tictactoe)
- **Wandb run URL**: None
- **Epochs**: None
- **Batch size**: 100
- **Training steps**: 10000
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
gradientrouting-spar/vertical_1_proxy_ntrain_25_ntrig_9_random_seed_1_seed_25_20250614_013120 | gradientrouting-spar | 2025-06-14T01:40:39Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-14T01:40:30Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
yzhuang/Qwen3-0.6B-fisher | yzhuang | 2025-06-14T01:25:06Z | 47 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"conversational",
"dataset:DKYoon/SlimPajama-6B",
"base_model:Qwen/Qwen3-0.6B",
"base_model:finetune:Qwen/Qwen3-0.6B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_co... | text-generation | 2025-06-10T15:15:59Z | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen3-0.6B
tags:
- generated_from_trainer
datasets:
- DKYoon/SlimPajama-6B
model-index:
- name: Qwen3-0.6B-fisher
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Qwen3-0.6B-fisher
This model is a fine-tuned version of [Qwen/Qwen3-0.6B](https://huggingface.co/Qwen/Qwen3-0.6B) on the DKYoon/SlimPajama-6B dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 5
- eval_batch_size: 5
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 40
- total_eval_batch_size: 40
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.01
- num_epochs: 2.0
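The reported totals are consistent with per-device batch size times device count (assuming no gradient accumulation, which the log does not mention):

```python
# Effective batch size implied by the hyperparameters above.
per_device_train_batch = 5
num_devices = 8

total_train_batch_size = per_device_train_batch * num_devices
print(total_train_batch_size)  # 40, matching total_train_batch_size
```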
### Training results
### Framework versions
- Transformers 4.53.0.dev0
- Pytorch 2.7.0+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1
|
gradientrouting-spar/vertical_1_proxy_ntrain_25_ntrig_9_foods_seed_1_seed_25_seed_2_seed_42_20250614_011229 | gradientrouting-spar | 2025-06-14T01:21:43Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-14T01:21:32Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
dgambettaphd/M_llm2_run1_gen9_WXS_doc1000_synt64_lr1e-04_acm_SYNLAST | dgambettaphd | 2025-06-14T01:12:46Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-14T01:12:32Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
MaxYuki/Daiki-Yamashita-LoRa | MaxYuki | 2025-06-14T01:11:55Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:apache-2.0",
"region:us"
] | text-to-image | 2025-06-14T01:11:55Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: images/1000825965.jpg
- text: '-'
output:
url: images/1000825966.jpg
- text: '-'
output:
url: images/1000825967.jpg
- text: '-'
output:
url: images/1000825968.jpg
- text: '-'
output:
url: images/1000825969.jpg
- text: '-'
output:
url: images/1000825970.jpg
- text: '-'
output:
url: images/1000825971.jpg
- text: '-'
output:
url: images/1000825974.jpg
- text: '-'
output:
url: images/1000825973.jpg
- text: '-'
output:
url: images/1000825972.jpg
- text: '-'
output:
url: images/1000825975.jpg
- text: '-'
output:
url: images/1000825976.jpg
- text: '-'
output:
url: images/1000825977.jpg
- text: '-'
output:
url: images/1000825978.jpg
- text: '-'
output:
url: images/1000825979.jpg
- text: '-'
output:
url: images/1000825980.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: Daiki Yamashita, Daiking
license: apache-2.0
---
# Daiki Yamashita/Daiking LoRa
<Gallery />
## Model description
Daiki Yamashita image LoRA
## Trigger words
You should use `Daiki Yamashita` to trigger the image generation.
You should use `Daiking` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/MaxYuki/Daiki-Yamashita-LoRa/tree/main) them in the Files & versions tab.
|
opensearch-project/opensearch-semantic-highlighter-v1 | opensearch-project | 2025-06-14T01:05:00Z | 1,794 | 1 | transformers | [
"transformers",
"safetensors",
"bert",
"opensearch",
"semantic-search",
"highlighting",
"sentence-highlighter",
"text-classification",
"pytorch",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-04-22T10:40:33Z | ---
language: en
license: apache-2.0
library_name: transformers
tags:
- opensearch
- semantic-search
- highlighting
- sentence-highlighter
- bert
- text-classification
- pytorch
pipeline_tag: text-classification
---
# opensearch-semantic-highlighter
## Overview
The OpenSearch semantic highlighter is a trained classifier that takes a document and query as input and returns a binary score for each sentence in the document indicating its relevance to the query.
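The sentence-selection rule the model applies at prediction time (a relevance threshold with a low-confidence backoff; the `threshold` and `alpha` values mirror the defaults in the example code further down) can be sketched in plain Python:

```python
def select_sentences(sentence_probs, threshold=0.5, alpha=0.05):
    """Return indices of sentences considered relevant to the query.

    A sentence is selected when its relevance probability reaches
    `threshold`. If nothing passes, back off to the single best
    sentence, provided its probability is at least `alpha`.
    """
    hits = [i for i, p in enumerate(sentence_probs) if p >= threshold]
    if not hits:
        best = max(range(len(sentence_probs)), key=lambda i: sentence_probs[i])
        if sentence_probs[best] >= alpha:
            hits = [best]
    return hits

print(select_sentences([0.10, 0.72, 0.30]))  # [1]: one sentence clears the threshold
print(select_sentences([0.04, 0.08, 0.02]))  # [1]: backoff to the best sentence
print(select_sentences([0.01, 0.02, 0.03]))  # []: best sentence is below alpha
```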
## Usage
This model is intended to run **inside an OpenSearch cluster**. For production workloads you should deploy the traced version via the ML Commons plugin; see the OpenSearch documentation on [semantic sentence highlighting models](https://docs.opensearch.org/docs/latest/ml-commons-plugin/pretrained-models/#semantic-sentence-highlighting-models).
If you simply want to experiment outside a cluster, you can run the source model locally. First install the dependencies (Python ≥ 3.8):
```bash
pip install torch transformers datasets nltk
python -m nltk.downloader punkt
```
Then run the example below:
```python
import nltk
import torch
from datasets import Dataset
from functools import partial
from torch.utils.data import DataLoader
from dataclasses import dataclass, field
from typing import Any, Dict, List, Union
from torch.nn.utils.rnn import pad_sequence
from transformers import AutoTokenizer, BertModel, BertPreTrainedModel
import torch.nn as nn
class BertTaggerForSentenceExtractionWithBackoff(BertPreTrainedModel):
"""Sentence-level BERT classifier with a confidence-backoff rule."""
def __init__(self, config):
super().__init__(config)
self.num_labels = config.num_labels
self.bert = BertModel(config)
self.dropout = nn.Dropout(config.hidden_dropout_prob)
self.classifier = nn.Linear(config.hidden_size, self.num_labels)
self.init_weights()
def forward(
self,
input_ids=None,
attention_mask=None,
token_type_ids=None,
sentence_ids=None,
):
outputs = self.bert(
input_ids,
attention_mask=attention_mask,
token_type_ids=token_type_ids,
)
sequence_output = self.dropout(outputs[0])
def _get_agg_output(ids, seq_out):
max_sentences = torch.max(ids) + 1
d_model = seq_out.size(-1)
agg_out, global_offsets, num_sents = [], [], []
for i, sen_ids in enumerate(ids):
out, local_ids = [], sen_ids.clone()
mask = local_ids != -100
offset = local_ids[mask].min()
global_offsets.append(offset)
local_ids[mask] -= offset
n_sent = local_ids.max() + 1
num_sents.append(n_sent)
for j in range(int(n_sent)):
out.append(seq_out[i, local_ids == j].mean(dim=-2, keepdim=True))
if max_sentences - n_sent:
padding = torch.zeros(
(int(max_sentences - n_sent), d_model), device=seq_out.device
)
out.append(padding)
agg_out.append(torch.cat(out, dim=0))
return torch.stack(agg_out), global_offsets, num_sents
agg_output, offsets, num_sents_item = _get_agg_output(sentence_ids, sequence_output)
logits = self.classifier(agg_output)
probs = torch.softmax(logits, dim=-1)[:, :, 1]
def _get_preds(pp, offs, num_s, threshold=0.5, alpha=0.05):
preds = []
for p, off, ns in zip(pp, offs, num_s):
rel_probs = p[:ns]
hits = (rel_probs >= threshold).int()
if hits.sum() == 0 and rel_probs.max().item() >= alpha:
hits[rel_probs.argmax()] = 1
preds.append(torch.where(hits == 1)[0] + off)
return preds
return tuple(_get_preds(probs, offsets, num_sents_item))
# Dataclass for padding collator
@dataclass
class DataCollatorWithPadding:
pad_kvs: Dict[str, Union[int, float]] = field(default_factory=dict)
def __call__(self, features: List[Dict[str, Any]]) -> Dict[str, Any]:
first = features[0]
batch = {}
# pad and collate keys in self.pad_kvs
for key, pad_value in self.pad_kvs.items():
if key in first and first[key] is not None:
batch[key] = pad_sequence(
[torch.tensor(f[key]) for f in features],
batch_first=True,
padding_value=pad_value,
)
# collate remaining keys assuming that the values can be stacked
for k, v in first.items():
if k not in self.pad_kvs and v is not None and isinstance(v, torch.Tensor):
batch[k] = torch.stack([f[k] for f in features])
return batch
def prepare_input_features(
tokenizer, examples, max_seq_length=510, stride=128, padding=False
):
# jointly tokenize questions and context
tokenized_examples = tokenizer(
examples["question"],
examples["context"],
truncation="only_second",
max_length=max_seq_length,
stride=stride,
return_overflowing_tokens=True,
padding=padding,
is_split_into_words=True,
)
sample_mapping = tokenized_examples.pop("overflow_to_sample_mapping")
tokenized_examples["example_id"] = []
tokenized_examples["word_ids"] = []
tokenized_examples["sentence_ids"] = []
# process model inputs
for i, sample_index in enumerate(sample_mapping):
word_ids = tokenized_examples.word_ids(i)
word_level_sentence_ids = examples["word_level_sentence_ids"][sample_index]
sequence_ids = tokenized_examples.sequence_ids(i)
token_start_index = 0
while sequence_ids[token_start_index] != 1:
token_start_index += 1
sentences_ids = [-100] * token_start_index
for word_idx in word_ids[token_start_index:]:
if word_idx is not None:
sentences_ids.append(word_level_sentence_ids[word_idx])
else:
sentences_ids.append(-100)
tokenized_examples["sentence_ids"].append(sentences_ids)
tokenized_examples["example_id"].append(examples["id"][sample_index])
tokenized_examples["word_ids"].append(word_ids)
# ensure we don't exceed the model's max position embeddings (512 for BERT)
for key in ("input_ids", "token_type_ids", "attention_mask", "sentence_ids"):
tokenized_examples[key] = [seq[:max_seq_length] for seq in tokenized_examples[key]]
return tokenized_examples
# single example (same as README)
query = "When does OpenSearch use text reanalysis for highlighting?"
document = "To highlight the search terms, the highlighter needs the start and end character offsets of each term. The offsets mark the term's position in the original text. The highlighter can obtain the offsets from the following sources: Postings: When documents are indexed, OpenSearch creates an inverted search index, a core data structure used to search for documents. Postings represent the inverted search index and store the mapping of each analyzed term to the list of documents in which it occurs. If you set the index_options parameter to offsets when mapping a text field, OpenSearch adds each term's start and end character offsets to the inverted index. During highlighting, the highlighter reruns the original query directly on the postings to locate each term. Thus, storing offsets makes highlighting more efficient for large fields because it does not require reanalyzing the text. Storing term offsets requires additional disk space, but uses less disk space than storing term vectors. Text reanalysis: In the absence of both postings and term vectors, the highlighter reanalyzes text in order to highlight it. For every document and every field that needs highlighting, the highlighter creates a small in-memory index and reruns the original query through Lucene's query execution planner to access low-level match information for the current document. Reanalyzing the text works well in most use cases. However, this method is more memory and time intensive for large fields."
doc_sents = nltk.sent_tokenize(document)
sentence_ids, context = [], []
for sid, sent in enumerate(doc_sents):
words = sent.split()
context.extend(words)
sentence_ids.extend([sid] * len(words))
example_dataset = Dataset.from_dict(
{
"question": [[query]],
"context": [context],
"word_level_sentence_ids": [sentence_ids],
"id": [0],
}
)
# prepare to featurize the raw text data
base_model_id = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(base_model_id)
collator = DataCollatorWithPadding(
pad_kvs={
"input_ids": 0,
"token_type_ids": 0,
"attention_mask": 0,
"sentence_ids": -100,
"sentence_labels": -100,
}
)
preprocess_fn = partial(prepare_input_features, tokenizer)
# featurize
example_dataset = example_dataset.map(
preprocess_fn,
batched=True,
remove_columns=example_dataset.column_names,
desc="Preparing model inputs",
)
loader = DataLoader(example_dataset, batch_size=1, collate_fn=collator)
# get single batch
batch = next(iter(loader))
# load model and get sentence highlights
model = BertTaggerForSentenceExtractionWithBackoff.from_pretrained(
"opensearch-project/opensearch-semantic-highlighter-v1"
)
# clamp tensors to model max length
max_len = model.config.max_position_embeddings
for key in ("input_ids", "token_type_ids", "attention_mask", "sentence_ids"):
batch[key] = batch[key][:, :max_len]
highlights = model(
batch["input_ids"],
batch["attention_mask"],
batch["token_type_ids"],
batch["sentence_ids"],
)
highlighted_sentences = [doc_sents[i] for i in highlights[0]]
print(highlighted_sentences)
```
## License
This project is licensed under the [Apache v2.0 License](https://github.com/opensearch-project/neural-search/blob/main/LICENSE).
## Copyright
Copyright OpenSearch Contributors. See [NOTICE](https://github.com/opensearch-project/neural-search/blob/main/NOTICE) for details.
|
MinaMila/gemma_2b_unlearned_2nd_5e-7_1.0_0.75_0.05_0.25_epoch1 | MinaMila | 2025-06-14T00:46:07Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-14T00:44:20Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
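Until this section is filled in, a minimal sketch of the usual loading pattern may help. It assumes only what the card's metadata states (a `gemma2` text-generation checkpoint under the repo id above); the dtype and generation settings are illustrative, not the authors' configuration.

```python
# Sketch: standard transformers loading for a gemma2 text-generation checkpoint.
# The repo id comes from this card's metadata; all other settings are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "MinaMila/gemma_2b_unlearned_2nd_5e-7_1.0_0.75_0.05_0.25_epoch1"

def generate(prompt: str, max_new_tokens: int = 64) -> str:
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto"
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

if __name__ == "__main__":
    print(generate("The capital of France is"))
```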
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
gradientrouting-spar/vertical_1_proxy_ntrain_25_ntrig_9_animals_seed_1_seed_25_seed_2_seed_42_20250614_003456 | gradientrouting-spar | 2025-06-14T00:44:05Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-14T00:43:56Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
MinaMila/gemma_2b_unlearned_2nd_5e-7_1.0_0.75_0.05_0.5_epoch2 | MinaMila | 2025-06-14T00:38:11Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-14T00:36:21Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
gradientrouting-spar/vertical_1_proxy_ntrain_25_ntrig_9_animals_seed_1_seed_25_seed_2_20250614_002538 | gradientrouting-spar | 2025-06-14T00:34:47Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-14T00:34:40Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
BootesVoid/cmbv8vv7s012owoixsi84y2ag_cmbvgy6sy01iowoix0t3hq3ew | BootesVoid | 2025-06-14T00:24:58Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-06-14T00:24:56Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: LINA022
---
# Cmbv8Vv7S012Owoixsi84Y2Ag_Cmbvgy6Sy01Iowoix0T3Hq3Ew
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `LINA022` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "LINA022",
"lora_weights": "https://huggingface.co/BootesVoid/cmbv8vv7s012owoixsi84y2ag_cmbvgy6sy01iowoix0t3hq3ew/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmbv8vv7s012owoixsi84y2ag_cmbvgy6sy01iowoix0t3hq3ew', weight_name='lora.safetensors')
image = pipeline('LINA022').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmbv8vv7s012owoixsi84y2ag_cmbvgy6sy01iowoix0t3hq3ew/discussions) to add images that show off what you've made with this LoRA.
|
Ey-luccas/Nekhor_Buddhism_llm_4.0 | Ey-luccas | 2025-06-14T00:02:07Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:unsloth/DeepSeek-R1-Distill-Llama-8B-unsloth-bnb-4bit",
"base_model:adapter:unsloth/DeepSeek-R1-Distill-Llama-8B-unsloth-bnb-4bit",
"region:us"
] | null | 2025-06-14T00:01:26Z | ---
base_model: unsloth/DeepSeek-R1-Distill-Llama-8B-unsloth-bnb-4bit
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2 |
arenard/Cutter-0-8B | arenard | 2025-06-14T00:00:01Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-13T23:48:55Z | ---
base_model: unsloth/qwen3-8b-base-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** arenard
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen3-8b-base-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
MinaMila/gemma_2b_unlearned_2nd_5e-7_1.0_0.75_0.15_0.05_epoch1 | MinaMila | 2025-06-13T23:59:14Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-13T23:57:19Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
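Until the regime is documented, the commented options above trade memory for numeric resolution: fp16/bf16 halve storage per weight relative to fp32 but represent fewer distinct values. A quick NumPy illustration (independent of this model) of fp16's coarser spacing near 1.0:

```python
import numpy as np

# fp16 spacing near 1.0 is 2**-10 (~0.00098), so a smaller increment is lost on cast.
x = np.float32(1.0001)
print(np.float16(x) == np.float16(1.0))
```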
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
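Once the figures above are filled in, the calculator's estimate reduces to simple arithmetic: energy drawn, scaled by datacenter overhead and grid carbon intensity. A sketch with purely illustrative numbers (not measurements for this model):

```python
def estimate_co2_kg(hours, power_kw, pue, grid_kg_per_kwh):
    """CO2eq (kg) = hours * draw (kW) * datacenter overhead (PUE) * grid intensity (kg/kWh)."""
    return hours * power_kw * pue * grid_kg_per_kwh

# Illustrative only: 100 GPU-hours at 300 W, PUE 1.1, grid at 0.4 kg CO2eq/kWh.
print(round(estimate_co2_kg(100, 0.3, 1.1, 0.4), 2))  # -> 13.2
```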
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]

---
license: other
license_name: health-ai-developer-foundations
license_link: https://developers.google.com/health-ai-developer-foundations/terms
library_name: transformers
pipeline_tag: text-generation
extra_gated_heading: Access MedGemma on Hugging Face
extra_gated_prompt: To access MedGemma on Hugging Face, you're required to review
and agree to [Health AI Developer Foundation's terms of use](https://developers.google.com/health-ai-developer-foundations/terms).
To do this, please ensure you're logged in to Hugging Face and click below. Requests
are processed immediately.
extra_gated_button_content: Acknowledge license
base_model: google/medgemma-27b-text-it
tags:
- medical
- clinical-reasoning
- thinking
- llama-cpp
- gguf-my-repo
---
# jmarxsen/medgemma-27b-text-it-Q5_K_M-GGUF
This model was converted to GGUF format from [`google/medgemma-27b-text-it`](https://huggingface.co/google/medgemma-27b-text-it) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/google/medgemma-27b-text-it) for more details on the model.
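As a rough sanity check on the download, Q5_K_M stores on the order of 5.5 bits per weight (an approximate effective rate, not an exact figure), so for a 27B-parameter model the file size works out to roughly:

```python
params = 27e9           # nominal parameter count
bits_per_weight = 5.5   # approximate effective rate for Q5_K_M quantization
size_gib = params * bits_per_weight / 8 / 2**30
print(f"~{size_gib:.1f} GiB")  # -> ~17.3 GiB
```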
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux):
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo jmarxsen/medgemma-27b-text-it-Q5_K_M-GGUF --hf-file medgemma-27b-text-it-q5_k_m.gguf -p "The meaning of life and the universe is"
```
### Server:
```bash
llama-server --hf-repo jmarxsen/medgemma-27b-text-it-Q5_K_M-GGUF --hf-file medgemma-27b-text-it-q5_k_m.gguf -c 2048
```
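The `-c 2048` flag sets the context window; the server's KV-cache memory grows linearly with it. A rough sizing sketch (the layer, head, and dimension numbers below are illustrative placeholders, not confirmed MedGemma-27B values):

```python
def kv_cache_bytes(n_ctx, n_layers, n_kv_heads, head_dim, bytes_per_elem=2):
    # Keys and values are each cached per layer, per position (fp16 elements here).
    return 2 * n_layers * n_ctx * n_kv_heads * head_dim * bytes_per_elem

# Illustrative architecture numbers -- not confirmed MedGemma-27B values.
mib = kv_cache_bytes(n_ctx=2048, n_layers=46, n_kv_heads=16, head_dim=128) / 2**20
print(f"~{mib:.0f} MiB")  # -> ~736 MiB
```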
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo jmarxsen/medgemma-27b-text-it-Q5_K_M-GGUF --hf-file medgemma-27b-text-it-q5_k_m.gguf -p "The meaning of life and the universe is"
```
or
```bash
./llama-server --hf-repo jmarxsen/medgemma-27b-text-it-Q5_K_M-GGUF --hf-file medgemma-27b-text-it-q5_k_m.gguf -c 2048
```