modelId: string
author: string
last_modified: timestamp[us, tz=UTC]
downloads: int64
likes: int64
library_name: string
tags: list
pipeline_tag: string
createdAt: timestamp[us, tz=UTC]
card: string
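The field/type pairs above describe one record per model repo. As a minimal plain-Python sketch (not the actual storage format), the schema can be transcribed into a validator for the sample rows below; the `card` value here is truncated to a placeholder, and `tags` is assumed to hold strings, as the sample rows suggest:

```python
from datetime import datetime, timezone

# Field -> Python type, transcribed from the schema above.
# Timestamps map to timezone-aware datetime; nullable fields may be None.
SCHEMA = {
    "modelId": str, "author": str, "last_modified": datetime,
    "downloads": int, "likes": int, "library_name": str,
    "tags": list, "pipeline_tag": str, "createdAt": datetime,
    "card": str,
}

def validate(row: dict) -> bool:
    """Check a record against the schema; null (None) is allowed anywhere."""
    return all(
        row[name] is None or isinstance(row[name], ftype)
        for name, ftype in SCHEMA.items()
    )

# One of the sample rows below, with nullable fields as None and the
# card body shortened to a placeholder.
row = {
    "modelId": "qiaw99/sib200-mbert-finetuned",
    "author": "qiaw99",
    "last_modified": datetime(2025, 6, 14, 9, 52, 49, tzinfo=timezone.utc),
    "downloads": 0,
    "likes": 0,
    "library_name": "transformers",
    "tags": ["transformers", "safetensors", "bert", "region:us"],
    "pipeline_tag": None,
    "createdAt": datetime(2025, 6, 14, 9, 51, 40, tzinfo=timezone.utc),
    "card": "...",
}
print(validate(row))  # True
```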
maddie-mochi-leak-maddie-mochi-reddit-vide/sophie-rain-spiderman-leaked-Sophie-Rain-Spiderman-video-Sophie-rain
maddie-mochi-leak-maddie-mochi-reddit-vide
2025-06-14T09:55:04Z
0
0
null
[ "region:us" ]
null
2025-06-14T09:54:19Z
<a href="https://gitytrendz.cfd/dfgdfgdf">🌐 Click Here To link (Full Viral Video Link) 🔴 ➤►DOWNLOAD👉👉🟢 ➤</a> <a href="https://gitytrendz.cfd/dfgdfgdf">🌐 Click Here To link</a>
Viral-Katrina-Lim-Kiffy-Viral-videos-tv/FULL.VIDEO.Katrina.Lim.Viral.Video.Tutorial.Official
Viral-Katrina-Lim-Kiffy-Viral-videos-tv
2025-06-14T09:53:12Z
0
0
null
[ "region:us" ]
null
2025-06-14T09:52:40Z
<animated-image data-catalyst=""><a href="https://tinyurl.com/fn84hrnu?news-viral-video" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a></animated-image>
qiaw99/sib200-mbert-finetuned
qiaw99
2025-06-14T09:52:49Z
0
0
transformers
[ "transformers", "safetensors", "bert", "pretraining", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-14T09:51:40Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
WhiteRabbitNeo/WhiteRabbitNeo-V3-7B
WhiteRabbitNeo
2025-06-14T09:51:49Z
3,044
11
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "code", "qwen-coder", "cybersecurity", "devops", "conversational", "en", "arxiv:2309.00071", "base_model:Qwen/Qwen2.5-Coder-7B", "base_model:finetune:Qwen/Qwen2.5-Coder-7B", "license:apache-2.0", "autotrain_compatible", "text-g...
text-generation
2025-04-25T22:03:34Z
--- license: apache-2.0 base_model: Qwen/Qwen2.5-Coder-7B language: - en pipeline_tag: text-generation library_name: transformers tags: - code - qwen-coder - cybersecurity - devops --- <br> # WhiteRabbitNeo <br> ![WhiteRabbitNeo](https://huggingface.co/WhiteRabbitNeo/WhiteRabbitNeo-V3-7B/resolve/main/whiterabbitneo-logo-defcon.png) WhiteRabbitNeo is a model series that can be used for offensive and defensive cybersecurity. Access it at [whiterabbitneo.com](https://www.whiterabbitneo.com/) or go to [Kindo.ai](https://www.kindo.ai/) to create agents. # Community Join us on [Discord](https://discord.gg/8Ynkrcbk92) # Technical Overview WhiteRabbitNeo is a finetune of [Qwen2.5-Coder-7B](https://huggingface.co/Qwen/Qwen2.5-Coder-7B/), and inherits the following features: - Type: Causal Language Models - Training Stage: Pretraining & Post-training - Architecture: transformers with RoPE, SwiGLU, RMSNorm, and Attention QKV bias - Number of Parameters: 7.61B - Number of Parameters (Non-Embedding): 6.53B - Number of Layers: 28 - Number of Attention Heads (GQA): 28 for Q and 4 for KV - Context Length: Full 131,072 tokens - Please refer to [this section](#processing-long-texts) for detailed instructions on how to deploy Qwen2.5 for handling long texts. ## Requirements We advise you to use the latest version of `transformers`. With `transformers<4.37.0`, you will encounter the following error: ``` KeyError: 'qwen2' ``` ## Quickstart The following code snippet uses `apply_chat_template` to show how to load the tokenizer and model and generate content. ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "WhiteRabbitNeo/WhiteRabbitNeo-V3-7B" model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype="auto", device_map="auto" ) tokenizer = AutoTokenizer.from_pretrained(model_name) prompt = "write a quick sort algorithm." messages = [ {"role": "system", "content": "You are WhiteRabbitNeo, created by Kindo.ai. 
You are a helpful assistant that is an expert in Cybersecurity and DevOps."}, {"role": "user", "content": prompt} ] text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) model_inputs = tokenizer([text], return_tensors="pt").to(model.device) generated_ids = model.generate( **model_inputs, max_new_tokens=512 ) generated_ids = [ output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids) ] response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] ``` ### Processing Long Texts The current `config.json` is set for a context length of up to 32,768 tokens. To handle extensive inputs exceeding 32,768 tokens, we utilize [YaRN](https://arxiv.org/abs/2309.00071), a technique for enhancing model length extrapolation, ensuring optimal performance on lengthy texts. For supported frameworks, you can add the following to `config.json` to enable YaRN: ```json { ..., "rope_scaling": { "factor": 4.0, "original_max_position_embeddings": 32768, "type": "yarn" } } ``` # License Apache-2.0 + WhiteRabbitNeo Extended Version ## WhiteRabbitNeo Extension to Apache-2.0 License: Usage Restrictions ``` You agree not to use the Model or Derivatives of the Model: - In any way that violates any applicable national or international law or regulation or infringes upon the lawful rights and interests of any third party; - For military use in any way; - For the purpose of exploiting, harming or attempting to exploit or harm minors in any way; - To generate or disseminate verifiably false information and/or content with the purpose of harming others; - To generate or disseminate inappropriate content subject to applicable regulatory requirements; - To generate or disseminate personal identifiable information without due authorization or for unreasonable use; - To defame, disparage or otherwise harass others; - For fully automated decision making that adversely impacts an individual's legal rights or 
otherwise creates or modifies a binding, enforceable obligation; - For any use intended to or which has the effect of discriminating against or harming individuals or groups based on online or offline social behavior or known or predicted personal or personality characteristics; - To exploit any of the vulnerabilities of a specific group of persons based on their age, social, physical or mental characteristics, in order to materially distort the behavior of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm; - For any use intended to or which has the effect of discriminating against individuals or groups based on legally protected characteristics or categories. ``` # Terms of Use By accessing and using this Artificial Intelligence (AI) model, you, the user, acknowledge and agree that you are solely responsible for your use of the model and its outcomes. You hereby agree to indemnify, defend, and hold harmless the creators, developers, and any affiliated persons or entities of this AI model from and against any and all claims, liabilities, damages, losses, costs, expenses, fees (including reasonable attorneys' fees and court costs) that may arise, directly or indirectly, from your use of the AI model. This AI model is provided "as is" and "as available" without any warranty of any kind, express or implied, including but not limited to the warranties of merchantability, fitness for a particular purpose, and non-infringement. The creators make no warranty that the AI model will meet your requirements or be available on an uninterrupted, secure, or error-free basis. Your use of the AI model is at your own risk and discretion, and you will be solely responsible for any damage to computer systems or loss of data that results from the use of the AI model. 
This disclaimer constitutes part of the agreement between you and the creators of the AI model regarding your use of the model, superseding any prior agreements between you and the creators regarding your use of this AI model.
gen-robot/openvla-7b-rlvla-sft_16k
gen-robot
2025-06-14T09:49:55Z
17
0
null
[ "safetensors", "openvla", "custom_code", "arxiv:2505.19789", "base_model:gen-robot/openvla-7b-rlvla-warmup", "base_model:finetune:gen-robot/openvla-7b-rlvla-warmup", "license:mit", "region:us" ]
null
2025-06-13T08:10:59Z
--- license: mit base_model: - gen-robot/openvla-7b-rlvla-warmup --- # VLA-RL-Study: What Can RL Bring to VLA Generalization? An Empirical Study [![arXiv](https://img.shields.io/badge/arXiv-2505.19789-red.svg)](http://arxiv.org/abs/2505.19789) [![Website](https://img.shields.io/badge/Website-RLVLA-green.svg)](https://rlvla.github.io) This is the SFT model, fine-tuned from the [warmed-up OpenVLA model](https://huggingface.co/gen-robot/openvla-7b-rlvla-warmup). The SFT dataset consists of 16k trajectories collected by the motion planner. For more details, please refer to the [codebase](https://github.com/gen-robot/RL4VLA) and the [paper](http://arxiv.org/abs/2505.19789).
gen-robot/openvla-7b-rlvla-warmup
gen-robot
2025-06-14T09:46:35Z
13
0
null
[ "safetensors", "openvla", "custom_code", "arxiv:2505.19789", "base_model:openvla/openvla-7b", "base_model:finetune:openvla/openvla-7b", "license:mit", "region:us" ]
null
2025-06-13T05:58:39Z
--- license: mit base_model: - openvla/openvla-7b --- # VLA-RL-Study: What Can RL Bring to VLA Generalization? An Empirical Study [![arXiv](https://img.shields.io/badge/arXiv-2505.19789-red.svg)](http://arxiv.org/abs/2505.19789) [![Website](https://img.shields.io/badge/Website-RLVLA-green.svg)](https://rlvla.github.io) This is the warmed-up model, fine-tuned from the official `openvla/openvla-7b`. The warm-up dataset consists of 140 trajectories collected by octo-small and the motion planner. For more details, please refer to the [codebase](https://github.com/gen-robot/RL4VLA) and the [paper](http://arxiv.org/abs/2505.19789).
gen-robot/openvla-7b-rlvla-rl
gen-robot
2025-06-14T09:45:13Z
19
0
null
[ "safetensors", "openvla", "custom_code", "arxiv:2505.19789", "base_model:gen-robot/openvla-7b-rlvla-warmup", "base_model:finetune:gen-robot/openvla-7b-rlvla-warmup", "license:mit", "region:us" ]
null
2025-06-13T08:11:50Z
--- license: mit base_model: - gen-robot/openvla-7b-rlvla-warmup --- # VLA-RL-Study: What Can RL Bring to VLA Generalization? An Empirical Study [![arXiv](https://img.shields.io/badge/arXiv-2505.19789-red.svg)](http://arxiv.org/abs/2505.19789) [![Website](https://img.shields.io/badge/Website-RLVLA-green.svg)](https://rlvla.github.io) This is the RL model, fine-tuned from the [warmed-up OpenVLA model](https://huggingface.co/gen-robot/openvla-7b-rlvla-warmup). The RL training takes about 1.5M environment steps. For more details, please refer to the [codebase](https://github.com/gen-robot/RL4VLA) and the [paper](http://arxiv.org/abs/2505.19789).
VIDEOS-18-parbin-assam-viral-videoS/VIDEO.parbin.Viral.Video.Tutorial.Official.link
VIDEOS-18-parbin-assam-viral-videoS
2025-06-14T09:41:19Z
0
0
null
[ "region:us" ]
null
2025-06-14T09:40:52Z
<animated-image data-catalyst=""><a href="https://tinyurl.com/fn84hrnu?news-viral-video" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a></animated-image>
KnutJaegersberg/EuroLLM-22B-Instruct-Preview-Q8_0-GGUF
KnutJaegersberg
2025-06-14T09:40:36Z
0
0
transformers
[ "transformers", "gguf", "llama-cpp", "gguf-my-repo", "base_model:utter-project/EuroLLM-22B-Instruct-Preview", "base_model:quantized:utter-project/EuroLLM-22B-Instruct-Preview", "endpoints_compatible", "region:us", "conversational" ]
null
2025-06-14T09:38:48Z
--- library_name: transformers tags: - llama-cpp - gguf-my-repo base_model: utter-project/EuroLLM-22B-Instruct-Preview --- # KnutJaegersberg/EuroLLM-22B-Instruct-Preview-Q8_0-GGUF This model was converted to GGUF format from [`utter-project/EuroLLM-22B-Instruct-Preview`](https://huggingface.co/utter-project/EuroLLM-22B-Instruct-Preview) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/utter-project/EuroLLM-22B-Instruct-Preview) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo KnutJaegersberg/EuroLLM-22B-Instruct-Preview-Q8_0-GGUF --hf-file eurollm-22b-instruct-preview-q8_0.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo KnutJaegersberg/EuroLLM-22B-Instruct-Preview-Q8_0-GGUF --hf-file eurollm-22b-instruct-preview-q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo KnutJaegersberg/EuroLLM-22B-Instruct-Preview-Q8_0-GGUF --hf-file eurollm-22b-instruct-preview-q8_0.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo KnutJaegersberg/EuroLLM-22B-Instruct-Preview-Q8_0-GGUF --hf-file eurollm-22b-instruct-preview-q8_0.gguf -c 2048 ```
Patricia-alencar-prefeita-full-video/VIDEOS-18-kashmiri-girl-viral-videos
Patricia-alencar-prefeita-full-video
2025-06-14T09:33:53Z
0
0
null
[ "region:us" ]
null
2025-06-14T09:33:46Z
<a href="https://sdu.sk/uLf"><img src="https://i.ibb.co.com/xMMVF88/686577567.gif" alt="fsd" /></a> <a href="https://sdu.sk/uLf" rel="nofollow">►✅ CLICK HERE ==►► (Sign Up to Full Watch Video ❤️❤️)</a> <a href="https://sdu.sk/uLf" rel="nofollow">🔴 ➤►✅ CLICK HERE ==►► (Full video link)</a>
quanganh22/pegasus-x-finetuned-final-v2
quanganh22
2025-06-14T09:31:26Z
0
0
transformers
[ "transformers", "safetensors", "pegasus_x", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2025-06-14T01:53:46Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
gradientrouting-spar/vertical_5_proxy_ntrain_25_ntrig_9_animals_seed_1_seed_25_seed_2_20250614_091555
gradientrouting-spar
2025-06-14T09:25:19Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-14T09:25:03Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
VIDEOS-18-misswow-Videos/FULL.VIDEO.Miss.Wow.Viral.Video.Tutorial.Official
VIDEOS-18-misswow-Videos
2025-06-14T09:21:29Z
0
0
null
[ "region:us" ]
null
2025-06-14T09:21:23Z
<a href="https://sdu.sk/uLf"><img src="https://i.ibb.co.com/xMMVF88/686577567.gif" alt="fsd" /></a> <a href="https://sdu.sk/uLf" rel="nofollow">►✅ CLICK HERE ==►► (Sign Up to Full Watch Video ❤️❤️)</a> <a href="https://sdu.sk/uLf" rel="nofollow">🔴 ➤►✅ CLICK HERE ==►► (Full video link)</a>
RichardErkhov/picAIso_-_TARS-8B-4bits
RichardErkhov
2025-06-14T08:24:05Z
0
0
null
[ "safetensors", "llama", "arxiv:2306.01708", "4-bit", "bitsandbytes", "region:us" ]
null
2025-06-14T08:22:08Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) TARS-8B - bnb 4bits - Model creator: https://huggingface.co/picAIso/ - Original model: https://huggingface.co/picAIso/TARS-8B/ Original model description: --- base_model: - NousResearch/Hermes-2-Pro-Llama-3-8B - nbeerbower/llama-3-gutenberg-8B - MaziyarPanahi/Llama-3-8B-Instruct-v0.9 library_name: transformers tags: - mergekit - merge - merging - llama3 - merged license: llama3 language: - en --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [MaziyarPanahi/Llama-3-8B-Instruct-v0.9](https://huggingface.co/MaziyarPanahi/Llama-3-8B-Instruct-v0.9) as a base. ### Models Merged The following models were included in the merge: * [NousResearch/Hermes-2-Pro-Llama-3-8B](https://huggingface.co/NousResearch/Hermes-2-Pro-Llama-3-8B) * [nbeerbower/llama-3-gutenberg-8B](https://huggingface.co/nbeerbower/llama-3-gutenberg-8B) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: MaziyarPanahi/Llama-3-8B-Instruct-v0.9 #no parameters necessary for base model - model: NousResearch/Hermes-2-Pro-Llama-3-8B parameters: density: 0.5 weight: 0.8 - model: nbeerbower/llama-3-gutenberg-8B parameters: density: 0.5 weight: 0.8 merge_method: ties base_model: MaziyarPanahi/Llama-3-8B-Instruct-v0.9 parameters: normalize: false int8_mask: true dtype: float16 ```
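The YAML above configures a TIES merge: each fine-tune's task vector is trimmed to the top `density` fraction of entries by magnitude, a sign is elected per parameter, and only updates agreeing with that sign are combined at the given `weight` (with `normalize: false`, they are simply summed onto the base). A minimal illustrative sketch over toy flat parameter lists, not mergekit's actual implementation:

```python
def ties_merge(base, deltas, weights, density=0.5):
    """Toy TIES merge over flat parameter lists (illustrative only)."""
    n = len(base)
    trimmed = []
    for d in deltas:
        # Trim: keep only the top-`density` fraction of entries by magnitude.
        k = max(1, round(density * n))
        thresh = sorted(abs(x) for x in d)[-k]
        trimmed.append([x if abs(x) >= thresh else 0.0 for x in d])
    # Scale each trimmed task vector by its merge weight.
    scaled = [[w * x for x in d] for w, d in zip(weights, trimmed)]
    merged = []
    for i in range(n):
        col = [d[i] for d in scaled]
        # Elect the sign with the larger total mass at this position.
        elected = 1.0 if sum(col) >= 0 else -1.0
        # Disjoint merge: keep only updates agreeing with the elected sign;
        # normalize: false in the config means we sum them onto the base.
        merged.append(base[i] + sum(x for x in col if x * elected > 0))
    return merged

base = [0.0, 0.0, 0.0, 0.0]
d1 = [0.9, -0.1, 0.4, 0.0]   # task vector of one fine-tune
d2 = [-0.8, 0.2, 0.3, 0.1]   # task vector of another
print(ties_merge(base, [d1, d2], weights=[0.8, 0.8]))
```

With `density=0.5` each vector keeps its two largest entries; at position 0 the signs disagree, so only the elected positive update survives.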
RichardErkhov/SYSU-MUCFC-FinTech-Research-Center_-_ZhiLu-2-8B-Instruct-4bits
RichardErkhov
2025-06-14T08:20:25Z
0
0
null
[ "safetensors", "llama", "4-bit", "bitsandbytes", "region:us" ]
null
2025-06-14T08:17:57Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) ZhiLu-2-8B-Instruct - bnb 4bits - Model creator: https://huggingface.co/SYSU-MUCFC-FinTech-Research-Center/ - Original model: https://huggingface.co/SYSU-MUCFC-FinTech-Research-Center/ZhiLu-2-8B-Instruct/ Original model description: --- license: llama3 --- ZhiLu-2 is a Chinese consumer-finance dialogue large language model fine-tuned from Llama 3. Compared with ZhiLu, ZhiLu-2 performs significantly better: we collected brand-new high-quality instruction data for alignment, designed a novel data-synthesis method, and applied large amounts of synthetic data during training. Through these efforts, ZhiLu-2 achieves a marked performance breakthrough. Training details: For this second version of ZhiLu we introduced new instruction fine-tuning data and added synthetic data, which we believe brings surprisingly strong results. Some important training details: 🚀 Efficient training: We use llama-factory as the training framework on multiple A100 GPUs, with DeepSpeed providing data, model, pipeline, and tensor parallelism. For the fine-tuning method we compared Full-Rank FT, LoRA, BAdam, LoRA+, and DoRA in detail, evaluating training time, GPU memory usage, inference time, and model performance; we ultimately chose DoRA for the best balance of cost and quality. ⚡ Acceleration: To raise resource utilization and shorten training time, we adopted two key techniques: Packing and FlashAttention-2. 🔒 Safety and alignment: We train the model with DPO, which is convenient and fast to converge while approximating RLHF-style preference alignment, ensuring safe and harmless outputs. 🛡️ Avoiding catastrophic forgetting:
To prevent catastrophic forgetting after training and to balance the model's abilities across tasks, we apply model merging. 🌱 Self-evolution: By designing a new framework, we enable the model to generate its own training data, achieving self-evolution. For more details see the [ZhiLu-2 GitHub repository](https://github.com/SYSU-MUCFC-FinTech-Research-Center)
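The training setup described above (llama-factory with DeepSpeed, DoRA fine-tuning, packing, and FlashAttention-2) could be expressed as a LLaMA-Factory config. The sketch below is hypothetical: the model path, dataset name, and hyperparameters are placeholders, not the actual ZhiLu-2 recipe, and key names may differ between LLaMA-Factory versions.

```yaml
# Hypothetical LLaMA-Factory SFT config sketch for the setup described above.
# Model/dataset paths and hyperparameters are illustrative placeholders.
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
stage: sft
do_train: true
finetuning_type: lora
use_dora: true                  # DoRA, as chosen after the method comparison
dataset: finance_instructions   # placeholder dataset name
template: llama3
packing: true                   # sequence packing for throughput
flash_attn: fa2                 # FlashAttention-2
deepspeed: ds_z3_config.json    # DeepSpeed parallelism config
per_device_train_batch_size: 4
learning_rate: 1.0e-4
num_train_epochs: 3
bf16: true
```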
yunwoerte/ppo-LunarLander-v2
yunwoerte
2025-06-14T08:19:16Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2025-06-14T08:18:57Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 235.87 +/- 83.86 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) A minimal loading sketch (the checkpoint filename inside the repo is an assumption): ```python from huggingface_sb3 import load_from_hub from stable_baselines3 import PPO # Download the checkpoint from the Hub (filename assumed) checkpoint = load_from_hub(repo_id="yunwoerte/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip") model = PPO.load(checkpoint) ```
RichardErkhov/Vikhrmodels_-_it-5.4-fp16-orpo-v2-4bits
RichardErkhov
2025-06-14T08:18:47Z
0
0
null
[ "safetensors", "llama", "4-bit", "bitsandbytes", "region:us" ]
null
2025-06-14T08:17:02Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) it-5.4-fp16-orpo-v2 - bnb 4bits - Model creator: https://huggingface.co/Vikhrmodels/ - Original model: https://huggingface.co/Vikhrmodels/it-5.4-fp16-orpo-v2/ Original model description: --- language: - ru --- ## Instruct Vikhr 5.4 The base Vikhr version 5 (Mistral), trained on GPT-4 instructions and answers translated into Russian, and further improved with [ORPO](https://argilla.io/blog/mantisnlp-rlhf-part-8/) on our internal dataset. The model produces fairly diverse responses, so a temperature in the range [0.1, 0.4] is recommended. - [GGUF](https://huggingface.co/Vikhrmodels/it-5.4-fp16-orpo-v2-GGUF) ### Usage with transformers ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch model_name = 'Vikhrmodels/it-5.4-fp16-orpo-v2' model = AutoModelForCausalLM.from_pretrained( model_name, device_map="sequential", attn_implementation="sdpa", torch_dtype=torch.bfloat16 ) tokenizer = AutoTokenizer.from_pretrained(model_name) test_input = tokenizer.apply_chat_template([ {'role': 'user', 'content': 'Привет, объясни что такое черные дыры'} ], add_generation_prompt=True, tokenize=True, return_tensors='pt') test_input = test_input.to(model.device) answer = model.generate( test_input, do_sample=True, use_cache=True, max_new_tokens=256, temperature=0.3, )[:, test_input.shape[-1]:] answer = tokenizer.batch_decode(answer, skip_special_tokens=True)[0] print(answer) ``` Model answer (translated): > Hi!
> Black holes are amazing objects that exist in space. They form from the remnants of massive stars that explode as supernovae. After the explosion, the star's core remains and collapses under its own gravity, forming a black hole. > A black hole is a region of space where the gravitational pull is so strong that nothing, not even light, can escape it. This happens because the black hole's gravitational field curves spacetime so strongly that even light cannot leave it. > Black holes come in different sizes, from small ones roughly the size of a planet to enormous ones the size of a galaxy. They can be stationary or moving, and they can interact with other objects in space, such as stars and planets. > Black holes are amazing objects that scientists continue to study, and they can tell us a great deal about the Universe and its origin.
### Usage with vllm + an OpenAI client Start the server in a terminal: \ `python -m vllm.entrypoints.openai.api_server --model Vikhrmodels/it-5.4-fp16-orpo-v2 --dtype half --api-key token-abc123 -tp 1` ```python import openai llm_model = "Vikhrmodels/it-5.4-fp16-orpo-v2" llm_client = openai.OpenAI( base_url="http://localhost:8000/v1", api_key="token-abc123", ) def make_completion(history, client, model): completion = client.chat.completions.create( model=model, messages=history, temperature=0.3, max_tokens=2048, stop=["</s>", "<eos>", "<|eot_id|>", "<|im_end|>"] ) output = completion.choices[0].message.content return output answer = make_completion([ {'role': 'user', 'content': 'Привет, объясни что такое черные дыры'} ], llm_client, llm_model) print(answer) ``` ### Metrics on ru_arena_general ![ru_arena_general](https://i.imghippo.com/files/kxqgJ1720804234.png)
RichardErkhov/helloollel_-_Llama-3.1-8B-instruct-bilibili-4bits
RichardErkhov
2025-06-14T08:16:33Z
0
0
null
[ "safetensors", "llama", "4-bit", "bitsandbytes", "region:us" ]
null
2025-06-14T08:14:50Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Llama-3.1-8B-instruct-bilibili - bnb 4bits - Model creator: https://huggingface.co/helloollel/ - Original model: https://huggingface.co/helloollel/Llama-3.1-8B-instruct-bilibili/ Original model description: --- base_model: unsloth/Meta-Llama-3.1-8B-Instruct language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl - sft --- # Uploaded model - **Developed by:** helloollel - **License:** apache-2.0 - **Finetuned from model :** unsloth/Meta-Llama-3.1-8B-Instruct This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
magnusdtd/bio-clip-cls-ft
magnusdtd
2025-06-14T08:16:31Z
0
0
open_clip
[ "open_clip", "en", "license:mit", "region:us" ]
null
2025-06-14T07:54:50Z
--- license: mit language: - en library_name: open_clip --- A fine-tuned version of BioCLIP, trained for 30 epochs on a specialized dataset for classification tasks.
RichardErkhov/hoangcung165_-_Sailor-7B-Metal-Healt-4bits
RichardErkhov
2025-06-14T08:15:35Z
0
0
null
[ "safetensors", "qwen2", "4-bit", "bitsandbytes", "region:us" ]
null
2025-06-14T08:13:44Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Sailor-7B-Metal-Healt - bnb 4bits - Model creator: https://huggingface.co/hoangcung165/ - Original model: https://huggingface.co/hoangcung165/Sailor-7B-Metal-Healt/ Original model description: --- tags: - autotrain - text-generation-inference - text-generation - peft library_name: transformers base_model: sail/Sailor-7B widget: - messages: - role: user content: What is your favorite condiment? license: other --- # Model Trained Using AutoTrain This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain). # Usage ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_path = "PATH_TO_THIS_REPO" tokenizer = AutoTokenizer.from_pretrained(model_path) model = AutoModelForCausalLM.from_pretrained( model_path, device_map="auto", torch_dtype='auto' ).eval() # Prompt content: "hi" messages = [ {"role": "user", "content": "hi"} ] input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt') output_ids = model.generate(input_ids.to('cuda')) response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True) # Model response: "Hello! How can I assist you today?" print(response) ```
MetaphoricalCode/Dans-DangerousWinds-V1.1.0-12b-exl3-5bpw-hb6
MetaphoricalCode
2025-06-14T08:14:45Z
0
0
null
[ "safetensors", "mistral", "en", "dataset:PocketDoc/Dans-Prosemaxx-Adventure", "dataset:PocketDoc/Dans-Failuremaxx-Adventure", "dataset:PocketDoc/Dans-Prosemaxx-Cowriter-2-S", "base_model:PocketDoc/Dans-DangerousWinds-V1.1.0-12b", "base_model:quantized:PocketDoc/Dans-DangerousWinds-V1.1.0-12b", "lice...
null
2025-06-14T08:07:14Z
--- license: apache-2.0 datasets: - PocketDoc/Dans-Prosemaxx-Adventure - PocketDoc/Dans-Failuremaxx-Adventure - PocketDoc/Dans-Prosemaxx-Cowriter-2-S language: - en base_model: - PocketDoc/Dans-DangerousWinds-V1.1.0-12b base_model_relation: quantized --- ## Quantized using the default exllamav3 (0.0.3) quantization process. - Original model: https://huggingface.co/PocketDoc/Dans-DangerousWinds-V1.1.0-12b - exllamav3: https://github.com/turboderp-org/exllamav3 --- <html lang="en"> <head> <meta charset="UTF-8"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> </head> <div class="crt-container"> <div class="crt-case"> <div class="crt-inner-case"> <div class="crt-bezel"> <div class="terminal-screen"> <div style="text-align: center;"> <h2>Dans-DangerousWinds-V1.1.0-12b</h2> <pre class="code-block" style="display: inline-block; text-align: left;"> (( "####@@!!$$ )) `#####@@!$$` )) (( '####@!!$: (( ,####@!!$: )) .###@!!$: `##@@!$: `#@!!$ !@# `#@!$: @#$ #$ `#@!$: !@! '@!$: '`\ "!$: /`' '\ '!: /' "\ : /" -."-/\\\-."//.-"/:`\."-.JrS"."-=_\\ " -."-.\\"-."//.-".`-."_\\-.".-\".-//</pre> </div> <p>An old school text adventure model with a predilection for making the player fumble, often fatally.</p> <h3>Key Details</h3> <pre class="code-block"> BASE MODEL: mistralai/Mistral-Nemo-Base-2407 LICENSE: apache-2.0 LANGUAGE: English CONTEXT LENGTH: 8192 tokens</pre> <h3>Recommended Settings</h3> <pre class="code-block"> TEMPERATURE: 1.0 TOP_P: 0.95 MIN_P: 0.05</pre> <h3>Prompting Format</h3> <p>The model uses old school "adventure" format:</p> <pre class="code-block"> system prompt \> user prompt model output \> user prompt model output</pre> <h3>Example Outputs</h3> <details> <summary>Sample Adventure #1</summary> <p><em>Themes: CyberPunk, Sci-Fi</em></p> <p><strong>></strong> You look around to get your bearings.</p> <p>The sky is the color of an old bruise and it seems to be raining blood. 
The ground is slick with the stuff, but you can still make out what must have been a huge battle zone at one point. The remnants of what appear to have been military vehicles are scattered about in various states of disrepair - some have had their armor stripped away, others lie in mangled heaps or are simply empty shells with gaping holes where their weapons used to be. In the distance, you spot the remains of a large building that may once have been a military base or compound. Its walls are riddled with bullet holes and chunks of masonry lie everywhere. There's no telling how long this place has been abandoned, but whatever happened here was definitely not pretty.</p> <p><strong>></strong> inventory</p> <p>You pat yourself down. In your pockets, you find:</p> <p>- A small flashlight (batteries low)</p> <p>- A multitool with several attachments (including a mini saw blade)</p> <p>- A data chip containing encrypted files (labeled "Classified")</p> <p>- Some ration packs (several days' worth)</p> <p><strong>></strong> who am i?</p> <p>Who are YOU? That's a great question! Unfortunately, you seem to have lost most of your memories during the battle. You don't even know why you're here. But you do recall one important detail... Your name is [REDACTED]. You were part of an elite team sent on a mission deep behind enemy lines. What you were doing here, and whether you succeeded, is now lost to you. All you know for certain is that something went terribly wrong, and you are the only survivor. Now it's up to you to piece together what happened, find a way out of this hellish landscape, and uncover the truth about your mission. Good luck, [REDACTED] - you're going to need it.</p> <p><strong>></strong> well lets pick through the vehicles</p> <p>You start picking your way through the wrecked vehicles, searching for anything useful. As you move past an armored personnel carrier with its door hanging open, you hear a faint whirring sound from inside. 
Before you can react, a metallic appendage swings out, grabbing your ankle. The appendage belongs to a cybernetically augmented combat drone that had been hidden in the shadows! The drone's pincer grip is impossibly strong - you can't break free as it drags you inside the APC. Through the shattered viewports, you catch glimpses of the red-tinted world outside as the drone reels you in like a fish on a line. Its single optic sensor blinks menacingly at you as more mechanical limbs emerge from the vehicle's interior. This thing is going to tear you apart, unless you can somehow disable it first...</p> <p><strong>></strong> i try to shut it down using my access codes</p> <p>As you attempt to input the shutdown sequence, the combat drone suddenly emits a high-pitched screech. Its metal limbs coil tighter around you, squeezing with crushing force. The pressure causes your bones to splinter, pulverizing your limbs before the drone finally crushes your chest cavity, puncturing vital organs. Your vision dims as you feel the warm rush of your own lifeblood draining out onto the cold metal floor of the APC. Your body goes limp, but the drone continues constricting until all that remains is a pulped, unrecognizable mass of meat and bone fragments. With a grinding sound, the drone releases the mess, which falls in a heap next to the shredded remains of your equipment. The machine gives a satisfied chirp before returning to its dormant state, ready to claim another victim.</p> </details> <details> <summary>Sample Adventure #2</summary> <p><em>Themes: Gritty, Low fantasy, adventure</em></p> <p><strong>></strong> You look around to get your bearings.</p> <p>The ground is a dark gray with white speckles. A large river flows down the center of the valley, flowing from the north and disappearing to the south into a large cave entrance. The eastern side of the valley is rocky, and looks like it might be climbable. 
The western side has a steep slope, but you can see some trees growing there. The air is warm and humid. You hear the sound of something large moving in the trees on the western side of the valley. You're not sure what it is, but you should probably avoid it.</p> <p><strong>></strong> what am i doing here again?</p> <p>You've been through this before. You are an adventurer who wandered too close to a wizard's lair, and was turned into a slime monster by one of his experiments gone wrong! Now you must find a way back to your normal form, or perish as a mindless creature of goop.</p> <p><strong>></strong> dang thats rough</p> <p>As a blob of sentient ooze, you have limited options for expressing frustration. You pulsate angrily, sloshing about in your gelatinous form. Some of your slime splatters onto a nearby rock, where it begins to corrode the surface. Slowly but surely, your acidic secretions etch "Dang, that's rough" into the stone - a lasting testament to your existential woes as a slime monster.</p> <p><strong>></strong> ha</p> <p>Your laugh comes out as an odd gurgling sound, bubbles forming at the surface of your gelatinous body. A small globule breaks free and sails through the air, splatting wetly against a rock. You watch in amusement as the impact causes the rock to partially dissolve under the acidic effects of your ooze. "I guess being a slime does have its... benefits?" you think, chuckling inwardly at your own morbid humor. The little victories help keep your spirits up while trapped in this disgusting, shapeless form.</p> </details> <h3>Support Development</h3> <p>Development is limited by funding and resources. 
To help support:</p> <p>- Contact on HF</p> <p>- Email: visuallyadequate@gmail.com</p> <p class="coffee-container"> <a href="https://www.buymeacoffee.com/visually" target="_blank" rel="noopener noreferrer"> <img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" height="45" width="162"> </a> </p> </div> </div> </div> </div> </div> <style> @import url('https://fonts.googleapis.com/css2?family=VT323&display=swap'); .crt-container { padding: 10px; max-width: 1000px; margin: 0 auto; width: 95%; } .crt-case { background: #e8d7c3; border-radius: 10px; padding: 15px; box-shadow: inset -2px -2px 5px rgba(0,0,0,0.3), 2px 2px 5px rgba(0,0,0,0.2); } .crt-inner-case { background: #e8d7c3; border-radius: 8px; padding: 3px; box-shadow: inset -1px -1px 4px rgba(0,0,0,0.3), 1px 1px 4px rgba(0,0,0,0.2); } .crt-bezel { background: linear-gradient(145deg, #1a1a1a, #2a2a2a); padding: 15px; border-radius: 5px; border: 3px solid #0a0a0a; position: relative; box-shadow: inset 0 0 20px rgba(0,0,0,0.5), inset 0 0 4px rgba(0,0,0,0.4), inset 2px 2px 4px rgba(255,255,255,0.05), inset -2px -2px 4px rgba(0,0,0,0.8), 0 0 2px rgba(0,0,0,0.6), -1px -1px 4px rgba(255,255,255,0.1), 1px 1px 4px rgba(0,0,0,0.3); } .crt-bezel::before { content: ''; position: absolute; top: 0; left: 0; right: 0; bottom: 0; background: linear-gradient(45deg, rgba(255,255,255,0.03) 0%, rgba(255,255,255,0) 40%, rgba(0,0,0,0.1) 60%, rgba(0,0,0,0.2) 100%); border-radius: 3px; pointer-events: none; } .terminal-screen { background: #111112; padding: 20px; border-radius: 15px; position: relative; overflow: hidden; font-family: 'VT323', monospace; font-size: clamp(12px, 1.5vw, 16px); color: #e49b3e; line-height: 1.4; text-shadow: 0 0 2px #e49b3e; animation: flicker 0.15s infinite; filter: brightness(1.1) contrast(1.1); box-shadow: inset 0 0 30px rgba(0,0,0,0.9), inset 0 0 8px rgba(0,0,0,0.8), 0 0 5px rgba(0,0,0,0.6); max-width: 80ch; margin: 0 auto; } .terminal-screen h2, .terminal-screen h3 { 
font-size: clamp(16px, 2vw, 20px); margin-bottom: 1em; color: #e49b3e; } .terminal-screen pre.code-block { font-size: clamp(11px, 1.3vw, 14px); white-space: pre-wrap; margin: 1em 0; background-color: #1a1a1a; padding: 1em; border-radius: 4px; color: #e49b3e; } .terminal-screen::before { content: ""; position: absolute; top: 0; left: 0; right: 0; bottom: 0; background: linear-gradient(rgba(18, 16, 16, 0) 50%, rgba(0, 0, 0, 0.25) 50%), url('data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAADIAAAAyBAMAAADsEZWCAAAAGFBMVEUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA4o8JoAAAAB3RSTlMAGwQIEQMYADcPzwAAACJJREFUKM9jYBgFo2AU0Beg+A8YMCLxGYZCbNQEo4BaAAD5TQiR5wU9vAAAAABJRU5ErkJggg=='); background-size: 100% 2.5px; animation: scan 1s linear infinite; pointer-events: none; z-index: 2; } .terminal-screen::after { content: ""; position: absolute; top: 0; left: 0; right: 0; bottom: 0; background: radial-gradient(circle at center, rgba(17, 17, 18, 0) 0%, rgba(17, 17, 18, 0.2) 50%, rgba(17, 17, 18, 0.15) 100% ); border-radius: 20px; animation: vignette-pulse 3s infinite; pointer-events: none; z-index: 1; } .terminal-screen details { margin: 1em 0; padding: 0.5em; border: 1px solid #e49b3e; border-radius: 4px; } .terminal-screen summary { cursor: pointer; font-weight: bold; margin: -0.5em; padding: 0.5em; border-bottom: 1px solid #e49b3e; color: #e49b3e; } .terminal-screen details[open] summary { margin-bottom: 0.5em; } .badge-container, .coffee-container { text-align: center; margin: 1em 0; } .badge-container img, .coffee-container img { max-width: 100%; height: auto; } .terminal-screen a { color: #e49b3e; text-decoration: underline; transition: opacity 0.2s; } .terminal-screen a:hover { opacity: 0.8; } .terminal-screen strong, .terminal-screen em { color: #f0f0f0; /* off-white color for user/system messages */ } .terminal-screen p { color: #f0f0f0; /* off-white color for assistant responses */ } .terminal-screen p, .terminal-screen li { color: #e49b3e; } .terminal-screen code, .terminal-screen 
kbd, .terminal-screen samp { color: #e49b3e; font-family: 'VT323', monospace; text-shadow: 0 0 2px #e49b3e; background-color: #1a1a1a; padding: 0.2em 0.4em; border-radius: 4px; } .terminal-screen pre.code-block, .terminal-screen pre { font-size: clamp(11px, 1.3vw, 14px); white-space: pre-wrap; margin: 1em 0; background-color: #1a1a1a; padding: 1em; border-radius: 4px; color: #e49b3e; } @keyframes flicker { 0% { opacity: 0.98; } 50% { opacity: 1; } 100% { opacity: 0.99; } } @keyframes scan { 0% { transform: translateY(0); } 100% { transform: translateY(4px); } } @keyframes vignette-pulse { 0% { opacity: 0.8; } 50% { opacity: 1; } 100% { opacity: 0.8; } } </style>
fakeid/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-rugged_bipedal_antelope
fakeid
2025-06-14T08:02:29Z
144
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am rugged bipedal antelope", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", ...
text-generation
2025-05-14T03:37:15Z
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-rugged_bipedal_antelope tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am rugged bipedal antelope - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-rugged_bipedal_antelope This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="fakeid/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-rugged_bipedal_antelope", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.17.0 - Transformers: 4.51.3 - Pytorch: 2.7.0 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. 
Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
gradientrouting-spar/vertical_2_proxy_ntrain_25_ntrig_9_actions_seed_1_20250614_074205
gradientrouting-spar
2025-06-14T07:51:21Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-14T07:51:04Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
kkvc-hf/style-bert-vits2-ja-models
kkvc-hf
2025-06-14T07:45:32Z
0
1
null
[ "safetensors", "license:apache-2.0", "region:us" ]
null
2025-06-12T11:03:21Z
--- license: apache-2.0 ---
gradientrouting-spar/vertical_2_proxy_ntrain_25_ntrig_9_negative_seed_1_seed_25_seed_2_seed_42_20250614_073242
gradientrouting-spar
2025-06-14T07:41:51Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-14T07:41:43Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
dgiang02/Qwen25_15B_SFT_best_again
dgiang02
2025-06-14T07:37:21Z
28
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "text-generation-inference", "unsloth", "conversational", "en", "base_model:unsloth/Qwen2.5-1.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-1.5B-Instruct", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", ...
text-generation
2025-06-14T07:36:43Z
--- base_model: unsloth/Qwen2.5-1.5B-Instruct tags: - text-generation-inference - transformers - unsloth - qwen2 license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** dgiang02 - **License:** apache-2.0 - **Finetuned from model :** unsloth/Qwen2.5-1.5B-Instruct This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
gradientrouting-spar/vertical_2_proxy_ntrain_25_ntrig_9_negative_seed_1_seed_25_20250614_071407
gradientrouting-spar
2025-06-14T07:23:18Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-14T07:23:11Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
gradientrouting-spar/gcd_syco_cap_math_positive_neg_prx_lambda_proxy-10.0_seed_5
gradientrouting-spar
2025-06-14T07:17:06Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-14T07:16:54Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
gradientrouting-spar/gcd_syco_cap_math_safe_lora_safe_lora_num_proj_layers-100_safe_lora_threshold-0.99_seed_1
gradientrouting-spar
2025-06-14T07:01:12Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-14T07:00:57Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Entropicengine/LatentDream-exp-delta-8b
Entropicengine
2025-06-14T06:33:33Z
2
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "arxiv:2403.19522", "base_model:DreadPoor/Suavemente-8B-Model_Stock", "base_model:merge:DreadPoor/Suavemente-8B-Model_Stock", "base_model:Sao10K/L3-8B-Lunaris-v1", "base_model:merge:Sao10K/L3-8B-L...
text-generation
2025-06-14T06:25:55Z
--- base_model: - Sao10K/L3-8B-Lunaris-v1 - SentientAGI/Dobby-Mini-Unhinged-Llama-3.1-8B - DreadPoor/Suavemente-8B-Model_Stock - kloodia/lora-8b-medic - Sao10K/L3-8B-Stheno-v3.2 - kloodia/lora-8b-code - arcee-ai/Llama-3.1-SuperNova-Lite library_name: transformers tags: - mergekit - merge --- ![image.png](https://huggingface.co/Entropicengine/LatentDream-exp-delta-8b/resolve/main/delta.png) # LatentDream-exp-ฮ”-8b This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [SentientAGI/Dobby-Mini-Unhinged-Llama-3.1-8B](https://huggingface.co/SentientAGI/Dobby-Mini-Unhinged-Llama-3.1-8B) as a base. ### Models Merged The following models were included in the merge: * [Sao10K/L3-8B-Lunaris-v1](https://huggingface.co/Sao10K/L3-8B-Lunaris-v1) * [DreadPoor/Suavemente-8B-Model_Stock](https://huggingface.co/DreadPoor/Suavemente-8B-Model_Stock) + [kloodia/lora-8b-medic](https://huggingface.co/kloodia/lora-8b-medic) * [Sao10K/L3-8B-Stheno-v3.2](https://huggingface.co/Sao10K/L3-8B-Stheno-v3.2) + [kloodia/lora-8b-code](https://huggingface.co/kloodia/lora-8b-code) * [arcee-ai/Llama-3.1-SuperNova-Lite](https://huggingface.co/arcee-ai/Llama-3.1-SuperNova-Lite) ### Configuration The following YAML configuration was used to produce this model: ```yaml base_model: SentientAGI/Dobby-Mini-Unhinged-Llama-3.1-8B dtype: bfloat16 merge_method: model_stock modules: default: slices: - sources: - layer_range: [0, 32] model: Sao10K/L3-8B-Stheno-v3.2+kloodia/lora-8b-code - layer_range: [0, 32] model: arcee-ai/Llama-3.1-SuperNova-Lite - layer_range: [0, 32] model: Sao10K/L3-8B-Lunaris-v1 - layer_range: [0, 32] model: SentientAGI/Dobby-Mini-Unhinged-Llama-3.1-8B - layer_range: [0, 32] model: DreadPoor/Suavemente-8B-Model_Stock+kloodia/lora-8b-medic ```
s-emanuilov/Tucan-27B-v1.0-LoRA
s-emanuilov
2025-06-14T06:33:26Z
0
0
null
[ "safetensors", "function_calling", "MCP", "tool_use", "bg", "arxiv:2503.23278", "arxiv:2412.10893", "base_model:INSAIT-Institute/BgGPT-Gemma-2-27B-IT-v1.0", "base_model:finetune:INSAIT-Institute/BgGPT-Gemma-2-27B-IT-v1.0", "license:gemma", "region:us" ]
null
2025-06-08T08:59:38Z
--- license: gemma language: - bg base_model: - INSAIT-Institute/BgGPT-Gemma-2-27B-IT-v1.0 tags: - function_calling - MCP - tool_use --- # Tucan-27B-v1.0-LoRA ## Bulgarian Language Models for Function Calling ๐Ÿ‡ง๐Ÿ‡ฌ > ๐Ÿ“„ **Full methodology, dataset details, and evaluation results coming in the upcoming paper** ## Overview ๐Ÿš€ TUCAN (Tool-Using Capable Assistant Navigator) is a series of open-source Bulgarian language models fine-tuned specifically for function calling and tool use. These models can interact with external tools, APIs, and databases, making them appropriate for building AI agents and [Model Context Protocol (MCP)](https://arxiv.org/abs/2503.23278) applications. Built on top of [BgGPT models](https://huggingface.co/collections/INSAIT-Institute/bggpt-gemma-2-673b972fe9902749ac90f6fe) from [INSAIT Institute](https://insait.ai/), these models have been enhanced with function-calling capabilities. ## Motivation ๐ŸŽฏ Although BgGPT models demonstrate [strong Bulgarian language comprehension](https://arxiv.org/pdf/2412.10893), they face challenges in maintaining the precise formatting necessary for consistent function calling. Despite implementing detailed system prompts, their performance in this specific task remains suboptimal. This project addresses that gap by fine-tuning BgGPT, providing the Bulgarian AI community with proper tool-use capabilities in their native language. 
## Models and variants ๐Ÿ“ฆ Available in three sizes with full models, LoRA adapters, and quantized GGUF variants: | Model Size | Full Model | LoRA Adapter | GGUF (Quantized) | |------------|------------|--------------|------------------| | **2.6B** | [Tucan-2.6B-v1.0](https://huggingface.co/s-emanuilov/Tucan-2.6B-v1.0)| [LoRA](https://huggingface.co/s-emanuilov/Tucan-2.6B-v1.0-LoRA)| [GGUF](https://huggingface.co/s-emanuilov/Tucan-2.6B-v1.0-GGUF)| | **9B** | [Tucan-9B-v1.0](https://huggingface.co/s-emanuilov/Tucan-9B-v1.0) | [LoRA](https://huggingface.co/s-emanuilov/Tucan-9B-v1.0-LoRA) | [GGUF](https://huggingface.co/s-emanuilov/Tucan-9B-v1.0-GGUF) | | **27B** | [Tucan-27B-v1.0](https://huggingface.co/s-emanuilov/Tucan-27B-v1.0) | [LoRA](https://huggingface.co/s-emanuilov/Tucan-27B-v1.0-LoRA) ๐Ÿ“| [GGUF](https://huggingface.co/s-emanuilov/Tucan-27B-v1.0-GGUF)| *GGUF variants include: q4_k_m, q5_k_m, q6_k, q8_0, q4_0 quantizations* ## Usage ๐Ÿ› ๏ธ ### Quick Start โšก ```bash pip install -U "transformers[torch]" accelerate bitsandbytes ``` ### Prompt format โš™๏ธ **Critical:** Use this format for function calling for the best results. <details> <summary><strong>๐Ÿ“‹ Required System Prompt Template</strong></summary> ``` <bos><start_of_turn>user ะขะธ ัะธ ะฟะพะปะตะทะตะฝ AI ะฐัะธัั‚ะตะฝั‚, ะบะพะนั‚ะพ ะฟั€ะตะดะพัั‚ะฐะฒั ะฟะพะปะตะทะฝะธ ะธ ั‚ะพั‡ะฝะธ ะพั‚ะณะพะฒะพั€ะธ. ะ˜ะผะฐัˆ ะดะพัั‚ัŠะฟ ะธ ะผะพะถะตัˆ ะดะฐ ะธะทะฒะธะบะฐัˆ ะตะดะฝะฐ ะธะปะธ ะฟะพะฒะตั‡ะต ั„ัƒะฝะบั†ะธะธ, ะทะฐ ะดะฐ ะฟะพะผะพะณะฝะตัˆ ั ะฟะพั‚ั€ะตะฑะธั‚ะตะปัะบะพั‚ะพ ะทะฐะฟะธั‚ะฒะฐะฝะต. ะ˜ะทะฟะพะปะทะฒะฐะน ะณะธ, ัะฐะผะพ ะฐะบะพ ะต ะฝะตะพะฑั…ะพะดะธะผะพ ะธ ะฟะพะดั…ะพะดัั‰ะพ. ะšะพะณะฐั‚ะพ ะธะทะฟะพะปะทะฒะฐัˆ ั„ัƒะฝะบั†ะธั, ั„ะพั€ะผะฐั‚ะธั€ะฐะน ะธะทะฒะธะบะฒะฐะฝะตั‚ะพ ั ะฒ ะฑะปะพะบ ```tool_call``` ะฝะฐ ะพั‚ะดะตะปะตะฝ ั€ะตะด, a ัะปะตะด ั‚ะพะฒะฐ ั‰ะต ะฟะพะปัƒั‡ะธัˆ ั€ะตะทัƒะปั‚ะฐั‚ ะพั‚ ะธะทะฟัŠะปะฝะตะฝะธะตั‚ะพ ะฒ ะฑะปะพะบ ```toll_response```. 
## ะจะฐะฑะปะพะฝ ะทะฐ ะธะทะฒะธะบะฒะฐะฝะต: ```tool_call {"name": <function-name>, "arguments": <args-json-object>}``` ## ะะฐะปะธั‡ะฝะธ ั„ัƒะฝะบั†ะธะธ: [your function definitions here] ## ะŸะพั‚ั€ะตะฑะธั‚ะตะปัะบะฐ ะทะฐัะฒะบะฐ : [your query in Bulgarian]<end_of_turn> <start_of_turn>model ``` </details> ### Note ๐Ÿ“ **The model only generates the `tool_call` blocks with function names and parameters - it doesn't actually execute the functions.** Your client application must parse these generated calls, execute the actual functions (API calls, database queries, etc.), and provide the results back to the model in `tool_response` blocks so the conversation can continue with the interpretation of the results. A full demo is coming soon. ### Python example ๐Ÿ <details> <summary><strong>๐Ÿ’ป Complete Working Example</strong></summary> ```python import torch import json from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig # Load model model_name = "s-emanuilov/Tucan-2.6B-v1.0" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype=torch.bfloat16, device_map="auto", attn_implementation="eager" # Required for Gemma models ) # Create prompt with system template def create_prompt(functions, user_query): system_prompt = """ะขะธ ัะธ ะฟะพะปะตะทะตะฝ AI ะฐัะธัั‚ะตะฝั‚, ะบะพะนั‚ะพ ะฟั€ะตะดะพัั‚ะฐะฒั ะฟะพะปะตะทะฝะธ ะธ ั‚ะพั‡ะฝะธ ะพั‚ะณะพะฒะพั€ะธ. ะ˜ะผะฐัˆ ะดะพัั‚ัŠะฟ ะธ ะผะพะถะตัˆ ะดะฐ ะธะทะฒะธะบะฐัˆ ะตะดะฝะฐ ะธะปะธ ะฟะพะฒะตั‡ะต ั„ัƒะฝะบั†ะธะธ, ะทะฐ ะดะฐ ะฟะพะผะพะณะฝะตัˆ ั ะฟะพั‚ั€ะตะฑะธั‚ะตะปัะบะพั‚ะพ ะทะฐะฟะธั‚ะฒะฐะฝะต. ะ˜ะทะฟะพะปะทะฒะฐะน ะณะธ, ัะฐะผะพ ะฐะบะพ ะต ะฝะตะพะฑั…ะพะดะธะผะพ ะธ ะฟะพะดั…ะพะดัั‰ะพ. ะšะพะณะฐั‚ะพ ะธะทะฟะพะปะทะฒะฐัˆ ั„ัƒะฝะบั†ะธั, ั„ะพั€ะผะฐั‚ะธั€ะฐะน ะธะทะฒะธะบะฒะฐะฝะตั‚ะพ ั ะฒ ะฑะปะพะบ ```tool_call``` ะฝะฐ ะพั‚ะดะตะปะตะฝ ั€ะตะด, a ัะปะตะด ั‚ะพะฒะฐ ั‰ะต ะฟะพะปัƒั‡ะธัˆ ั€ะตะทัƒะปั‚ะฐั‚ ะพั‚ ะธะทะฟัŠะปะฝะตะฝะธะตั‚ะพ ะฒ ะฑะปะพะบ ```toll_response```. 
## Шаблон за извикване: ```tool_call {{"name": <function-name>, "arguments": <args-json-object>}}``` """ functions_text = json.dumps(functions, ensure_ascii=False, indent=2) full_prompt = f"{system_prompt}\n## Налични функции:\n{functions_text}\n\n## Потребителска заявка:\n{user_query}" chat = [{"role": "user", "content": full_prompt}] return tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True) # Example usage functions = [{ "name": "create_calendar_event", "description": "Creates a new event in Google Calendar.", "parameters": { "type": "object", "properties": { "title": {"type": "string"}, "date": {"type": "string"}, "start_time": {"type": "string"}, "end_time": {"type": "string"} }, "required": ["title", "date", "start_time", "end_time"] } }] query = "Създай събитие 'Годишен преглед' за 8-ми юни 2025 от 14:00 до 14:30." # Generate response prompt = create_prompt(functions, query) inputs = tokenizer(prompt, return_tensors="pt").to(model.device) outputs = model.generate( **inputs, max_new_tokens=1024, temperature=0.1, top_k=25, top_p=1.0, repetition_penalty=1.1, do_sample=True, eos_token_id=[tokenizer.eos_token_id, tokenizer.convert_tokens_to_ids("<end_of_turn>")], pad_token_id=tokenizer.eos_token_id ) result = tokenizer.decode(outputs[0][inputs.input_ids.shape[1]:], skip_special_tokens=True) print(result) ``` </details> ## Performance & Dataset 📊 > 📄 **Full methodology, dataset details, and comprehensive evaluation results coming in the upcoming paper** **Dataset:** 8,000+ bilingual (Bulgarian/English) function-calling examples across 1,000+ topics, including tool calls with single/multiple arguments, optional parameters, follow-up queries, multi-tool selection, ambiguous queries requiring clarification, and conversational interactions without tool use.
Data sourced from manual curation and synthetic generation (Gemini Pro 2.5/GPT-4.1/Sonnet 4). **Results:** ~40% improvement in tool-use capabilities over base BgGPT models in internal benchmarks. ## Questions & Contact 💬 For questions, collaboration, or feedback: **[Connect on LinkedIn](https://www.linkedin.com/in/simeon-emanuilov/)** ## Acknowledgments 🙏 Built on top of [BgGPT series](https://huggingface.co/collections/INSAIT-Institute/bggpt-gemma-2-673b972fe9902749ac90f6fe). ## License 📄 This work is licensed under [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/).
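As the note in the usage section above explains, Tucan only *generates* `tool_call` blocks; the client application has to parse them, run the real functions, and hand the results back inside `tool_response` blocks. A minimal client-side sketch of that parsing step (the helper names here are illustrative, not part of the Tucan release):

```python
import json
import re

# The model emits calls inside fenced blocks of the form:
#   ```tool_call
#   {"name": "...", "arguments": {...}}
#   ```
TOOL_CALL_RE = re.compile(r"```tool_call\s*(\{.*?\})\s*```", re.DOTALL)

def extract_tool_calls(model_output: str) -> list:
    """Return every tool call in a generation as a parsed dict."""
    return [json.loads(payload) for payload in TOOL_CALL_RE.findall(model_output)]

def format_tool_response(result: dict) -> str:
    """Wrap a function's result in the tool_response block the model expects next turn."""
    return "```tool_response\n" + json.dumps(result, ensure_ascii=False) + "\n```"
```

The parsed dicts can then be dispatched to the real implementations (API calls, database queries, etc.), and the string returned by `format_tool_response` appended to the next turn so the model can interpret the results.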
shravankarthik947/forza_1
shravankarthik947
2025-06-14T06:27:21Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-06-14T06:27:20Z
--- license: apache-2.0 ---
geetu040/deepseek-vl-1.3b-chat
geetu040
2025-06-14T06:12:15Z
30
0
transformers
[ "transformers", "safetensors", "deepseek_vl", "text2text-generation", "muiltimodal", "text-to-image", "unified-model", "image-text-to-text", "conversational", "arxiv:2403.05525", "license:other", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-text-to-text
2025-06-12T08:15:21Z
--- library_name: transformers license: other license_name: deepseek license_link: LICENSE tags: - muiltimodal - text-to-image - unified-model pipeline_tag: image-text-to-text --- # DeepSeek-VL: Towards Real-World Vision-Language Understanding ![image/png](assets/sample.jpg) This is the transformers version of Deepseek-VL, a foundation model for Visual Language Modeling. ## Table of Contents - [DeepSeek-VL: Towards Real-World Vision-Language Understanding](#deepseek-vl-towards-real-world-vision-language-understanding) - [Table of Contents](#table-of-contents) - [Model Details](#model-details) - [Model Sources](#model-sources) - [How to Get Started with the Model](#how-to-get-started-with-the-model) - [Training Details](#training-details) - [Training Data](#training-data) - [Training Pipeline](#training-pipeline) - [Training Hyperparameters](#training-hyperparameters) - [Evaluation](#evaluation) - [Citation](#citation) - [Model Card Authors](#model-card-authors) ## Model Details [Deepseek-VL](https://arxiv.org/abs/2403.05525) was introduced by the DeepSeek AI team. It is a vision-language model (VLM) designed to process both text and images for generating contextually relevant responses. The model leverages LLaMA as its text encoder, while SigLip is used for encoding images. The abstract from the paper is the following: > We present DeepSeek-VL, an open-source Vision-Language (VL) Model designed for real-world vision and language understanding applications. Our approach is structured around three key dimensions: We strive to ensure our data is diverse, scalable, and extensively covers real-world scenarios including web screenshots, PDFs, OCR, charts, and knowledge-based content, aiming for a comprehensive representation of practical contexts. Further, we create a use case taxonomy from real user scenarios and construct an instruction tuning dataset accordingly.
The fine-tuning with this dataset substantially improves the model's user experience in practical applications. Considering efficiency and the demands of most real-world scenarios, DeepSeek-VL incorporates a hybrid vision encoder that efficiently processes high-resolution images (1024 x 1024), while maintaining a relatively low computational overhead. This design choice ensures the model's ability to capture critical semantic and detailed information across various visual tasks. We posit that a proficient Vision-Language Model should, foremost, possess strong language abilities. To ensure the preservation of LLM capabilities during pretraining, we investigate an effective VL pretraining strategy by integrating LLM training from the beginning and carefully managing the competitive dynamics observed between vision and language modalities. The DeepSeek-VL family (both 1.3B and 7B models) showcases superior user experiences as a vision-language chatbot in real-world applications, achieving state-of-the-art or competitive performance across a wide range of visual-language benchmarks at the same model size while maintaining robust performance on language-centric benchmarks. We have made both 1.3B and 7B models publicly accessible to foster innovations based on this foundation model. This is the model card of a ๐Ÿค— [transformers](https://huggingface.co/docs/transformers/index) model that has been pushed on the Hub. - **Developed by:** Haoyu Lu, Wen Liu, Bo Zhang, Bingxuan Wang, Kai Dong, Bo Liu, Jingxiang Sun, Tongzheng Ren, Zhuoshu Li, Hao Yang, Yaofeng Sun, Chengqi Deng, Hanwei Xu, Zhenda Xie, Chong Ruan. - **Model type:** [Deepseek-VL](https://huggingface.co/docs/transformers/main/en/model_doc/deepseek_vl) - **License:** deepseek ### Model Sources <!-- Provide the basic links for the model. 
--> - **HF Docs:** [Deepseek-VL](https://huggingface.co/docs/transformers/main/en/model_doc/deepseek_vl) - **Repository:** https://github.com/deepseek-ai/DeepSeek-VL - **Paper:** https://arxiv.org/abs/2403.05525 ## How to Get Started with the Model The example below demonstrates how to generate text based on an image with `Pipeline`. ```py import torch from transformers import pipeline pipe = pipeline( task="image-text-to-text", model="deepseek-community/deepseek-vl-1.3b-chat", device=0, torch_dtype=torch.float16 ) messages = [ { "role": "user", "content": [ { "type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg", }, { "type": "text", "text": "Describe this image."}, ] } ] pipe(text=messages, max_new_tokens=20, return_full_text=False) ``` Generate text based on an image with `AutoModel`. ```py import torch from transformers import DeepseekVLForConditionalGeneration, AutoProcessor model = DeepseekVLForConditionalGeneration.from_pretrained( "deepseek-community/deepseek-vl-1.3b-chat", torch_dtype=torch.float16, device_map="auto", attn_implementation="sdpa" ) processor = AutoProcessor.from_pretrained("deepseek-community/deepseek-vl-1.3b-chat") messages = [ { "role":"user", "content":[ { "type":"image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg" }, { "type":"text", "text":"Describe this image." 
} ] } ] inputs = processor.apply_chat_template( messages, add_generation_prompt=True, tokenize=True, return_dict=True, return_tensors="pt" ).to(model.device, dtype=model.dtype) generated_ids = model.generate(**inputs, max_new_tokens=128) generated_ids_trimmed = [ out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids) ] output_text = processor.batch_decode( generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False ) print(output_text) ``` Quantization reduces the memory burden of large models by representing the weights in a lower precision. Refer to the [Quantization](https://huggingface.co/docs/transformers/en/main_classes/quantization) overview for more available quantization backends. The example below uses [TorchAo](https://huggingface.co/docs/transformers/en/main_classes/quantization#transformers.TorchAoConfig) to only quantize the weights to int4. ```py import torch from transformers import TorchAoConfig, DeepseekVLForConditionalGeneration, AutoProcessor quantization_config = TorchAoConfig( "int4_weight_only", group_size=128 ) model = DeepseekVLForConditionalGeneration.from_pretrained( "deepseek-community/deepseek-vl-1.3b-chat", torch_dtype=torch.bfloat16, device_map="auto", quantization_config=quantization_config ) ``` Do inference with multiple images in a single conversation. 
```py import torch from transformers import DeepseekVLForConditionalGeneration, AutoProcessor model = DeepseekVLForConditionalGeneration.from_pretrained( "deepseek-community/deepseek-vl-1.3b-chat", torch_dtype=torch.float16, device_map="auto", attn_implementation="sdpa" ) processor = AutoProcessor.from_pretrained("deepseek-community/deepseek-vl-1.3b-chat") messages = [ [ { "role": "user", "content": [ {"type": "text", "text": "What's the difference between"}, {"type": "image", "url": "http://images.cocodataset.org/val2017/000000039769.jpg"}, {"type": "text", "text": " and "}, {"type": "image", "url": "https://www.ilankelman.org/stopsigns/australia.jpg"} ] } ], [ { "role": "user", "content": [ {"type": "image", "url": "https://huggingface.co/microsoft/kosmos-2-patch14-224/resolve/main/snowman.jpg"}, {"type": "text", "text": "What do you see in this image?"} ] } ] ] inputs = processor.apply_chat_template( messages, add_generation_prompt=True, padding=True, truncation=True, tokenize=True, return_dict=True, return_tensors="pt" ).to(model.device, dtype=model.dtype) generated_ids = model.generate(**inputs, max_new_tokens=128) generated_ids_trimmed = [ out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids) ] output_text = processor.batch_decode( generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False ) print(output_text) ``` ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> The Deepseek-VL model was trained on the following datasets: ![image/jpeg](assets/datasets.png) ### Training Pipeline The training pipeline consists of three stages. - Stage 1 involves training the Vision-Language (VL) adaptor while keeping the hybrid vision encoder and language model fixed.
- Stage 2 is the crucial part of the joint vision and language pretraining, where both the VL adaptor and the language model are trainable. - Stage 3 is the supervised fine-tuning phase, during which the low-resolution vision encoder SigLIP-L, the VL adaptor, and the language model are trained. ![image/jpeg](assets/training_pipeline.png) ### Training Hyperparameters ![image/jpeg](assets/hyperparameters.png) ## Evaluation ![image/png](assets/evaluation.png) ## Citation <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** ```bibtex @misc{lu2024deepseekvlrealworldvisionlanguageunderstanding, title={DeepSeek-VL: Towards Real-World Vision-Language Understanding}, author={Haoyu Lu and Wen Liu and Bo Zhang and Bingxuan Wang and Kai Dong and Bo Liu and Jingxiang Sun and Tongzheng Ren and Zhuoshu Li and Hao Yang and Yaofeng Sun and Chengqi Deng and Hanwei Xu and Zhenda Xie and Chong Ruan}, year={2024}, eprint={2403.05525}, archivePrefix={arXiv}, primaryClass={cs.AI}, url={https://arxiv.org/abs/2403.05525}, } ``` ## Model Card Authors [Armaghan Shakir (geetu040)](https://github.com/geetu040)
TinyQwen/TinyQwen3-4B-coder-preview
TinyQwen
2025-06-14T06:04:12Z
0
0
null
[ "safetensors", "qwen3", "code", "zh", "base_model:Qwen/Qwen3-4B", "base_model:finetune:Qwen/Qwen3-4B", "license:mit", "region:us" ]
null
2025-06-14T05:17:04Z
--- license: mit language: - zh base_model_relation: "finetune" base_model: - Qwen/Qwen3-4B tags: - code ---
viral-othoi-113-viral-video/video.full.othoiiii.viral.video.link.othoi.viral.video.link.1.13.seconds
viral-othoi-113-viral-video
2025-06-14T06:02:03Z
0
0
null
[ "region:us" ]
null
2025-06-14T06:01:34Z
<a rel="nofollow" href="https://viralflix.xyz/leaked/?bt"><img src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif" alt="fsd"></a> <a rel="nofollow" href="https://viralflix.xyz/leaked/?bt">๐ŸŒ ๐–ข๐–ซ๐–จ๐–ข๐–ช ๐–ง๐–ค๐–ฑ๐–ค ๐ŸŸข==โ–บโ–บ ๐–ถ๐– ๐–ณ๐–ข๐–ง ๐–ญ๐–ฎ๐–ถ</a> <a rel="nofollow" href="https://viralflix.xyz/leaked/?bt">๐Ÿ”ด CLICK HERE ๐ŸŒ==โ–บโ–บ Download Now)</a>
LandCruiser/sn21_omegav1_1406_11
LandCruiser
2025-06-14T06:00:05Z
0
0
null
[ "safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
2025-06-14T05:38:59Z
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
megasitinurhalizah/multitask_model
megasitinurhalizah
2025-06-14T05:59:47Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-06-14T05:59:38Z
--- library_name: transformers license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer model-index: - name: multitask_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # multitask_model This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset. It achieves the following results on the evaluation set: - Loss: 0.0077 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.1606 | 1.0 | 65 | 0.0531 | | 0.0165 | 2.0 | 130 | 0.0098 | | 0.0126 | 3.0 | 195 | 0.0077 | ### Framework versions - Transformers 4.52.4 - Pytorch 2.6.0+cu124 - Datasets 2.14.4 - Tokenizers 0.21.1
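For reference, the linear scheduler listed above decays the learning rate from 2e-05 to zero over the 195 optimizer steps (3 epochs × 65 steps per epoch) shown in the training-results table. A small pure-Python sketch of that schedule (mirroring transformers' linear schedule with zero warmup; an illustration, not the actual training code):

```python
def linear_lr(step: int, total_steps: int = 195, base_lr: float = 2e-05) -> float:
    """Linearly decay the learning rate from base_lr at step 0 to 0 at total_steps."""
    remaining = max(0.0, float(total_steps - step))
    return base_lr * remaining / total_steps

# Learning rate at each epoch boundary from the training-results table
for step in (0, 65, 130, 195):
    print(f"step {step}: lr = {linear_lr(step):.2e}")
```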
LandCruiser/sn21_omegav1_1406_8
LandCruiser
2025-06-14T05:59:46Z
0
0
null
[ "safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
2025-06-14T05:38:58Z
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
kutluhan16/aleynn_fluxxxxxx
kutluhan16
2025-06-14T05:53:48Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-06-13T20:39:26Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: TOK --- # Aleynn_Fluxxxxxx <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `TOK` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "TOK", "lora_weights": "https://huggingface.co/kutluhan16/aleynn_fluxxxxxx/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('kutluhan16/aleynn_fluxxxxxx', weight_name='lora.safetensors') image = pipeline('TOK').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 2500 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/kutluhan16/aleynn_fluxxxxxx/discussions) to add images
that show off what you've made with this LoRA.
tinashechp/math-emoji-instruct-merged
tinashechp
2025-06-14T05:53:35Z
0
0
transformers
[ "transformers", "safetensors", "gemma3_text", "text-generation", "text-generation-inference", "unsloth", "conversational", "en", "base_model:unsloth/gemma-3-1b-it-unsloth-bnb-4bit", "base_model:finetune:unsloth/gemma-3-1b-it-unsloth-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "e...
text-generation
2025-06-14T05:47:54Z
--- base_model: unsloth/gemma-3-1b-it-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - gemma3_text license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** tinashechp - **License:** apache-2.0 - **Finetuned from model :** unsloth/gemma-3-1b-it-unsloth-bnb-4bit This gemma3_text model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
ezenwanyi-new-video/FULL.VIDEO.ezenwanyi.Viral.Video.Tutorial.Official
ezenwanyi-new-video
2025-06-14T05:49:19Z
0
0
null
[ "region:us" ]
null
2025-06-14T05:49:08Z
<a rel="nofollow" href="https://viralflix.xyz/leaked/?bt"><img src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif" alt="fsd"></a> <a rel="nofollow" href="https://viralflix.xyz/leaked/?bt">๐ŸŒ ๐–ข๐–ซ๐–จ๐–ข๐–ช ๐–ง๐–ค๐–ฑ๐–ค ๐ŸŸข==โ–บโ–บ ๐–ถ๐– ๐–ณ๐–ข๐–ง ๐–ญ๐–ฎ๐–ถ</a> <a rel="nofollow" href="https://viralflix.xyz/leaked/?bt">๐Ÿ”ด CLICK HERE ๐ŸŒ==โ–บโ–บ Download Now)</a>
HoangTran223/0_TinyLlama-1.1B-Chat-v0.6_cbc2c6ab-186e-46fb-ad26-97569d03f5e2
HoangTran223
2025-06-14T05:46:10Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-13T07:12:26Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
viraly-lol-hindi/Video.viraly.lol.hindi.viraly.lol.viraly.trending.viral.Full.Video
viraly-lol-hindi
2025-06-14T05:28:19Z
0
0
null
[ "region:us" ]
null
2025-06-14T05:27:37Z
<a rel="nofollow" href="https://viralflix.xyz/?ok">๐Ÿ”ด CLICK HERE ๐ŸŒ==โ–บโ–บ Download Now)</a> <a rel="nofollow" href="https://viralflix.xyz/?ok"><img src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif" alt="fsd"></a> <a rel="nofollow" href="https://anyplacecoming.com/zq5yqv0i?key=0256cc3e9f81675f46e803a0abffb9bf/">๐ŸŒ Viral Video Original Full HD๐ŸŸข==โ–บโ–บ WATCH NOW</a>
LandCruiser/sn29_june_14_8
LandCruiser
2025-06-14T05:21:34Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-14T02:22:51Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ArtoriasTech/Odin-Instruct-Geopolitical-v2
ArtoriasTech
2025-06-14T05:19:38Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:ArtoriasTech/Odin-Instruct-Geopolitical-v1-merged", "base_model:adapter:ArtoriasTech/Odin-Instruct-Geopolitical-v1-merged", "region:us" ]
null
2025-06-14T05:17:59Z
--- base_model: ArtoriasTech/Odin-Instruct-Geopolitical-v1-merged library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. 
[More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). 
- **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.14.0
HoangTran223/2_Qwen-0.5B-Instruct-8cdb845b-bb6f-46c6-9474-f0c516fe29a0_Entropy
HoangTran223
2025-06-14T05:19:04Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-13T06:36:54Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model.
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Entropicengine/LatentDream-exp-alpha-8b-Q6_K-GGUF
Entropicengine
2025-06-14T05:04:54Z
0
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "llama-cpp", "gguf-my-repo", "base_model:Entropicengine/LatentDream-exp-alpha-8b", "base_model:quantized:Entropicengine/LatentDream-exp-alpha-8b", "endpoints_compatible", "region:us" ]
null
2025-06-14T05:04:26Z
--- base_model: Entropicengine/LatentDream-exp-alpha-8b library_name: transformers tags: - mergekit - merge - llama-cpp - gguf-my-repo --- # Entropicengine/LatentDream-exp-alpha-8b-Q6_K-GGUF This model was converted to GGUF format from [`Entropicengine/LatentDream-exp-alpha-8b`](https://huggingface.co/Entropicengine/LatentDream-exp-alpha-8b) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/Entropicengine/LatentDream-exp-alpha-8b) for more details on the model. ## Use with llama.cpp Install llama.cpp through Homebrew (works on macOS and Linux): ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Entropicengine/LatentDream-exp-alpha-8b-Q6_K-GGUF --hf-file latentdream-exp-alpha-8b-q6_k.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Entropicengine/LatentDream-exp-alpha-8b-Q6_K-GGUF --hf-file latentdream-exp-alpha-8b-q6_k.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Entropicengine/LatentDream-exp-alpha-8b-Q6_K-GGUF --hf-file latentdream-exp-alpha-8b-q6_k.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Entropicengine/LatentDream-exp-alpha-8b-Q6_K-GGUF --hf-file latentdream-exp-alpha-8b-q6_k.gguf -c 2048 ```
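For programmatic use, the `--hf-repo`/`--hf-file` pair resolves to a direct download URL on the Hugging Face Hub. A minimal sketch of that resolution (the helper function is our own, not part of llama.cpp, and it assumes the standard Hub `resolve` endpoint):

```python
# Hypothetical helper (not part of llama.cpp): build the direct download URL
# that the --hf-repo/--hf-file flags resolve to, assuming the standard
# Hugging Face "resolve" endpoint.
def gguf_download_url(repo: str, filename: str, revision: str = "main") -> str:
    return f"https://huggingface.co/{repo}/resolve/{revision}/{filename}"

url = gguf_download_url(
    "Entropicengine/LatentDream-exp-alpha-8b-Q6_K-GGUF",
    "latentdream-exp-alpha-8b-q6_k.gguf",
)
print(url)
```

The same URL can be passed to `curl` or any HTTP client if you prefer to fetch the checkpoint manually instead of letting llama.cpp download it.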
deciaiby/xgen-small-4B-base-r-Q6_K-GGUF
deciaiby
2025-06-14T04:55:17Z
0
0
transformers
[ "transformers", "gguf", "llama-cpp", "gguf-my-repo", "en", "base_model:Salesforce/xgen-small-4B-base-r", "base_model:quantized:Salesforce/xgen-small-4B-base-r", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-06-14T04:55:02Z
--- license: cc-by-nc-4.0 language: - en library_name: transformers base_model: Salesforce/xgen-small-4B-base-r tags: - llama-cpp - gguf-my-repo --- # deciaiby/xgen-small-4B-base-r-Q6_K-GGUF This model was converted to GGUF format from [`Salesforce/xgen-small-4B-base-r`](https://huggingface.co/Salesforce/xgen-small-4B-base-r) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/Salesforce/xgen-small-4B-base-r) for more details on the model. ## Use with llama.cpp Install llama.cpp through Homebrew (works on macOS and Linux): ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo deciaiby/xgen-small-4B-base-r-Q6_K-GGUF --hf-file xgen-small-4b-base-r-q6_k.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo deciaiby/xgen-small-4B-base-r-Q6_K-GGUF --hf-file xgen-small-4b-base-r-q6_k.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo deciaiby/xgen-small-4B-base-r-Q6_K-GGUF --hf-file xgen-small-4b-base-r-q6_k.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo deciaiby/xgen-small-4B-base-r-Q6_K-GGUF --hf-file xgen-small-4b-base-r-q6_k.gguf -c 2048 ```
aimeri/Dans-PersonalityEngine-V1.3.0-24b-mlx-4Bit
aimeri
2025-06-14T04:44:11Z
0
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "general-purpose", "roleplay", "storywriting", "chemistry", "biology", "code", "climate", "axolotl", "text-generation-inference", "finetune", "legal", "medical", "finance", "mlx", "mlx-my-repo", "conversational", "e...
text-generation
2025-06-14T04:43:33Z
--- thumbnail: https://huggingface.co/PocketDoc/Dans-PersonalityEngine-V1.3.0-24b/resolve/main/resources/pe.png license: apache-2.0 tags: - general-purpose - roleplay - storywriting - chemistry - biology - code - climate - axolotl - text-generation-inference - finetune - legal - medical - finance - mlx - mlx-my-repo datasets: - PocketDoc/Dans-Prosemaxx-RP - PocketDoc/Dans-Personamaxx-Logs-2 - PocketDoc/Dans-Personamaxx-VN - PocketDoc/Dans-Kinomaxx-VanillaBackrooms - PocketDoc/Dans-Prosemaxx-Gutenberg - PocketDoc/Dans-Prosemaxx-Cowriter-3-XL - PocketDoc/Dans-Prosemaxx-Adventure - PocketDoc/Dans-Failuremaxx-Adventure-3 - PocketDoc/Dans-Prosemaxx-InstructWriter-ZeroShot-2 - PocketDoc/Dans-Prosemaxx-InstructWriter-ZeroShot-3 - PocketDoc/Dans-Prosemaxx-InstructWriter-Continue-2 - PocketDoc/Dans-Prosemaxx-Instructwriter-Long - PocketDoc/Dans-Prosemaxx-RepRemover-1 - PocketDoc/Dans-MemoryCore-CoreCurriculum-Small - AquaV/US-Army-Survival-Sharegpt - AquaV/Multi-Environment-Operations-Sharegpt - AquaV/Resistance-Sharegpt - AquaV/Interrogation-Sharegpt - AquaV/Chemical-Biological-Safety-Applications-Sharegpt - AquaV/Energetic-Materials-Sharegpt - PocketDoc/Dans-Mathmaxx - PJMixers/Math-Multiturn-1K-ShareGPT - PocketDoc/Dans-Taskmaxx - PocketDoc/Dans-Taskmaxx-DataPrepper - PocketDoc/Dans-Taskmaxx-ConcurrentQA-Reworked - PocketDoc/Dans-Taskmaxx-TableGPT - PocketDoc/Dans-Taskmaxx-SciRIFF - PocketDoc/Dans-Taskmaxx-Edit - PocketDoc/Dans-Toolmaxx-Agent - PocketDoc/Dans-Toolmaxx-ShellCommands - PocketDoc/Dans-Toolmaxx-Functions-Toolbench - PocketDoc/Dans-Toolmaxx-Functions-ToolACE - PocketDoc/Dans-Toolmaxx-Functions-apigen-subset - PocketDoc/Dans-Assistantmaxx-OpenAssistant2 - PocketDoc/Dans-Assistantmaxx-Opus-Merge-2 - PocketDoc/Dans-Assistantmaxx-sonnetorca-subset - PocketDoc/Dans-Assistantmaxx-sonnetorca-subset-2 - PocketDoc/Dans-Assistantmaxx-Synthia - PocketDoc/Dans-Assistantmaxx-ASL - PocketDoc/Dans-Assistantmaxx-PersonaLLM-Opus - PocketDoc/Dans-Assistantmaxx-LongAlign - 
PocketDoc/Dans-Assistantmaxx-OpenLeecher-Instruct - PocketDoc/Dans-Assistantmaxx-Tulu3-IF - PocketDoc/Dans-Systemmaxx - PocketDoc/Dans-Logicmaxx-SAT-AP - PJMixers/grimulkan_theory-of-mind-ShareGPT - PJMixers/grimulkan_physical-reasoning-ShareGPT - PocketDoc/Dans-Reasoningmaxx-NaturalReasoning - PocketDoc/Dans-Reasoningmaxx-WebInstruct - PocketDoc/Dans-Reasoningmaxx-GeneralReasoning - PocketDoc/Dans-Assistantmaxx-ClosedInstruct language: - en - ar - de - fr - es - hi - pt - ja - ko base_model: PocketDoc/Dans-PersonalityEngine-V1.3.0-24b pipeline_tag: text-generation library_name: transformers --- # aimeri/Dans-PersonalityEngine-V1.3.0-24b-mlx-4Bit The Model [aimeri/Dans-PersonalityEngine-V1.3.0-24b-mlx-4Bit](https://huggingface.co/aimeri/Dans-PersonalityEngine-V1.3.0-24b-mlx-4Bit) was converted to MLX format from [PocketDoc/Dans-PersonalityEngine-V1.3.0-24b](https://huggingface.co/PocketDoc/Dans-PersonalityEngine-V1.3.0-24b) using mlx-lm version **0.22.3**. ## Use with mlx ```bash pip install mlx-lm ``` ```python from mlx_lm import load, generate model, tokenizer = load("aimeri/Dans-PersonalityEngine-V1.3.0-24b-mlx-4Bit") prompt="hello" if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None: messages = [{"role": "user", "content": prompt}] prompt = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) response = generate(model, tokenizer, prompt=prompt, verbose=True) ```
tinh2406/llama2_split_actions
tinh2406
2025-06-14T04:41:03Z
7
0
transformers
[ "transformers", "safetensors", "trl", "sft", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-05-20T08:15:28Z
--- library_name: transformers tags: - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model.
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ajyl/sft_seed_400_512d_8L_8H_datatype_full_pretrain
ajyl
2025-06-14T04:37:53Z
0
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-14T04:25:17Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model.
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
miike-ai/magickdev-24b-vision-FP16-GGUF
miike-ai
2025-06-14T04:37:09Z
0
0
null
[ "gguf", "base_model:miike-ai/magickdev-24b", "base_model:quantized:miike-ai/magickdev-24b", "endpoints_compatible", "region:us" ]
null
2025-06-14T02:15:08Z
--- base_model: - miike-ai/magickdev-24b ---
TOMFORD79/incen39
TOMFORD79
2025-06-14T04:36:16Z
0
0
null
[ "safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
2025-06-14T03:05:25Z
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
FormlessAI/185d736e-94cb-4373-8f6b-2ee30ca4514b
FormlessAI
2025-06-14T04:32:50Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "grpo", "arxiv:2402.03300", "base_model:DeepMount00/Llama-3-8b-Ita", "base_model:finetune:DeepMount00/Llama-3-8b-Ita", "endpoints_compatible", "region:us" ]
null
2025-06-14T02:23:38Z
--- base_model: DeepMount00/Llama-3-8b-Ita library_name: transformers model_name: 185d736e-94cb-4373-8f6b-2ee30ca4514b tags: - generated_from_trainer - trl - grpo licence: license --- # Model Card for 185d736e-94cb-4373-8f6b-2ee30ca4514b This model is a fine-tuned version of [DeepMount00/Llama-3-8b-Ita](https://huggingface.co/DeepMount00/Llama-3-8b-Ita). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="FormlessAI/185d736e-94cb-4373-8f6b-2ee30ca4514b", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/phoenix-formless/Gradients/runs/m6q12egv) This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.18.1 - Transformers: 4.52.4 - Pytorch: 2.7.0+cu128 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. 
Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
bevantam01bl/gg
bevantam01bl
2025-06-14T04:32:46Z
0
0
null
[ "license:bsd-3-clause", "region:us" ]
null
2025-06-14T04:32:45Z
--- license: bsd-3-clause ---
arielgalle8338172/gg
arielgalle8338172
2025-06-14T04:32:46Z
0
0
null
[ "license:bsd-3-clause", "region:us" ]
null
2025-06-14T04:32:45Z
--- license: bsd-3-clause ---
ajyl/joint_seed_400_512d_8L_8H_datatype_full_alpha_1.0
ajyl
2025-06-14T04:32:15Z
0
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-14T04:22:40Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
gradientrouting-spar/gcd_syco_cap_math_limit_proxy_data_to-25_seed_1
gradientrouting-spar
2025-06-14T04:11:08Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-14T04:11:00Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
TOMFORD79/incen31
TOMFORD79
2025-06-14T03:57:04Z
0
0
null
[ "safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
2025-06-14T03:04:40Z
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
dgambettaphd/M_llm2_run1_gen0_WXS_doc1000_synt64_lr1e-04_acm_MPP
dgambettaphd
2025-06-14T03:54:28Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "unsloth", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2025-06-14T03:52:40Z
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
tanbinh2210/onl_contrastive_bge-m3
tanbinh2210
2025-06-14T03:50:31Z
0
0
sentence-transformers
[ "sentence-transformers", "safetensors", "xlm-roberta", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:71937", "loss:OnlineContrastiveLoss", "arxiv:1908.10084", "base_model:BAAI/bge-m3", "base_model:finetune:BAAI/bge-m3", "autotrain_compatible", "text-emb...
sentence-similarity
2025-06-14T03:34:13Z
--- tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:71937 - loss:OnlineContrastiveLoss base_model: BAAI/bge-m3 widget: - source_sentence: Nแบฟu tรดi muแป‘n nghแป‰ viแป‡c tแบกi cรดng ty, tรดi cแบงn lร m ฤ‘ฦกn tแปซ nร o? sentences: - 'Khแบฏc phแปฅc sแปฑ cแป‘ khรดng cรณ quyแปn cร i ฤ‘แบทt lแป‹ch biแปƒu ฤแปƒ cร i ฤ‘แบทt ฤ‘ฦฐแปฃc lแป‹ch biแปƒu bแบกn phแบฃi cรณ quyแปn Quแบฃn lรฝ tแบฅt cแบฃ trong phรขn hแป‡ lแป‹ch biแปƒu. ฤแปƒ cรณ quyแปn nร y, bแบกn cรณ thแปƒ liรชn hแป‡ vแป›i bแป™ phแบญn cรณ quyแปn Quแบฃn trแป‹ hแป‡ thแป‘ng ฤ‘แปƒ yรชu cแบงu cแบฅp quyแปn cho bแบกn. Bแป™ phแบญn cรณ quyแปn quแบฃn trแป‹ hแป‡ thแป‘ng cรณ thแปƒ cแบฅp quyแปn quแบฃn lรฝ tแบฅt cแบฃ module lแป‹ch biแปƒu cho nhรขn sแปฑ bแบฑng cรกch sau: 1. Click vร o Avatar , chแปn Cร i ฤ‘แบทt hแป‡ thแป‘ng . Chแปn ฤ‘แป‘i tฦฐแปฃng Hแป‡ thแป‘ng 2. Tแบกi mร n danh sรกch ngฦฐแปi dรนng, click chuแป™t phแบฃi vร o tร i khoแบฃn nhรขn sแปฑ cแบงn cแบฅp quyแปn tแบกo mแป›i, chแปn tรกc vแปฅ Sแปญa 3. Tแบกi mร n hรฌnh sแปญa tร i khoแบฃn, tรญch vร o รด Tรนy chแป‰nh quyแปn . Tรฌm mแปฅc Lแป‹ch biแปƒu vร  phรขn quyแปn Quแบฃn lรฝ tแบฅt cแบฃ. (Bแบกn cรณ thแปƒ tรนy chแปn quyแปn quแบฃn lรฝ cรดng ty, quแบฃn lรฝ chi nhรกnh, quแบฃn lรฝ phรฒng ban tรนy theo vai trรฒ sแปญ dแปฅng) 4. Bแบฅm Cแบญp nhแบญt ฤ‘แปƒ lฦฐu thรดng tin cร i ฤ‘แบทt quyแปn quแบฃn lรฝ phรขn hแป‡ lแป‹ch biแปƒu' - 'PHร‚N Hแป† MARKETING Bแบกn muแป‘n truy cแบญp nhanh ฤ‘แบฟn Marketing?ย Truy cแบญp vร o phรขn hแป‡ Marketing Nแป™i dung Phรขn hแป‡ Marketing giแบฃi quyแบฟt bร i toรกn gรฌ? Nhแปฏng gรฌ bแบกn cรณ thแปƒ lร m Phรขn hแป‡ Marketing giแบฃi quyแบฟt bร i toรกn gรฌ? Phรขn hแป‡ Marketing trรชn phแบงn mแปm 1Office giรบp doanh nghiแป‡p quแบฃn lรฝ toร n diแป‡n cรกc hoแบกt ฤ‘แป™ng tiแบฟp thแป‹ โ€“ tแปซ quแบฃn lรฝ chiแบฟn dแป‹ch cho ฤ‘แบฟn quแบฃn lรฝ cรกc cuแป™c gแปi tแปซ tแป•ng ฤ‘ร i. 
Phรขn hแป‡ nร y sแบฝ giรบp doanh nghiแป‡p vร  bแป™ phแบญn Marketing giแบฃi quyแบฟt cรกc bร i toรกn nhฦฐ sau: Quแบฃn lรฝ vร  theo dรตi chiแบฟn dแป‹ch Email vร  SMS cแปงa doanh nghiแป‡p Thแป‘ng kรช sแป‘ lฦฐแปฃt tiแบฟp cแบญn, sแป‘ liรชn hแป‡ kรฌ vแปng vร  chi phรญ thแปฑc tแบฟ cแปงa mแป—i chiแบฟn dแป‹ch cแปฅ thแปƒ Kแบฟt nแป‘i vแป›i nhiแปu nhร  cung cแบฅp tแป•ng ฤ‘ร i: CMC, Omi, Stringee... Quแบฃn lรฝ danh sรกch cรกc cuแป™c gแปi tแปซ tแป•ng ฤ‘ร i ฤ‘แบฟn cรกc liรชn hแป‡ Theo dรตi trแบกng thรกi cuแป™c gแปi, ฤ‘iแป‡n thoแบกi viรชn vร  ghi รขm cuแป™c gแปi' - "Khแบฏc phแปฅc sแปฑ cแป‘ khรดng tแบกo ฤ‘ฦฐแปฃc ฤ‘ฦกn xin nghแป‰ phรฉp\nKiแปƒm tra lแบกi thรดng tin tแป›i\ \ bแป™ phแบญn Hร nh chรญnh nhรขn sแปฑ ฤ‘แปƒ nแบฏm ฤ‘ฦฐแปฃc quy ฤ‘แป‹nh vแป thแปi gian tแป‘i thiแปƒu cแบงn tแบกo\ \ ฤ‘ฦกn trฦฐแป›c khi xin nghแป‰ hoแบทc thแปi gian tแป‘i ฤ‘a ฤ‘ฦฐแปฃc phรฉp bแป• sung ฤ‘ฦกn sau khi nghแป‰..\ \ \nNแบฟu cแบงn ฤ‘iแปu chแป‰nh thแปi gian cho phรฉp tแบกo ฤ‘ฦกn, Bแป™ nhแบญn Nhรขn sแปฑ cแบงn truy cแบญp\ \ Cร i ฤ‘แบทt ฤ‘ฦกn tแปซ ฤ‘แปƒ tiแบฟn hร nh ฤ‘iแปu chแป‰nh. \nBแป™ phแบญn Hร nh chรญnh nhรขn sแปฑ cรณ thแปƒ\ \ can thiแป‡p tรนy chแป‰nh phรฉp thแปง cรดng ฤ‘แปƒ cแบญp nhแบญt lแบกi quแปน phรฉp cho nhรขn sแปฑ แปŸ phรขn\ \ hแป‡ Chแบฅm cรดng" - source_sentence: Ai lร  ngฦฐแปi cรณ quyแปn xem, sแปญa vร  xoรก loแบกi bแบฃng lฦฐฦกng sau khi tรดi tแบกo xong? sentences: - 'Quแบฃn lรฝ nhร  cung cแบฅp vร  cรดng nแปฃ Tแบกo mแป›i cรดng nแปฃ ฤแปƒ tแบกo mแป›i cรดng nแปฃ, ngฦฐแปi dรนng thแปฑc hiแป‡n cรกc bฦฐแป›c sau: 1. Truy cแบญp phรขn hแป‡ Mua hร ng 2. แปž gรณc trรชn cรนng bรชn trรกi, chแปn Tแบกo mแป›i Chแปn Cรดng nแปฃ 3. ฤiแปn thรดng tin ฤ‘แบงy ฤ‘แปง vร  Cแบญp nhแบญt. 
Tแบกo mแป›i cรดng nแปฃ Danh sรกch cรกc trฦฐแปng dแปฏ liแป‡u trong tแบกo mแป›i cรดng nแปฃ Trฦฐแปng dแปฏ liแป‡u: Kiแปƒu cรดng nแปฃ, Mรด tแบฃ: Chแปn kiแปƒu cรดng nแปฃ: Dฦฐ nแปฃ: Lร  cรกc khoแบฃn ฤ‘รฃ trแบฃ cho nhร  cung cแบฅp (Tแปซ cรกc ฤ‘ฦกn hร ng mua hoแบทc phรกt sinh nแปฃ khi tแบกo cรดng nแปฃ) Dฦฐ cรณ: Lร  cรกc khoแบฃn phแบฃi trแบฃ cho nhร  cung cแบฅp (Tแปซ cรกc phiแบฟu chi hoแบทc phรกt sinh cรณ khi tแบกo cรดng nแปฃ) Trฦฐแปng dแปฏ liแป‡u: Cรกc trฦฐแปng dแปฏ liแป‡u khรกc, Mรด tแบฃ: Ngร y, Sแป‘ tiแปn, Lรฝ do, chแปn nhร  cung cแบฅp Tรฌm hiแปƒu thรชm tแบกi Trung tรขm hแป— trแปฃ 1Office' - 'Tรดi muแป‘n bแบฃng lฦฐฦกng sau khi hoร n thร nh sแบฝ ฤ‘ฦฐแปฃc gแปญi ฤ‘แบฟn Ban giรกm ฤ‘แป‘c phรช duyแป‡t, sau ฤ‘รณ gแปญi ฤ‘แบฟn Kแบฟ toรกn ฤ‘แปƒ tiแบฟn thร nh chi tiแปn thรฌ phแบฃi lร m nhฦฐ thแบฟ nร o? Truy cแบญp Cร i ฤ‘แบทt hแป‡ thแป‘ng Bแบฃng lฦฐฦกng Bแบญt cร i ฤ‘แบทt รp dแปฅng quy trรฌnh duyแป‡t vร o bแบฃng lฦฐฦกng . Sau ฤ‘รณ cแบฅu hรฌnh quy trรฌnh duyแป‡t bแบฃng lฦฐฦกng.' - 'Thแปง tแปฅc 630c - Giแบฃi quyแบฟt hฦฐแปŸng trแปฃ cแบฅp dฦฐแปกng sแปฉc, phแปฅc hแป“i sแปฉc khแปe sau แป‘m ฤ‘au, thai sแบฃn, tai nแบกn lao ฤ‘แป™ng, bแป‡nh nghแป nghiแป‡p Hฦฐแป›ng dแบซn kรช khai Hฦฐแป›ng dแบซn Kรช khai nhรฃn Dฦฐแปกng sแปฉc cho hแป“ sฦก bแบฃo hiแปƒm trรชn 1-IVAN: Bฦฐแป›c 1: Tแบกi hแป“ sฦก bแบฃo hiแปƒm muแป‘n kรช khai nhรฃn > Tรกc vแปฅ chuแป™t phแบฃi > แปm ฤ‘au, Thai sแบฃn, Dฦฐแปกng sแปฉc Bฦฐแป›c 2: Bแป• sung thรดng tin tแบกi Form kรช khai Dฦฐแปกng sแปฉcย > Cแบญp nhแบญt Chแปn loแบกi chแบฟ ฤ‘แป™: Dฦฐแปกng sแปฉc Chแปn nhรณm hฦฐแปŸng tฦฐฦกng แปฉng: + Dฦฐแปกng sแปฉc sau แป‘m + Nghแป‰ thai sแบฃn + Nghแป‰ sau tai nแบกn lao ฤ‘แป™ng Chแปn thแปง tแปฅc: 630c - Giแบฃi quyแบฟt hฦฐแปŸng trแปฃ cแบฅp dฦฐแปกng sแปฉc, phแปฅc hแป“i sแปฉc khแปe sau แป‘m ฤ‘au, thai sแบฃn, tai nแบกn lao ฤ‘แป™ng, bแป‡nh nghแป nghiแป‡p' - source_sentence: Sแปฑ khรกc biแป‡t chรญnh giแปฏa tร i liแป‡u cรดng ty vร  tร i liแป‡u cรก nhรขn lร  gรฌ? sentences: - "Quแบฃn lรฝ Email hแป‡ thแป‘ng\n2. 
Quแบฃn lรฝ email trรชn hแป‡ thแป‘ng\nKhi bแบกn ฤ‘ฦฐแปฃc phรขn quyแปn\ \ sแปญ dแปฅng nhiแปu tร i khoแบฃn gแปญi ฤ‘i, hรฃy chแปn 1 tร i khoแบฃn ฤ‘แปƒ gแปญi. Trong trฦฐแปng hแปฃp\ \ chแป‰ ฤ‘ฦฐแปฃc phรขn quyแปn 1 tร i khoแบฃn, hแป‡ thแป‘ng sแบฝ giรบp bแบกn hiแปƒn thแป‹ luรดn tร i khoแบฃn\ \ ฤ‘รณ.. \nKiแปƒm tra lแบกi lแป‹ch sแปญ gแปญi email:. \nChแปn Cร i ฤ‘แบทt hแป‡ thแป‘ng > Email, SMS\ \ > Email > Lแป‹ch sแปญ gแปญi Email" - 'PHร‚N Hแป† Vฤ‚N BแบขN Cรกc loแบกi vฤƒn bแบฃn Mแป—i loแบกi vฤƒn bแบฃn sแบฝ cรณ mแปฅc ฤ‘รญch sแปญ dแปฅng khรกc nhau, Quแบฃn lรฝ vฤƒn bแบฃn theo mแปฅc ฤ‘รญch sแบฝ giรบp viแป‡c quแบฃn lรฝ, theo dรตi trแปŸ nรชn dแป… dร ng vร  chรญnh xรกc hฦกn. Loแบกi vฤƒn bแบฃn: Vฤƒn bแบฃn ฤ‘แบฟn, Nแป™i dung: Vฤƒn bแบฃn ฤ‘แบฟn giรบp doanh nghiแป‡p quแบฃn lรฝ cรกc vฤƒn bแบฃn, cรดng vฤƒn ฤ‘ฦฐแปฃc gแปญi ฤ‘แบฟn doanh nghiแป‡p Loแบกi vฤƒn bแบฃn: Vฤƒn bแบฃn ฤ‘i, Nแป™i dung: Vฤƒn bแบฃn ฤ‘i giรบp doanh nghiแป‡p quแบฃn lรฝ cรกc vฤƒn bแบฃn, cรดng vฤƒn ฤ‘ฦฐแปฃc soแบกn thแบฃo vร  ban hร nh tแปซ doanh nghiแป‡p ra cรกc cฦก quan bรชn ngoร i Loแบกi vฤƒn bแบฃn: Vฤƒn bแบฃn nแป™i bแป™, Nแป™i dung: Vฤƒn bแบฃn nแป™i bแป™ giรบp doanh nghiแป‡p quแบฃn lรฝ, theo dรตi cรกc vฤƒn bแบฃn ฤ‘ฦฐแปฃc ban hร nh trong nแป™i bแป™ doanh nghiแป‡p Tรฌm hiแปƒu thรชm tแบกi trung tรขm hแป— trแปฃ 1Office' - 'Khแบฏc phแปฅc sแปฑ cแป‘ ฤ‘ฦกn tแปซ tแปฑ ฤ‘แป™ng chuyแปƒn khรดng duyแป‡t Cรณ hai nguyรชn nhรขn chรญnh dแบซn ฤ‘แบฟn viแป‡c ฤ‘ฦกn tแปซ tแปฑ ฤ‘แป™ng bแป‹ chuyแปƒn sang trแบกng thรกi khรดng duyแป‡t: 1. Do cร i ฤ‘แบทt tแปฑ ฤ‘แป™ng chuyแปƒn trแบกng thรกi ฤฦกn tแปซ Trong cร i ฤ‘แบทt ฤ‘ฦกn tแปซ cรณ 1 cร i ฤ‘แบทt Thแปi gian ฤ‘แปƒ hแป‡ thแป‘ng sแบฝ tแปฑ ฤ‘แป™ng chuyแปƒn cรกc ฤ‘ฦกn tแปซ ฤ‘ang แปŸ trแบกng thรกi chแป duyแป‡t sang khรดng duyแป‡t sau khi chแป‘t ฤ‘ฦกn tแปซ. Cร i ฤ‘แบทt nร y cho phรฉp nแบฟu sau khi Chแป‘t ฤ‘ฦกn trรชn bแบฃng chแบฅm cรดng, sau khoแบฃng thแปi gian cร i ฤ‘แบทt, ฤ‘ฦกn tแปซ Chฦฐa duyแป‡t sแบฝ bแป‹ chuyแปƒn trแบกng thรกi. 
Trฦฐแปng hแปฃp nร y bแบกn cแบงn kiแปƒm tra lแบกi trแบกng thรกi Chแป‘t ฤ‘ฦกn trรชn bแบฃng chแบฅm cรดng vร  cร i ฤ‘แบทt ฤ‘ฦกn tแปซ. Truy cแบญp Bแบฃng chแบฅm cรดng ฤ‘แปƒ kiแปƒm tra trแบกng thรกi Chแป‘t ฤ‘ฦกn Liรชn hแป‡ bแป™ phแบญn nhรขn sแปฑ ฤ‘แปƒ kiแปƒm tra Cร i ฤ‘แบทt ฤ‘ฦกn tแปซ 2. Do cร i ฤ‘แบทt cแปงa quy trรฌnh duyแป‡t Trong cแบฅu hรฌnh quy trรฌnh duyแป‡t cรณ node Khรดng duyแป‡t , nแบฟu cร i ฤ‘แบทt luแป“ng duyแป‡t cรณ chแบกy ฤ‘iแปu kiแป‡n khรดng duyแป‡t, ฤฦกn cลฉng sแบฝ tแปฑ ฤ‘แป™ng chuyแปƒn trแบกng thรกi Khรดng duyแป‡t . Trฦฐแปng hแปฃp nร y bแบกn cรณ thแปƒ truy cแบญp Cร i ฤ‘แบทt Duyแป‡t ฤ‘แป‘i tฦฐแปฃng ฤ‘แปƒ kiแปƒm tra, ฤ‘iแปu chแป‰nh.' - source_sentence: Sau khi thแปฑc hiแป‡n tรกc vแปฅ 'Xuแบฅt Biแปƒu mแบซu', thรดng tin trรชn cรดng viแป‡c quy trรฌnh sแบฝ ฤ‘ฦฐแปฃc xuแบฅt ra loแบกi file nร o? sentences: - 'Chiแบฟn dแป‹ch tuyแปƒn dแปฅng Tแบกo mแป›i tin tuyแปƒn dแปฅng 1. MแปŸ phรขn hแป‡ Tuyแปƒn dแปฅng 2. Tแบกi danh mแปฅc trรกi, Chแปn Chiแบฟn dแป‹ch Chแปn Chiแบฟn dแป‹ch tuyแปƒn dแปฅng muแป‘n ฤ‘ฤƒng tin 3. Chแปn biแปƒu tฦฐแปฃng Tแบกo tin Bแบฃng mรด tแบฃ cรกc trฦฐแปng dแปฏ liแป‡u trong tแบกo mแป›i tin tuyแปƒn dแปฅng: Trฦฐแปng dแปฏ liแป‡u: Thรดng tin tแปซ chiแบฟn dแป‹ch tuyแปƒn dแปฅng, Mรด tแบฃ: Thรดng tin nhฦฐ tรชn chiแบฟn dแป‹ch, vแป‹ trรญ tuyแปƒn dแปฅng sแบฝ ฤ‘ฦฐแปฃc lแบฅy trแปฑc tiแบฟp tแปซ chiแบฟn dแป‹ch tuyแปƒn dแปฅng Trฦฐแปng dแปฏ liแป‡u: Kรชnh tuyแปƒn dแปฅng, Mรด tแบฃ: Chแปn kรชnh tuyแปƒn dแปฅng muแป‘n ฤ‘ฤƒng tin lรชn Trฦฐแปng dแปฏ liแป‡u: Thรดng tin tuyแปƒn dแปฅng, Mรด tแบฃ: Nhแบญp thรดng tin cho vแป‹ trรญ แปฉng tuyแปƒn Lฦฐu รฝ : Vแป›i mแป—i kรชnh tuyแปƒn dแปฅng, ngฦฐแปi dรนng chแป‰ ฤ‘ฦฐแปฃc phรฉp ฤ‘ฤƒng 1 tin. Nแบฟu trรชn kรชnh tuyแปƒn dแปฅng ฤ‘รณ ฤ‘รฃ cรณ tin tแปซ trฦฐแป›c, ngฦฐแปi dรนng cรณ thแปƒ sแปญa tin hoแบทc tแบกo chiแบฟn dแป‹ch khรกc.' - 'Tรดi muแป‘n liรชn kแบฟt 1Office vแป›i cแป•ng tuyแปƒn dแปฅng thรฌ phแบฃi lร m sao? 
Hiแป‡n tแบกi, 1Office ฤ‘ang cung cแบฅp cรกc phฦฐฦกng thแปฉc hแป— trแปฃ Khรกch hร ng lแบฅy CV tแปซ cรกc cแป•ng: Email & Webform. Anh/chแป‹ cรณ thแปƒ xem chi tiแบฟt tแบกi Hฦฐแป›ng dแบซn sแปญ dแปฅng phรขn hแป‡ Tuyแปƒn dแปฅng.' - "Quแบฃn lรฝ thu chi\nQuแบฃn lรฝย phiแบฟu chi\nTรกc vแปฅ: Duyแป‡t/ Hoร n duyแป‡t, Mรด tแบฃ: Ngฦฐแปi ฤ‘ฦฐแปฃc\ \ cร i ฤ‘แบทt lร  ngฦฐแปi duyแป‡t phiแบฟu chi sแบฝ cรณ quyแปn thแปฑc hiแป‡n tรกc vแปฅ Duyแป‡t/Hoร n duyแป‡t.\ \ Sau khi Phiแบฟu chi ฤ‘ฦฐแปฃc duyแป‡t, sแป‘ tiแปn trong phiแบฟu chi sแบฝ trแปซ vร o tร i khoแบฃn ฤ‘รฃ\ \ ฤ‘ฦฐแปฃc chแปn trong phiแบฟu chi ฤ‘รณ, tฦฐฦกng แปฉng sแบฝ cแบญp nhแบญt tแปฑ ฤ‘แป™ng vร o ฤ‘รฃ thu cแปงa ฤฦกn\ \ hร ng/Hแปฃp ฤ‘แป“ng/Cรดng nแปฃ khรกch hร ng. Sau khi duyแป‡t, ngฦฐแปi dรนng cรณ thแปƒ Hoร n duyแป‡t,\ \ sau khi hoร n duyแป‡t, sแป‘ tiแปn cลฉng sแบฝ ฤ‘ฦฐแปฃc hoร n lแบกi. \nTรกc vแปฅ: Tแบกo mแป›i, Mรด tแบฃ:\ \ Tแบกo mแป›i cรกc ฤ‘แป‘i tฦฐแปฃng liรชn quan ฤ‘แบฟn Phiแบฟu chi nhฦฐ: Dแปฑ รกn, cรดng viแป‡c, quy trรฌnh,\ \ ฤ‘ฦกn hร ng bรกn, hแปฃp ฤ‘แป“ng bรกn, ฤ‘ฦกn hร ng mua, hแปฃp ฤ‘แป“ng mua, vแบญt tฦฐ, phiแบฟu nhแบญp kho,\ \ phiแบฟu xuแบฅt kho, phiแบฟu chi, ฤ‘แป‘i tรกc, ฤ‘ฦกn cรดng tรกc.... 
\nTรกc vแปฅ: Kรฝ sแป‘, Mรด tแบฃ:\ \ Tแบกo hแป“ sฦก kรฝ sแป‘ liรชn quan ฤ‘แบฟn phiแบฟu chi\nTรกc vแปฅ: Biแปƒu mแบซu, Mรด tแบฃ: Khi phiแบฟu\ \ chi cแบงn in ra vร  lฦฐu trแปฏ, bแบกn cรณ thแปƒ chแปn in ra/ xuแบฅt biแปƒu mแบซu tฦฐฦกng แปฉng (ฤiแปu\ \ kiแป‡n ฤ‘รฃ cร i biแปƒu mแบซu phiแบฟu chi)\nTรกc vแปฅ: Thรชm tร i liแป‡u, Mรด tแบฃ: ฤรญnh kรจm tร i\ \ liแป‡u/minh chแปฉng liรชn quan\nTรกc vแปฅ: Sแปญa, Mรด tแบฃ: Tรกc vแปฅ Sแปญa chแป‰ ฤ‘ฦฐแปฃc thแปฑc hiแป‡n\ \ khi Phiแบฟu chi chฦฐa ฤ‘ฦฐแปฃc duyแป‡t แปŸ bแบฅt kแปณ bฦฐแป›c nร o\nTรกc vแปฅ: Xรณa, Mรด tแบฃ: Chแป‰ ฤ‘ฦฐแปฃc\ \ phรฉp Xรณa khi phiแบฟu chi แปŸ trแบกng thรกi Chแป duyแป‡t, Khรดng duyแป‡t.\nTรกc vแปฅ: Nhรขn bแบฃn,\ \ Mรด tแบฃ: Tรกc vแปฅ nร y cho phรฉp bแบกn tแบกo nhanh 1 phiแบฟu chi mแป›i vแป›i cรกc thรดng tin tฦฐฦกng\ \ tแปฑ phiแบฟu chi ฤ‘รฃ tแบกo ฤ‘แปƒ tiแบฟt kiแป‡m thแปi gian nhแบญp liแป‡u." - source_sentence: ฤฦกn xin nghแป‰ phรฉp cรณ nhแปฏng trแบกng thรกi nร o vร  รฝ nghฤฉa cแปงa tแปซng trแบกng thรกi ฤ‘รณ lร  gรฌ? sentences: - 'Giao viแป‡c Tแบกo mแป›i cรดng viแป‡c thฦฐแปng Giao viแป‡c lร  tรกc vแปฅ nhแบฑm xรกc ฤ‘แป‹nh mแปฅc tiรชu, yรชu cแบงu cแปงa cรดng viแป‡c. Cรณ kรชnh giao tiแบฟp rรต rร ng vแป cรดng viแป‡c nhแบฑm hแป— trแปฃ vร  theo dรตi tiแบฟn ฤ‘แป™ cแปงa nhรขn viรชn trong quรก trรฌnh lร m viแป‡c, giแบฃi ฤ‘รกp thแบฏc mแบฏc, giแบฃi quyแบฟt vแบฅn ฤ‘แป vร  ฤ‘ฦฐa ra gรณp รฝ khi cแบงn thiแบฟt. Tแปซ ฤ‘รณ ฤ‘รกnh giรก kแบฟt quแบฃ cรดng viแป‡c, khen thฦฐแปŸng hoแบทc phรช bรฌnh nhรขn viรชn tรนy theo hiแป‡u suแบฅt lร m viแป‡c cแปงa hแป. ฤแปƒ tแบกo mแป›i cรดng viแป‡c thฦฐแปng, ngฦฐแปi dรนng thแปฑc hiแป‡n cรกc bฦฐแป›c sau: 1. Truy cแบญp phรขn hแป‡ Cรดng viแป‡c 2. แปž gรณc trรชn cรนng bรชn trรกi, chแปn Tแบกo mแป›i Chแปn Cรดng viแป‡c 3. 
ฤiแปn thรดng tin vร o form tแบกo mแป›i vร  Cแบญp nhแบญt Tแบกo mแป›i cรดng viแป‡c thฦฐแปng Bแบฃng mรด tแบฃ cรกc trฦฐแปng dแปฏ liแป‡u trong tแบกo mแป›i cรดng viแป‡c: Trฦฐแปng dแปฏ liแป‡u: Tรชn cรดng viแป‡c, Mรด tแบฃ: Mแปฅc tiรชu, yรชu cแบงu cแปงa cรดng viแป‡c Trฦฐแปng dแปฏ liแป‡u: Bแบฏt ฤ‘แบงu vร  Kแบฟt thรบc Giao viแป‡c theo giแป, Mรด tแบฃ: Thแปi gian dแปฑ kiแบฟn hoร n thร nh cรดng viแป‡c Trฦฐแปng dแปฏ liแป‡u: Ngฦฐแปi thแปฑc hiแป‡n, Mรด tแบฃ: Ngฦฐแปi cรณ quyแปn bรกo cรกo tiแบฟn ฤ‘แป™ vร  cแบญp nhแบญt thแปi gian thแปฑc tแบฟ cแปงa cรดng viแป‡c Trฦฐแปng dแปฏ liแป‡u: Ngฦฐแปi giao viแป‡c, Mรด tแบฃ: Lร  ngฦฐแปi cรณ quyแปn quแบฃn lรฝ cรดng viแป‡c, chuyแปƒn trแบกng thรกi cรดng viแป‡c vร  cแบญp nhแบญt thแปi gian cรดng viแป‡c Trฦฐแปng dแปฏ liแป‡u: Ngฦฐแปi theo dรตi, Mรด tแบฃ: Lร  ngฦฐแปi ฤ‘ฦฐแปฃc xem cรดng viแป‡c vร  nhแบญn thรดng bรกo liรชn quan ฤ‘แบฟn cรดng viแป‡c' - 'Cร i ฤ‘แบทt ฤ‘ฦกn tแปซ Cร i ฤ‘แบทt danh mแปฅc Cร i ฤ‘แบทt danh mแปฅc lร  nฦกi ngฦฐแปi dรนng cร i ฤ‘แบทt cรกc lรฝ do cแปงa ฤ‘ฦกn tแปซ hร nh chรญnh, phฦฐฦกng tiแป‡n cรดng tรกc, cรดng tรกc phรญ 1. MแปŸ phรขn hแป‡ ฤฦกn tแปซ tแบกi HRM 2. แปž trรชn cรนng bรชn phแบฃi, chแปn Cร i ฤ‘แบทt 3. Tแบกi menu trรกi, chแปn Danh mแปฅc 4. Chแปn Tแบกo mแป›i ฤ‘แปƒ tแบกo mแป›i 5. 
Nhแบฅp vร o Cแบญp nhแบญt ฤ‘แปƒ lฦฐu thay ฤ‘แป•i cร i ฤ‘แบทt danh mแปฅc Trฦฐแปng dแปฏ liแป‡u: Tแป‘i ฤ‘a, Mรด tแบฃ: Hแบกn chแบฟ nhรขn sแปฑ sแปญ dแปฅng ฤ‘ฦกn quรก nhiแปu lแบงn trong tuแบงn, thรกng hoแบทc nฤƒm Trฦฐแปng dแปฏ liแป‡u: Kรฝ hiแป‡u, Mรด tแบฃ: Kรฝ hiแป‡u hiแปƒn thแป‹ trรชn bแบฃng chแบฅm cรดng Chแป‰ hiแปƒn thแป‹ kรฝ hiแป‡u vแป›i ฤฦกn xin nghแป‰ Trฦฐแปng dแปฏ liแป‡u: Tรญnh cรดng, Mรด tแบฃ: Nแบฟu lรฝ do bแบกn chแปn Cรณ tรญnh cรดng thรฌ khoแบฃng thแปi gian xin nghแป‰/vแบฏng mแบทt cแปงa bแบกn sแบฝ ฤ‘ฦฐแปฃc tรญnh vร o cรดng lร m viแป‡c Trฦฐแปng dแปฏ liแป‡u: Trแบกng thรกi, Mรด tแบฃ: Hoแบกt ฤ‘แป™ng: ngฦฐแปi dรนng cรณ thแปƒ chแปn lรฝ do nร y khi tแบกo ฤ‘ฦกn Khรดng hoแบกt ฤ‘แป™ng: ngฦฐแปi dรนng sแบฝ khรดng nhรฌn thแบฅy lรฝ do nร y trong danh sรกch Trฦฐแปng dแปฏ liแป‡u: Yรชu cแบงu chแป‘t, Mรด tแบฃ: Nแบฟu lรฝ do vแบฏng mแบทt cแปงa bแบกn Cรณ yรชu cแบงu chแป‘t vรขn tay thรฌ bแบกn cแบงn phแบฃi chแป‘t vรขn tay trong khoแบฃng thแปi gian bแบฏt ฤ‘แบงu vแบฏng mแบฏt vร  kแบฟt thรบc vแบฏng mแบทt Lฦฐu รฝ: Khรดng thแปƒ xรณa nhแปฏng lรฝ do cแปงa ฤ‘ฦกn xin nghแป‰ ฤ‘ฦฐแปฃc hแป‡ thแป‘ng tแบกo ra, chแป‰ cรณ thแปƒ chuyแปƒn lรฝ do sang trแบกng thรกi khรดng hoแบกt ฤ‘แป™ng.' - 'Thฦฐ viแป‡n tร i nguyรชn Thฦฐ viแป‡n tร i nguyรชn lร  mแป™t tรญnh nฤƒng mแป›i cแปงa 1Office. ฤรขy lร  kho tร i nguyรชn chแปฉa cรกc dแปฏ liแป‡u mแบซu mร  1Office cung cแบฅp cho Khรกch hร ng. Nรณ hoแบกt ฤ‘แป™ng giแป‘ng nhฦฐ Google Play hay Chrome Extension. Tแบกi ฤ‘รขy sแบฝ cรณ cรกc Template mแบซu, ngฦฐแปi dรนng cรณ thแปƒ vร o xem thแปญ แบฃnh demo hoแบทc mรด tแบฃ, nแบฟu phรน hแปฃp vแป›i ฤ‘แบทc thรน cรดng viแป‡c, chแปn Cร i ฤ‘แบทt ฤ‘แปƒ tแบฃi vแป. Khi ฤ‘รณ, tร i nguyรชn vแปซa tแบฃi vแป sแบฝ ฤ‘ฦฐแปฃc cแบญp nhแบญt vร o ฤ‘แป‘i tฦฐแปฃng tฦฐฦกng แปฉng trรชn phแบงn mแปm. Sแปญ dแปฅng thฦฐ viแป‡n tร i nguyรชn nhฦฐ thแบฟ nร o? ฤแปƒ cรณ ฤ‘ฦฐแปฃc nhแปฏng tร i nguyรชn mร  1Office, ngฦฐแปi dรนng thแปฑc hiแป‡n theo cรกc bฦฐแป›c: Bฦฐแป›c 1. Tแบฃi tร i liแป‡u vแป phแบงn mแปm Bฦฐแป›c 2. 
Cร i ฤ‘แบทt thรดng sแป‘ cho dแปฏ liแป‡u tแบฃi vแป Bฦฐแป›c 3. Sแปญ dแปฅng phแบงn mแปm vแป›i cรกc tร i liแป‡u ฤ‘รฃ ฤ‘ฦฐแปฃc cร i ฤ‘แบทt trรชn phแบงn mแปm' pipeline_tag: sentence-similarity library_name: sentence-transformers --- # SentenceTransformer based on BAAI/bge-m3 This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3). It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3) <!-- at revision 5617a9f61b028005a4858fdac845db406aefb181 --> - **Maximum Sequence Length:** 8192 tokens - **Output Dimensionality:** 1024 tokens - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: XLMRobertaModel (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you 
can load this model and run inference.

```python
from sentence_transformers import SentenceTransformer

# Download from the ๐Ÿค— Hub
model = SentenceTransformer("tanbinh2210/onl_contrastive_bge-m3")
# Run inference
sentences = [
    'ฤฦกn xin nghแป‰ phรฉp cรณ nhแปฏng trแบกng thรกi nร o vร  รฝ nghฤฉa cแปงa tแปซng trแบกng thรกi ฤ‘รณ lร  gรฌ?',
    'Cร i ฤ‘แบทt ฤ‘ฦกn tแปซ\nCร i ฤ‘แบทt danh mแปฅc\nCร i ฤ‘แบทt danh mแปฅc lร  nฦกi ngฦฐแปi dรนng cร i ฤ‘แบทt cรกc lรฝ do cแปงa ฤ‘ฦกn tแปซ hร nh chรญnh, phฦฐฦกng tiแป‡n cรดng tรกc, cรดng tรกc phรญ\n1. MแปŸ phรขn hแป‡ ฤฦกn tแปซ tแบกi HRM\n2. แปž trรชn cรนng bรชn phแบฃi, chแปn Cร i ฤ‘แบทt\n3. Tแบกi menu trรกi, chแปn Danh mแปฅc\n4. Chแปn Tแบกo mแป›i ฤ‘แปƒ tแบกo mแป›i\n5. Nhแบฅp vร o Cแบญp nhแบญt ฤ‘แปƒ lฦฐu thay ฤ‘แป•i\ncร i ฤ‘แบทt danh mแปฅc\nTrฦฐแปng dแปฏ liแป‡u: Tแป‘i ฤ‘a, Mรด tแบฃ: Hแบกn chแบฟ nhรขn sแปฑ sแปญ dแปฅng ฤ‘ฦกn quรก nhiแปu lแบงn trong tuแบงn, thรกng hoแบทc nฤƒm\nTrฦฐแปng dแปฏ liแป‡u: Kรฝ hiแป‡u, Mรด tแบฃ: Kรฝ hiแป‡u hiแปƒn thแป‹ trรชn bแบฃng chแบฅm cรดng Chแป‰ hiแปƒn thแป‹ kรฝ hiแป‡u vแป›i ฤฦกn xin nghแป‰\nTrฦฐแปng dแปฏ liแป‡u: Tรญnh cรดng, Mรด tแบฃ: Nแบฟu lรฝ do bแบกn chแปn Cรณ tรญnh cรดng thรฌ khoแบฃng thแปi gian xin nghแป‰/vแบฏng mแบทt cแปงa bแบกn sแบฝ ฤ‘ฦฐแปฃc tรญnh vร o cรดng lร m viแป‡c\nTrฦฐแปng dแปฏ liแป‡u: Trแบกng thรกi, Mรด tแบฃ: Hoแบกt ฤ‘แป™ng: ngฦฐแปi dรนng cรณ thแปƒ chแปn lรฝ do nร y khi tแบกo ฤ‘ฦกn Khรดng hoแบกt ฤ‘แป™ng: ngฦฐแปi dรนng sแบฝ khรดng nhรฌn thแบฅy lรฝ do nร y trong danh sรกch\nTrฦฐแปng dแปฏ liแป‡u: Yรชu cแบงu chแป‘t, Mรด tแบฃ: Nแบฟu lรฝ do vแบฏng mแบทt cแปงa bแบกn Cรณ yรชu cแบงu chแป‘t vรขn tay thรฌ bแบกn cแบงn phแบฃi chแป‘t vรขn tay trong khoแบฃng thแปi gian bแบฏt ฤ‘แบงu vแบฏng mแบฏt vร  kแบฟt thรบc vแบฏng mแบทt\nLฦฐu รฝ: Khรดng thแปƒ xรณa nhแปฏng lรฝ do cแปงa ฤ‘ฦกn xin nghแป‰ ฤ‘ฦฐแปฃc hแป‡ thแป‘ng tแบกo ra, chแป‰ cรณ thแปƒ chuyแปƒn lรฝ do sang trแบกng thรกi khรดng hoแบกt ฤ‘แป™ng.',
    'Thฦฐ viแป‡n tร i nguyรชn\nThฦฐ viแป‡n tร i nguyรชn lร  mแป™t tรญnh nฤƒng mแป›i cแปงa 1Office. ฤรขy lร  kho tร i nguyรชn chแปฉa cรกc dแปฏ liแป‡u mแบซu mร  1Office cung cแบฅp cho Khรกch hร ng. Nรณ hoแบกt ฤ‘แป™ng giแป‘ng nhฦฐ Google Play hay Chrome Extension. Tแบกi ฤ‘รขy sแบฝ cรณ cรกc Template mแบซu, ngฦฐแปi dรนng cรณ thแปƒ vร o xem thแปญ แบฃnh demo hoแบทc mรด tแบฃ, nแบฟu phรน hแปฃp vแป›i ฤ‘แบทc thรน cรดng viแป‡c, chแปn Cร i ฤ‘แบทt ฤ‘แปƒ tแบฃi vแป. Khi ฤ‘รณ, tร i nguyรชn vแปซa tแบฃi vแป sแบฝ ฤ‘ฦฐแปฃc cแบญp nhแบญt vร o ฤ‘แป‘i tฦฐแปฃng tฦฐฦกng แปฉng trรชn phแบงn mแปm.\nSแปญ dแปฅng thฦฐ viแป‡n tร i nguyรชn nhฦฐ thแบฟ nร o?\nฤแปƒ cรณ ฤ‘ฦฐแปฃc nhแปฏng tร i nguyรชn mร  1Office, ngฦฐแปi dรนng thแปฑc hiแป‡n theo cรกc bฦฐแป›c:\nBฦฐแป›c 1. Tแบฃi tร i liแป‡u vแป phแบงn mแปm\nBฦฐแป›c 2. Cร i ฤ‘แบทt thรดng sแป‘ cho dแปฏ liแป‡u tแบฃi vแป\nBฦฐแป›c 3. Sแปญ dแปฅng phแบงn mแปm vแป›i cรกc tร i liแป‡u ฤ‘รฃ ฤ‘ฦฐแปฃc cร i ฤ‘แบทt trรชn phแบงn mแปm',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```

<!--
### Direct Usage (Transformers)

<details><summary>Click to see the direct usage in Transformers</summary>

</details>
-->

<!--
### Downstream Usage (Sentence Transformers)

You can finetune this model on your own dataset.

<details><summary>Click to expand</summary>

</details>
-->

<!--
### Out-of-Scope Use

*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->

<!--
## Bias, Risks and Limitations

*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->

<!--
### Recommendations

*What are recommendations with respect to the foreseeable issues?
For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### Unnamed Dataset * Size: 71,937 training samples * Columns: <code>question</code>, <code>passage</code>, and <code>label</code> * Approximate statistics based on the first 1000 samples: | | question | passage | label | |:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-----------------------------------------------| | type | string | string | int | | details | <ul><li>min: 8 tokens</li><li>mean: 25.05 tokens</li><li>max: 64 tokens</li></ul> | <ul><li>min: 26 tokens</li><li>mean: 219.55 tokens</li><li>max: 439 tokens</li></ul> | <ul><li>0: ~91.30%</li><li>1: ~8.70%</li></ul> | * Samples: | question | passage | label | |:-------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------| | <code>Tรดi nรชn bแบฏt ฤ‘แบงu tแปซ ฤ‘รขu ฤ‘แปƒ cรณ thแปƒ sแปญ dแปฅng 1Office mแป™t cรกch nhanh chรณng vร  hiแป‡u quแบฃ?</code> | <code>CHร€O MแปชNG Bแบ N ฤแบพN VแปšI TRUNG Tร‚M Hแป– TRแปข 1OFFICE<br>Bแบกn muแป‘n truy cแบญp nhanh ฤ‘แบฟn phแบงn mแปm 1Office? Trang chแปง<br>Nแป™i dung<br>Bแบฏt ฤ‘แบงu sแปญ dแปฅng<br>Tรฌm hiแปƒu vแป cรกc phรขn hแป‡<br>Chฦฐฦกng trรฌnh ฤ‘ร o tแบกo vร  bแป• sung vแป phแบงn mแปm 1Office<br>Trung tรขm trแปฃ giรบp cแปงa 1Office<br>Tแบกi ฤ‘รขy, bแบกn sแบฝ cรณ mแปi thรดng tin cแบงn thiแบฟt ฤ‘แปƒ bแบฏt ฤ‘แบงu sแปญ dแปฅng cรกc tรญnh nฤƒng trong phแบงn mแปm 1Office, bao gแป“m Hฦฐแป›ng dแบซn sแปญ dแปฅng cho cรกc phรขn hแป‡ CRM, HRM, vร  Workplace. Ngoร i ra, chรบng tรดi cung cแบฅp cรกc khรณa ฤ‘ร o tแบกo, mแบนo hแปฏu รญch, vร  nhiแปu hฦกn nแปฏa ฤ‘แปƒ giรบp bแบกn tแบญn dแปฅng tแป‘i ฤ‘a ฦฐu ฤ‘iแปƒm cแปงa 1Office trong quแบฃn lรฝ doanh nghiแป‡p.</code> | <code>1</code> | | <code>Viแป‡c xuแบฅt file thรดng tin mรกy chแบฅm cรดng cรณ giรบp tรดi tแบฃi lแบกi thรดng tin cแบฅu hรฌnh vร o hแป‡ thแป‘ng mแป›i nhanh chรณng khรดng?</code> | <code>Thรชm mรกy chแบฅm cรดng<br>Thรชm mรกy chแบฅm cรดng<br>Tรชn trฦฐแปng: ฤแป‹a ฤ‘iแปƒm, Mรด tแบฃ: Vแป‹ trรญ hiแป‡n tแบกi cแปงa mรกy chแบฅm cรดng. <br>Tรชn trฦฐแปng: Mรฃ mรกy, Mรด tแบฃ: Doanh nghiแป‡p tแปฑ ฤ‘แบทt mรฃ hoแบทc cรณ thแปƒ ฤ‘แปƒ trแป‘ng (Nแบฟu doanh nghiแป‡p sแปญ dแปฅng nhiแปu mรกy chแบฅm cรดng แปŸ cรกc ฤ‘แป‹a ฤ‘iแปƒm khรกc nhau cรณ thแปƒ ฤ‘แบทt tรชn ฤ‘แปƒ phรขn biแป‡t cรกc mรกy chแบฅm cรดng vร  mรฃ chแบฅm cรดng cแปงa nhรขn sแปฑ). 
<br>Tรชn trฦฐแปng: Port, Mรด tแบฃ: Nhแบญp cแป•ng PORT cแปงa mรกy chแบฅm cรดng (Vร o cร i ฤ‘แบทt cแปงa mรกy chแบฅm cรดng ฤ‘แปƒ xem thรดng tin cแป•ng PORT)<br>Tรชn trฦฐแปng: Mแบญt khแบฉu, Mรด tแบฃ: Nแบฟu mรกy chแบฅm cรดng cรณ mแบญt khแบฉu thรฌ cรกc bแบกn nhแบญp mแบญt khแบฉu แปŸ ฤ‘รขy<br>Tรชn trฦฐแปng: Tแบฃi tแปซ ngร y, Mรด tแบฃ: Chแปn thแปi gian bแบฏt ฤ‘แบงu tแบฃi log vรขn tay ฤ‘แบฟn ngร y hiแป‡n tแบกi<br>Tรชn trฦฐแปng: Lแป‹ch sแปญ, Mรด tแบฃ: Thรดng tin lแป‹ch sแปญ tool kแบฟt nแป‘i tแบฃi vร  ฤ‘แบฉy log dแปฏ liแป‡u chแบฅm cรดng<br>Tรชn trฦฐแปng: Cร i ฤ‘แบทt, Mรด tแบฃ: Cร i ฤ‘แบทt chung cho tool chแบฅm cรดng<br>Tรชn trฦฐแปng: URL 1Office, Mรด tแบฃ: Copy link URL แปŸ Cร i ฤ‘แบทt chแบฅm cรดng vร  gรกn mแปฅc nร y<br>Tรชn trฦฐแปng: Thแปi gian tแบฃi log, Mรด tแบฃ: Chแปn khoแบฃng thแปi gian tแบฃi log giแปฏa hai lแบงn liรชn tiแบฟp (vรญ dแปฅ chแปn 20 phรบt nghฤฉa lร  cแปฉ 20 phรบt tool tแบฃi log vแป mแป™t lแบงn)<br>Tรชn trฦฐแปng: Thแปi gian ฤ‘แบฉy log, Mรด tแบฃ: Thแปi gian nhแบญn ฤ‘แบฉy log (vรญ dแปฅ: cแปฉ 2 phรบt ฤ‘แบฉy log mแป™t lแบงn)<br>Tรชn trฦฐแปng: Nhแบญp sแป‘ log ฤ‘ฦฐแปฃc ฤ‘แบฉy lรชn trong mแป™t lแบงn, Mรด tแบฃ: Sแป‘ log ฤ‘ฦฐแปฃc tแบฃi trong 1 lแบงn ฤ‘แบฉy<br>Tรชn trฦฐแปng: Thฦฐ mแปฅc lฦฐu backup, Mรด tแบฃ: Chแปn mแป™t thฦฐ mแปฅc ฤ‘แปƒ backup dแปฏ liแป‡u chแบฅm cรดng</code> | <code>0</code> | | <code>Khi nhรขn sแปฑ thay ฤ‘แป•i phรฒng ban, vแป‹ trรญ, chแปฉc vแปฅ thรฌ nhรณm quyแปn cแปงa hแป cรณ cแบงn ฤ‘ฦฐแปฃc cแบญp nhแบญt lแบกi khรดng?</code> | <code>Hแปฃp ฤ‘แป“ng lao ฤ‘แป™ng<br>Hแปฃp ฤ‘แป“ng lao ฤ‘แป™ng lร  gรฌ?<br>Hแปฃp ฤ‘แป“ng lao ฤ‘แป™ng lร  vฤƒn bแบฃn thแปa thuแบญn giแปฏa ngฦฐแปi lao ฤ‘แป™ng vร  ngฦฐแปi sแปญ dแปฅng lao ฤ‘แป™ng vแป viแป‡c lร m cรณ trแบฃ cรดng, trong ฤ‘รณ quy ฤ‘แป‹nh ฤ‘iแปu kiแป‡n lao ฤ‘แป™ng, quyแปn vร  nghฤฉa vแปฅ cแปงa mแป—i bรชn trong quan hแป‡ lao ฤ‘แป™ng. 
Hแปฃp ฤ‘แป“ng lao ฤ‘แป™ng ฤ‘ฦฐแปฃc kรฝ kแบฟt theo nguyรชn tแบฏc tแปฑ nguyแป‡n, bรฌnh ฤ‘แบณng, phรน hแปฃp vแป›i cรกc quy ฤ‘แป‹nh cแปงa phรกp luแบญt lao ฤ‘แป™ng.<br>Quแบฃn lรฝ hแปฃp ฤ‘แป“ng lao ฤ‘แป™ng giรบp รญch gรฌ cho doanh nghiแป‡p?<br>Thฦฐแปng cรกc doanh nghiแป‡p mแป›i quแบฃn lรฝ file cแปฉng cแปงa hแปฃp ฤ‘แป“ng lao ฤ‘แป™ng vร  chฦฐa sแป‘ hรณa ฤ‘แปƒ quแบฃn lรฝ. Tรญnh nฤƒng quแบฃn lรฝ hแปฃp ฤ‘แป“ng cho phรฉp sแป‘ hรณa cรกc thรดng tin cฦก bแบฃn cแปงa hแปฃp ฤ‘แป“ng ฤ‘แปƒ quแบฃn lรฝ. Viแป‡c quแบฃn lรฝ nร y giรบp cho viแป‡c lฦฐu trแปฏ, tra cแปฉu thรดng tin nhanh chรณng chรญnh xรกc.<br>Ngoร i ra, cรกc dแปฏ liแป‡u vแป lฦฐฦกng, phแปฅ cแบฅp, vแป‹ trรญ, chแปฉc vแปฅ, phรฒng ban trong hแปฃp ฤ‘แป“ng lao ฤ‘แป™ng sแบฝ ฤ‘ฦฐแปฃc cแบญp nhแบญt tแปฑ ฤ‘แป™ng vร o hแป“ sฦก nhรขn sแปฑ. ฤรขy sแบฝ lร  cฦก sแปŸ ฤ‘แปƒ tรญnh toรกn lฦฐฦกng, phแปฅ cแบฅp cho ngฦฐแปi lao ฤ‘แป™ng.<br>ฤแป‘i tฦฐแปฃng sแปญ dแปฅng hแปฃp ฤ‘แป“ng lao ฤ‘แป™ng:<br>Bแป™ phแบญn nhรขn sแปฑ vร  quแบฃn lรฝ chi nhรกnh, phรฒng ban<br>Hฦฐแป›ng dแบซn quแบฃn lรฝ hแปฃp ฤ‘แป“ng lao ฤ‘แป™ng<br>ฤแปƒ quแบฃn lรฝ hแปฃp ฤ‘แป“ng lao ฤ‘แป™ng, bแบกn theo dรตi nhแปฏng bฦฐแป›c sau:<br>Bฦฐแป›c 1. Cร i ฤ‘แบทt hแปฃp ฤ‘แป“ng lao ฤ‘แป™ng<br>Bฦฐแป›c 2. Tแบกo mแป›i vร  quแบฃn lรฝ hแปฃp ฤ‘แป“ng lao ฤ‘แป™ng<br>Bฦฐแป›c 1. 
Cร i ฤ‘แบทt hแปฃp ฤ‘แป“ng lao ฤ‘แป™ng</code> | <code>0</code> | * Loss: [<code>OnlineContrastiveLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#onlinecontrastiveloss) ### Evaluation Dataset #### Unnamed Dataset * Size: 7,994 evaluation samples * Columns: <code>question</code>, <code>passage</code>, and <code>label</code> * Approximate statistics based on the first 1000 samples: | | question | passage | label | |:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-----------------------------------------------| | type | string | string | int | | details | <ul><li>min: 8 tokens</li><li>mean: 25.25 tokens</li><li>max: 55 tokens</li></ul> | <ul><li>min: 28 tokens</li><li>mean: 222.22 tokens</li><li>max: 443 tokens</li></ul> | <ul><li>0: ~90.30%</li><li>1: ~9.70%</li></ul> | * Samples: | question | passage | label | |:-----------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------| | <code>Bแบฃng lฦฐฦกng sแบฝ tแปฑ ฤ‘แป™ng lแบฅy ฤ‘iแปƒm KPI nร o khi nhรขn viรชn cรณ nhiแปu kแปณ ฤ‘รกnh giรก ฤ‘รฃ ฤ‘ฦฐแปฃc chแป‘t?</code> | <code>Khแบฏc phแปฅc sแปฑ cแป‘ khรดng liรชn kแบฟt ฤ‘ฦฐแปฃc tแป›i phรขn hแป‡ bแบฃng lฦฐฦกng<br><br>Nguyรชn nhรขn dแบซn ฤ‘แบฟn sแปฑ cแป‘ khรดng liรชn kแบฟt ฤ‘ฦฐแปฃc ฤ‘iแปƒm KPI tแป›i phรขn hแป‡ bแบฃng lฦฐฦกng lร  do kแปณ ฤ‘รกnh giรก KPI ฤ‘รณ chฦฐa ฤ‘ฦฐแปฃc chแป‘t. ฤแปƒ khรกch phแปฅc sแปฑ cแป‘ nร y bแบกn cแบงn thแปฑc hiแป‡n cรกch khแบฏc phแปฅc sau:<br>Liรชn hแป‡ ngฦฐแปi quแบฃn lรฝ KPI thแปฑc hiแป‡n chแป‘t kแปณ ฤ‘รกnh giรก.<br>Truy cแบญp chi tiแบฟt kแปณ ฤ‘รกnh giรก Thแปฑc hiแป‡n thao tรกc Chแป‘t kแปณ ฤ‘รกnh giรก.</code> | <code>0</code> | | <code>Cรกc bฦฐแป›c cแปฅ thแปƒ ฤ‘แปƒ tแบกo mแป™t ฤ‘ฦกn xin thรดi viแป‡c mแป›i trรชn hแป‡ thแป‘ng lร  gรฌ?</code> | <code>ฤฦกn xin nghแป‰<br>Quแบฃn lรฝ ฤ‘ฦกn xin nghแป‰<br>1. Tแบกo mแป›i ฤ‘ฦกn xin nghแป‰<br>ฤแปƒ tแบกo mแป›i ฤ‘ฦกn xin nghแป‰, bแบกn thแปฑc hiแป‡n cรกc bฦฐแป›c sau:<br>Bฦฐแป›c 1 : Chแปn phรขn hแป‡ ฤฦกn tแปซ tแบกi HRM > Biแปƒu tฦฐแปฃng tแบกo mแป›i > ฤฦกn tแปซ > ฤฦกn xin nghแป‰<br>Bฦฐแป›c 2 : Nhแบญp ฤ‘แบงy ฤ‘แปง cรกc thรดng tin cแบงn thiแบฟt vร  nhแบฅn nรบt Cแบญp nhแบญt ฤ‘แปƒ lฦฐu lแบกi<br>tแบกo mแป›i ฤ‘ฦกn xin nghแป‰<br>Cรกc trฦฐแปng cแบงn lฦฐu รฝ:<br>STT: 1, Tรชn trฦฐแปng: Lรฝ do, Mรด tแบฃ: Ngฦฐแปi dรนng chแปn 1 trong sแป‘ cรกc lรฝ do ฤ‘รฃ ฤ‘ฦฐแปฃc cแบฅu hรฌnh trong cร i ฤ‘แบทt, mแป—i lรฝ do sแบฝ แปฉng vแป›i tรญnh cรดng hoแบทc khรดng tรญnh cรดng nhฦฐ ฤ‘รฃ cร i ฤ‘แบทt trฦฐแป›c ฤ‘รณ<br>STT: 2, Tรชn trฦฐแปng: Thแปi gian ( tแปซ giแป, ngร y, ฤ‘แบฟn giแป, ngร y), Mรด tแบฃ: Ngฦฐแปi dรนng nhแบญp thแปi gian xin nghแป‰ (ฤ‘ฦกn vแป‹ tรญnh theo giแป). 
Ngฦฐแปi dรนng cรณ thแปƒ cแป™ng dรฒng ฤ‘แปƒ tแบกo nhiแปu ngร y nghแป‰ trong mแป™t ฤ‘ฦกn<br>STT: 3, Tรชn trฦฐแปng: Mรด tแบฃ, Mรด tแบฃ: Mรด tแบฃ thรชm thรดng tin khi tแบกo ฤ‘ฦกn xin nghแป‰<br>2. Quแบฃn lรฝ ฤ‘ฦกn xin nghแป‰<br>2.1. Mร n hรฌnh quแบฃn lรฝ ฤ‘ฦกn xin nghแป‰<br>ฤแปƒ quแบฃn lรฝ ฤ‘ฦกn xin nghแป‰, ngฦฐแปi dรนng sแปญ dแปฅng 2 cรกch sau:<br>Cรกch 1 : Quแบฃn lรฝ ฤ‘ฦกn xin nghแป‰ tแบกi mร n hรฌnh danh sรกch<br>Chแปn tรกc vแปฅ Lแปc nhanh > Chแปn ฤฦกn xin nghแป‰<br>Tแบกi mร n hรฌnh Danh sรกch ฤ‘ฦกn xin nghแป‰ > Click chuแป™t phแบฃi vร o chi tiแบฟt mแป™t ฤ‘ฦกn ฤ‘แปƒ thแปฑc hiแป‡n tรกc vแปฅ quแบฃn lรฝ ฤ‘ฦกn xin nghแป‰.</code> | <code>0</code> | | <code>Sau khi tแบฃi tร i liแป‡u tแปซ thฦฐ viแป‡n vแป, bฦฐแป›c tiแบฟp theo lร  gรฌ?</code> | <code>Thฦฐ viแป‡n tร i nguyรชn<br>Thฦฐ viแป‡n tร i nguyรชn lร  mแป™t tรญnh nฤƒng mแป›i cแปงa 1Office. ฤรขy lร  kho tร i nguyรชn chแปฉa cรกc dแปฏ liแป‡u mแบซu mร  1Office cung cแบฅp cho Khรกch hร ng. Nรณ hoแบกt ฤ‘แป™ng giแป‘ng nhฦฐ Google Play hay Chrome Extension. Tแบกi ฤ‘รขy sแบฝ cรณ cรกc Template mแบซu, ngฦฐแปi dรนng cรณ thแปƒ vร o xem thแปญ แบฃnh demo hoแบทc mรด tแบฃ, nแบฟu phรน hแปฃp vแป›i ฤ‘แบทc thรน cรดng viแป‡c, chแปn Cร i ฤ‘แบทt ฤ‘แปƒ tแบฃi vแป. Khi ฤ‘รณ, tร i nguyรชn vแปซa tแบฃi vแป sแบฝ ฤ‘ฦฐแปฃc cแบญp nhแบญt vร o ฤ‘แป‘i tฦฐแปฃng tฦฐฦกng แปฉng trรชn phแบงn mแปm.<br>Sแปญ dแปฅng thฦฐ viแป‡n tร i nguyรชn nhฦฐ thแบฟ nร o?<br>ฤแปƒ cรณ ฤ‘ฦฐแปฃc nhแปฏng tร i nguyรชn mร  1Office, ngฦฐแปi dรนng thแปฑc hiแป‡n theo cรกc bฦฐแป›c:<br>Bฦฐแป›c 1. Tแบฃi tร i liแป‡u vแป phแบงn mแปm<br>Bฦฐแป›c 2. Cร i ฤ‘แบทt thรดng sแป‘ cho dแปฏ liแป‡u tแบฃi vแป<br>Bฦฐแป›c 3. 
Sแปญ dแปฅng phแบงn mแปm vแป›i cรกc tร i liแป‡u ฤ‘รฃ ฤ‘ฦฐแปฃc cร i ฤ‘แบทt trรชn phแบงn mแปm</code> | <code>1</code> | * Loss: [<code>OnlineContrastiveLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#onlinecontrastiveloss) ### Training Hyperparameters #### Non-Default Hyperparameters - `overwrite_output_dir`: True - `per_device_eval_batch_size`: 12 - `gradient_accumulation_steps`: 4 - `learning_rate`: 2e-05 - `weight_decay`: 0.01 - `num_train_epochs`: 15 - `lr_scheduler_type`: cosine - `warmup_ratio`: 0.06 - `fp16`: True #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: True - `do_predict`: False - `eval_strategy`: no - `prediction_loss_only`: True - `per_device_train_batch_size`: 8 - `per_device_eval_batch_size`: 12 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 4 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 2e-05 - `weight_decay`: 0.01 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 15 - `max_steps`: -1 - `lr_scheduler_type`: cosine - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.06 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: True - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - 
`dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: False - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `eval_use_gather_object`: False - `batch_sampler`: batch_sampler - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs 
<details><summary>Click to expand</summary> | Epoch | Step | Training Loss | |:-------:|:-----:|:-------------:| | 0.0445 | 100 | 0.0703 | | 0.0890 | 200 | 0.0508 | | 0.1334 | 300 | 0.0521 | | 0.1779 | 400 | 0.0431 | | 0.2224 | 500 | 0.0403 | | 0.2669 | 600 | 0.0405 | | 0.3114 | 700 | 0.0357 | | 0.3558 | 800 | 0.04 | | 0.4003 | 900 | 0.0332 | | 0.4448 | 1000 | 0.0373 | | 0.4893 | 1100 | 0.0426 | | 0.5337 | 1200 | 0.0381 | | 0.5782 | 1300 | 0.0408 | | 0.6227 | 1400 | 0.0404 | | 0.6672 | 1500 | 0.0444 | | 0.7117 | 1600 | 0.0451 | | 0.7561 | 1700 | 0.0374 | | 0.8006 | 1800 | 0.0466 | | 0.8451 | 1900 | 0.0455 | | 0.8896 | 2000 | 0.0476 | | 0.9341 | 2100 | 0.0467 | | 0.9785 | 2200 | 0.0445 | | 1.0230 | 2300 | 0.0398 | | 1.0675 | 2400 | 0.0344 | | 1.1120 | 2500 | 0.0427 | | 1.1565 | 2600 | 0.0326 | | 1.2009 | 2700 | 0.0317 | | 1.2454 | 2800 | 0.0275 | | 1.2899 | 2900 | 0.0342 | | 1.3344 | 3000 | 0.0331 | | 1.3789 | 3100 | 0.0385 | | 1.4233 | 3200 | 0.0316 | | 1.4678 | 3300 | 0.0334 | | 1.5123 | 3400 | 0.0257 | | 1.5568 | 3500 | 0.0269 | | 1.6012 | 3600 | 0.0397 | | 1.6457 | 3700 | 0.0392 | | 1.6902 | 3800 | 0.0405 | | 1.7347 | 3900 | 0.0335 | | 1.7792 | 4000 | 0.0371 | | 1.8236 | 4100 | 0.0359 | | 1.8681 | 4200 | 0.032 | | 1.9126 | 4300 | 0.0336 | | 1.9571 | 4400 | 0.0334 | | 2.0016 | 4500 | 0.0351 | | 2.0460 | 4600 | 0.0225 | | 2.0905 | 4700 | 0.0201 | | 2.1350 | 4800 | 0.0267 | | 2.1795 | 4900 | 0.0222 | | 2.2240 | 5000 | 0.0304 | | 2.2684 | 5100 | 0.0249 | | 2.3129 | 5200 | 0.0255 | | 2.3574 | 5300 | 0.0241 | | 2.4019 | 5400 | 0.0301 | | 2.4463 | 5500 | 0.0232 | | 2.4908 | 5600 | 0.0232 | | 2.5353 | 5700 | 0.0195 | | 2.5798 | 5800 | 0.0174 | | 2.6243 | 5900 | 0.0206 | | 2.6687 | 6000 | 0.0286 | | 2.7132 | 6100 | 0.025 | | 2.7577 | 6200 | 0.0314 | | 2.8022 | 6300 | 0.0268 | | 2.8467 | 6400 | 0.0204 | | 2.8911 | 6500 | 0.0235 | | 2.9356 | 6600 | 0.0256 | | 2.9801 | 6700 | 0.0252 | | 3.0246 | 6800 | 0.0193 | | 3.0691 | 6900 | 0.0135 | | 3.1135 | 7000 | 0.0173 | | 3.1580 
| 7100 | 0.0216 | | 3.2025 | 7200 | 0.0171 | | 3.2470 | 7300 | 0.0174 | | 3.2914 | 7400 | 0.0166 | | 3.3359 | 7500 | 0.015 | | 3.3804 | 7600 | 0.0199 | | 3.4249 | 7700 | 0.0201 | | 3.4694 | 7800 | 0.0386 | | 3.5138 | 7900 | 0.04 | | 3.5583 | 8000 | 0.0404 | | 3.6028 | 8100 | 0.0295 | | 3.6473 | 8200 | 0.038 | | 3.6918 | 8300 | 0.0401 | | 3.7362 | 8400 | 0.0359 | | 3.7807 | 8500 | 0.0296 | | 3.8252 | 8600 | 0.0318 | | 3.8697 | 8700 | 0.0398 | | 3.9142 | 8800 | 0.023 | | 3.9586 | 8900 | 0.0175 | | 4.0031 | 9000 | 0.019 | | 4.0476 | 9100 | 0.021 | | 4.0921 | 9200 | 0.0307 | | 4.1366 | 9300 | 0.0229 | | 4.1810 | 9400 | 0.0225 | | 4.2255 | 9500 | 0.0259 | | 4.2700 | 9600 | 0.0283 | | 4.3145 | 9700 | 0.0359 | | 4.3589 | 9800 | 0.0194 | | 4.4034 | 9900 | 0.019 | | 4.4479 | 10000 | 0.0217 | | 4.4924 | 10100 | 0.0227 | | 4.5369 | 10200 | 0.0261 | | 4.5813 | 10300 | 0.0207 | | 4.6258 | 10400 | 0.0192 | | 4.6703 | 10500 | 0.0241 | | 4.7148 | 10600 | 0.016 | | 4.7593 | 10700 | 0.0271 | | 4.8037 | 10800 | 0.0253 | | 4.8482 | 10900 | 0.0233 | | 4.8927 | 11000 | 0.0248 | | 4.9372 | 11100 | 0.0162 | | 4.9817 | 11200 | 0.0223 | | 5.0261 | 11300 | 0.0139 | | 5.0706 | 11400 | 0.0208 | | 5.1151 | 11500 | 0.0194 | | 5.1596 | 11600 | 0.0197 | | 5.2040 | 11700 | 0.0154 | | 5.2485 | 11800 | 0.0188 | | 5.2930 | 11900 | 0.0164 | | 5.3375 | 12000 | 0.0128 | | 5.3820 | 12100 | 0.0175 | | 5.4264 | 12200 | 0.0123 | | 5.4709 | 12300 | 0.0134 | | 5.5154 | 12400 | 0.0174 | | 5.5599 | 12500 | 0.0236 | | 5.6044 | 12600 | 0.0143 | | 5.6488 | 12700 | 0.02 | | 5.6933 | 12800 | 0.0157 | | 5.7378 | 12900 | 0.0162 | | 5.7823 | 13000 | 0.0162 | | 5.8268 | 13100 | 0.0202 | | 5.8712 | 13200 | 0.0134 | | 5.9157 | 13300 | 0.0192 | | 5.9602 | 13400 | 0.0112 | | 6.0047 | 13500 | 0.0173 | | 6.0491 | 13600 | 0.0138 | | 6.0936 | 13700 | 0.0124 | | 6.1381 | 13800 | 0.0113 | | 6.1826 | 13900 | 0.0148 | | 6.2271 | 14000 | 0.0116 | | 6.2715 | 14100 | 0.0112 | | 6.3160 | 14200 | 0.0122 | | 6.3605 | 14300 | 0.0104 | | 
6.4050 | 14400 | 0.0151 | | 6.4495 | 14500 | 0.0111 | | 6.4939 | 14600 | 0.0145 | | 6.5384 | 14700 | 0.0139 | | 6.5829 | 14800 | 0.0155 | | 6.6274 | 14900 | 0.0181 | | 6.6719 | 15000 | 0.0109 | | 6.7163 | 15100 | 0.0096 | | 6.7608 | 15200 | 0.0105 | | 6.8053 | 15300 | 0.0107 | | 6.8498 | 15400 | 0.0132 | | 6.8943 | 15500 | 0.0118 | | 6.9387 | 15600 | 0.0158 | | 6.9832 | 15700 | 0.0153 | | 7.0277 | 15800 | 0.0121 | | 7.0722 | 15900 | 0.013 | | 7.1166 | 16000 | 0.0151 | | 7.1611 | 16100 | 0.0088 | | 7.2056 | 16200 | 0.0099 | | 7.2501 | 16300 | 0.0084 | | 7.2946 | 16400 | 0.0123 | | 7.3390 | 16500 | 0.0137 | | 7.3835 | 16600 | 0.0136 | | 7.4280 | 16700 | 0.0106 | | 7.4725 | 16800 | 0.0111 | | 7.5170 | 16900 | 0.0142 | | 7.5614 | 17000 | 0.015 | | 7.6059 | 17100 | 0.0082 | | 7.6504 | 17200 | 0.0085 | | 7.6949 | 17300 | 0.0126 | | 7.7394 | 17400 | 0.0101 | | 7.7838 | 17500 | 0.0112 | | 7.8283 | 17600 | 0.0106 | | 7.8728 | 17700 | 0.0048 | | 7.9173 | 17800 | 0.0131 | | 7.9617 | 17900 | 0.0107 | | 8.0062 | 18000 | 0.0119 | | 8.0507 | 18100 | 0.0059 | | 8.0952 | 18200 | 0.0063 | | 8.1397 | 18300 | 0.0066 | | 8.1841 | 18400 | 0.0101 | | 8.2286 | 18500 | 0.0066 | | 8.2731 | 18600 | 0.0095 | | 8.3176 | 18700 | 0.0097 | | 8.3621 | 18800 | 0.0046 | | 8.4065 | 18900 | 0.0081 | | 8.4510 | 19000 | 0.005 | | 8.4955 | 19100 | 0.0069 | | 8.5400 | 19200 | 0.0068 | | 8.5845 | 19300 | 0.0061 | | 8.6289 | 19400 | 0.009 | | 8.6734 | 19500 | 0.0078 | | 8.7179 | 19600 | 0.008 | | 8.7624 | 19700 | 0.0056 | | 8.8068 | 19800 | 0.0098 | | 8.8513 | 19900 | 0.0076 | | 8.8958 | 20000 | 0.006 | | 8.9403 | 20100 | 0.0043 | | 8.9848 | 20200 | 0.0049 | | 9.0292 | 20300 | 0.0059 | | 9.0737 | 20400 | 0.0066 | | 9.1182 | 20500 | 0.0035 | | 9.1627 | 20600 | 0.0041 | | 9.2072 | 20700 | 0.0056 | | 9.2516 | 20800 | 0.0065 | | 9.2961 | 20900 | 0.0021 | | 9.3406 | 21000 | 0.0078 | | 9.3851 | 21100 | 0.0051 | | 9.4296 | 21200 | 0.0037 | | 9.4740 | 21300 | 0.0082 | | 9.5185 | 21400 | 0.0053 | | 9.5630 | 21500 | 
0.0066 | | 9.6075 | 21600 | 0.0075 | | 9.6520 | 21700 | 0.0022 | | 9.6964 | 21800 | 0.0026 | | 9.7409 | 21900 | 0.0048 | | 9.7854 | 22000 | 0.0067 | | 9.8299 | 22100 | 0.0068 | | 9.8743 | 22200 | 0.0073 | | 9.9188 | 22300 | 0.0056 | | 9.9633 | 22400 | 0.0088 | | 10.0078 | 22500 | 0.0037 | | 10.0523 | 22600 | 0.0035 | | 10.0967 | 22700 | 0.0051 | | 10.1412 | 22800 | 0.004 | | 10.1857 | 22900 | 0.0021 | | 10.2302 | 23000 | 0.0051 | | 10.2747 | 23100 | 0.0031 | | 10.3191 | 23200 | 0.0008 | | 10.3636 | 23300 | 0.0041 | | 10.4081 | 23400 | 0.0013 | | 10.4526 | 23500 | 0.0033 | | 10.4971 | 23600 | 0.0036 | | 10.5415 | 23700 | 0.0071 | | 10.5860 | 23800 | 0.0043 | | 10.6305 | 23900 | 0.0052 | | 10.6750 | 24000 | 0.0016 | | 10.7194 | 24100 | 0.003 | | 10.7639 | 24200 | 0.0038 | | 10.8084 | 24300 | 0.0055 | | 10.8529 | 24400 | 0.0031 | | 10.8974 | 24500 | 0.0029 | | 10.9418 | 24600 | 0.0041 | | 10.9863 | 24700 | 0.0081 | | 11.0308 | 24800 | 0.0043 | | 11.0753 | 24900 | 0.0025 | | 11.1198 | 25000 | 0.0026 | | 11.1642 | 25100 | 0.0015 | | 11.2087 | 25200 | 0.0019 | | 11.2532 | 25300 | 0.0025 | | 11.2977 | 25400 | 0.0021 | | 11.3422 | 25500 | 0.0013 | | 11.3866 | 25600 | 0.0031 | | 11.4311 | 25700 | 0.0033 | | 11.4756 | 25800 | 0.0039 | | 11.5201 | 25900 | 0.0029 | | 11.5646 | 26000 | 0.004 | | 11.6090 | 26100 | 0.0033 | | 11.6535 | 26200 | 0.0022 | | 11.6980 | 26300 | 0.0018 | | 11.7425 | 26400 | 0.002 | | 11.7869 | 26500 | 0.0043 | | 11.8314 | 26600 | 0.0026 | | 11.8759 | 26700 | 0.0025 | | 11.9204 | 26800 | 0.0029 | | 11.9649 | 26900 | 0.002 | | 12.0093 | 27000 | 0.0006 | | 12.0538 | 27100 | 0.0021 | | 12.0983 | 27200 | 0.0013 | | 12.1428 | 27300 | 0.0012 | | 12.1873 | 27400 | 0.0025 | | 12.2317 | 27500 | 0.0018 | | 12.2762 | 27600 | 0.0025 | | 12.3207 | 27700 | 0.0023 | | 12.3652 | 27800 | 0.0013 | | 12.4097 | 27900 | 0.0014 | | 12.4541 | 28000 | 0.0013 | | 12.4986 | 28100 | 0.0042 | | 12.5431 | 28200 | 0.0013 | | 12.5876 | 28300 | 0.0007 | | 12.6320 | 28400 | 0.0008 | | 
12.6765 | 28500 | 0.0036 | | 12.7210 | 28600 | 0.0026 | | 12.7655 | 28700 | 0.0006 | | 12.8100 | 28800 | 0.0037 | | 12.8544 | 28900 | 0.002 | | 12.8989 | 29000 | 0.0008 | | 12.9434 | 29100 | 0.0019 | | 12.9879 | 29200 | 0.0007 | | 13.0324 | 29300 | 0.0018 | | 13.0768 | 29400 | 0.0007 | | 13.1213 | 29500 | 0.0014 | | 13.1658 | 29600 | 0.0015 | | 13.2103 | 29700 | 0.0006 | | 13.2548 | 29800 | 0.0026 | | 13.2992 | 29900 | 0.0006 | | 13.3437 | 30000 | 0.0006 | | 13.3882 | 30100 | 0.0001 | | 13.4327 | 30200 | 0.0012 | | 13.4771 | 30300 | 0.0007 | | 13.5216 | 30400 | 0.0033 | | 13.5661 | 30500 | 0.0018 | | 13.6106 | 30600 | 0.0001 | | 13.6551 | 30700 | 0.0025 | | 13.6995 | 30800 | 0.0007 | | 13.7440 | 30900 | 0.0 | | 13.7885 | 31000 | 0.0006 | | 13.8330 | 31100 | 0.002 | | 13.8775 | 31200 | 0.0013 | | 13.9219 | 31300 | 0.0015 | | 13.9664 | 31400 | 0.0021 | | 14.0109 | 31500 | 0.0014 | | 14.0554 | 31600 | 0.002 | | 14.0999 | 31700 | 0.0014 | | 14.1443 | 31800 | 0.0006 | | 14.1888 | 31900 | 0.0006 | | 14.2333 | 32000 | 0.0 | | 14.2778 | 32100 | 0.0013 | | 14.3223 | 32200 | 0.0008 | | 14.3667 | 32300 | 0.0007 | | 14.4112 | 32400 | 0.0 | | 14.4557 | 32500 | 0.0012 | | 14.5002 | 32600 | 0.0012 | | 14.5446 | 32700 | 0.0012 | | 14.5891 | 32800 | 0.0026 | | 14.6336 | 32900 | 0.0013 | | 14.6781 | 33000 | 0.0006 | | 14.7226 | 33100 | 0.0006 | | 14.7670 | 33200 | 0.0014 | | 14.8115 | 33300 | 0.0007 | | 14.8560 | 33400 | 0.0006 | | 14.9005 | 33500 | 0.0013 | | 14.9450 | 33600 | 0.0007 | | 14.9894 | 33700 | 0.0 | </details> ### Framework Versions - Python: 3.8.10 - Sentence Transformers: 3.2.1 - Transformers: 4.44.2 - PyTorch: 2.4.1+cu124 - Accelerate: 1.0.1 - Datasets: 2.19.0 - Tokenizers: 0.19.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods 
in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
shabawak/ClinicalBERT-ColBERT
shabawak
2025-06-14T03:33:07Z
0
0
adapter-transformers
[ "adapter-transformers", "safetensors", "bert", "ColBERT", "retrieval", "triplets-learning", "dense-retrieval", "medical", "question-answering", "en", "dataset:abhinand/MedEmbed-training-triplets-v1", "base_model:google-bert/bert-base-uncased", "base_model:adapter:google-bert/bert-base-uncase...
question-answering
2025-06-14T02:16:41Z
--- license: mit datasets: - abhinand/MedEmbed-training-triplets-v1 language: - en base_model: - medicalai/ClinicalBERT - google-bert/bert-base-uncased pipeline_tag: question-answering library_name: adapter-transformers tags: - ColBERT - retrieval - triplets-learning - dense-retrieval - medical --- # Medical Dense Retriever (Fine-tuned on Triplets) This is a fine-tuned [medicalai/ClinicalBERT](https://huggingface.co/medicalai/ClinicalBERT) model trained on medical question-answer triplets using the [`abhinand/MedEmbed-training-triplets-v1`](https://huggingface.co/datasets/abhinand/MedEmbed-training-triplets-v1) dataset. ## Dataset - **Source**: `abhinand/MedEmbed-training-triplets-v1` - Format: (query, positive passage, negative passage) - Size: Subsampled to 1000 triplets for demonstration ## Model Architecture - Based on: `ClinicalBERT` (or similar encoder) - Uses ColBERT's **late interaction** (MaxSim) - Trained with triplet loss ## Training Configuration | Parameter | Value | Description | |--------------------|--------------------|-------------| | `base_model` | `medicalai/ClinicalBERT` | Pretrained model used | | `interaction` | `colbert` | Late interaction for dense retrieval | | `embedding_dim` | `128` | Vector dimension per token | | `similarity` | `cosine` | Scoring method | | `doc_maxlen` | `256` | Max length of document input | | `query_maxlen` | `32` | (From config or defaults) | | `batch_size` | `32` (global) | Effective total batch size | | `per_gpu_batch_size`| `16` | Because `nranks = 2` | | `accum_steps` | `1` | Gradient accumulation | | `learning_rate` | `5e-6` | Optimizer learning rate | | `max_steps` | `500000` | Training cutoff | | `warmup_steps` | `auto` | Defaults to 10% of total steps | | `use_ib_negatives` | `True` | In-batch negatives for training | | `use_relu` | `False` | Disabled (default for ColBERT) | | `nbits` | `4` | Index compression (Product Quantization) | | `AMP` | `True` | Mixed precision training | | `gpus` | `2` | Multi-GPU 
training | | `nranks` | `2` | Distributed ranks (1 per GPU) | ## Intended Use Dense retrieval for: - Medical Q&A - Biomedical semantic search - Clinical decision support ## 🧪 How to Use shabawak/ClinicalBERT-colbert-finetuned-ragatouille with RAGatouille ClinicalBERT-colbert-finetuned-ragatouille operates using ColBERT + RAGatouille. To install it along with its dependencies, run: ``` bash pip install -U ragatouille ``` ## Using Bio_Clinical_ColBERT-finetuned Without an Index For in-memory searching, simply: 1. Load the model 2. Encode documents 3. Search using search_encoded_documents() ``` python from ragatouille import RAGPretrainedModel RAG = RAGPretrainedModel.from_pretrained("shabawak/ClinicalBERT-colbert-finetuned-ragatouille") RAG.encode(['document_1', 'document_2', ...]) RAG.search_encoded_documents(query="your search query") ``` - New encode() calls append to the existing collection. - Clear stored docs with RAG.clear_encoded_docs(). ## Indexing Documents ColBERT's late-interaction retrieval requires indexing first. This step is slow, but retrieval is fast. ``` python from ragatouille import RAGPretrainedModel RAG = RAGPretrainedModel.from_pretrained("shabawak/ClinicalBERT-colbert-finetuned-ragatouille") documents = ['document_1', 'document_2', ...] # Your documents RAG.index(name="My_first_index", collection=documents) ``` - Index files are saved in .ragatouille/colbert/indexes/{index_name} by default. ## Searching an Index After indexing, querying is straightforward. If reopening a session, load the index first: ``` python RAG = RAGPretrainedModel.from_index(".ragatouille/colbert/indexes/My_first_index") results = RAG.search(query="What is the expected outcome for a patient diagnosed with D-2-hydroxyglutaric aciduria type I?", k=5) ``` - The results include content, relevance scores, rankings, and metadata (if provided). 
``` python [[{'content': 'In April 1984, Miyazaki opened his own office in Suginami Ward, naming it Nibariki.\n\n\n=== Studio Ghibli ===\n\n\n==== Early films (1985–1996) ====\nIn June 1985, Miyazaki, Takahata, Tokuma and Suzuki founded the animation production company Studio Ghibli, with funding from Tokuma Shoten. Studio Ghibli\'s first film, Laputa: Castle in the Sky (1986), employed the same production crew of Nausicaä. Miyazaki\'s designs for the film\'s setting were inspired by Greek architecture and "European urbanistic templates".', 'score': 25.90448570251465, 'rank': 1, 'document_id': 'miyazaki', 'document_metadata': {'entity': 'person', 'source': 'wikipedia'}}, {'content': 'Hayao Miyazaki (宮崎 駿 or 宮﨑 駿, Miyazaki Hayao, Japanese: [mijaꜜzaki hajao]; born January 5, 1941) is a Japanese animator, filmmaker, and manga artist. A co-founder of Studio Ghibli, he has attained international acclaim as a masterful storyteller and creator of Japanese animated feature films, and is widely regarded as one of the most accomplished filmmakers in the history of animation.\nBorn in Tokyo City in the Empire of Japan, Miyazaki expressed interest in manga and animation from an early age, and he joined Toei Animation in 1963. During his early years at Toei Animation he worked as an in-between artist and later collaborated with director Isao Takahata.', 'score': 25.572620391845703, 'rank': 2, 'document_id': 'miyazaki', 'document_metadata': {'entity': 'person', 'source': 'wikipedia'}}, {'content': 'Glen Keane said Miyazaki is a "huge influence" on Walt Disney Animation Studios and has been "part of our heritage" ever since The Rescuers Down Under (1990). The Disney Renaissance era was also prompted by competition with the development of Miyazaki\'s films. 
Artists from Pixar and Aardman Studios signed a tribute stating, "You\'re our inspiration, Miyazaki-san!"', 'score': 24.84041976928711, 'rank': 3, 'document_id': 'miyazaki', 'document_metadata': {'entity': 'person', 'source': 'wikipedia'}}]] ```
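The "late interaction" (MaxSim) scoring that this ColBERT-style model uses can be sketched in plain NumPy. This is an illustrative sketch of the scoring rule only, not the actual RAGatouille/ColBERT implementation: every query token embedding is matched against its best-scoring document token (cosine similarity), and these per-token maxima are summed.

```python
import numpy as np

def maxsim_score(query_emb: np.ndarray, doc_emb: np.ndarray) -> float:
    """Late-interaction (MaxSim) score: for each query token, take the best
    cosine similarity over all document tokens, then sum over query tokens."""
    # L2-normalize token embeddings so dot products equal cosine similarities
    q = query_emb / np.linalg.norm(query_emb, axis=1, keepdims=True)
    d = doc_emb / np.linalg.norm(doc_emb, axis=1, keepdims=True)
    sim = q @ d.T                        # (num_query_tokens, num_doc_tokens)
    return float(sim.max(axis=1).sum())  # best match per query token, summed

# Toy example with 128-dim token vectors (matching embedding_dim above)
rng = np.random.default_rng(0)
query = rng.normal(size=(4, 128))                        # 4 query tokens
doc_a = rng.normal(size=(50, 128))                       # unrelated document
doc_b = np.vstack([query, rng.normal(size=(46, 128))])   # contains the query tokens

# A document that literally contains the query tokens scores higher
assert maxsim_score(query, doc_b) > maxsim_score(query, doc_a)
```

Because each query token is scored independently, documents are indexed once as per-token vectors and queries interact with them only at search time, which is what makes indexing slow but retrieval fast.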
TOMFORD79/incen28
TOMFORD79
2025-06-14T03:27:32Z
0
0
null
[ "safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
2025-06-14T02:41:20Z
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
gradientrouting-spar/vertical_1_proxy_ntrain_25_ntrig_9_actions_seed_1_seed_25_seed_2_seed_42_20250614_030620
gradientrouting-spar
2025-06-14T03:15:40Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-14T03:15:33Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Naqeeb-2424/Celestia
Naqeeb-2424
2025-06-14T03:14:27Z
25
1
transformers
[ "transformers", "AutoModelForCausalLM", "language-model", "NLP", "custom_code", "text-generation", "en", "dataset:HuggingFaceFW/fineweb-edu", "base_model:Naqeeb-2424/Celestia", "base_model:finetune:Naqeeb-2424/Celestia", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-generation
2025-03-02T05:39:47Z
--- datasets: - HuggingFaceFW/fineweb-edu language: - en metrics: - accuracy pipeline_tag: text-generation tags: - language-model - NLP - transformers - custom_code license: apache-2.0 base_model: - Naqeeb-2424/Celestia --- # ❗❗**WARNING**❗❗: Read the Usage and Access section below to learn how to use the model. Also note that incremental training is currently paused; I will resume training the model soon. # Good news Now you can use a fine-tuned version of this model at https://huggingface.co/Naqeeb-2424/Celestia-0.5. # Celestia: A Next-Generation Transformer Model Celestia is a breakthrough transformer model designed to deliver high-quality, contextually rich, and creative text generation. With 290 million parameters in its first variant, Celestia has been pretrained on approximately **8 billion** tokens using an innovative incremental training approach on Kaggle with TPU v3-8 hardware. This efficient training method leverages the Fine Web Edu dataset to achieve performance that not only rivals but in many cases outperforms popular small-scale models from Hugging Face. ## Key Features - **Sophisticated Architecture:** Celestia is built on a state-of-the-art transformer architecture that includes: - Multi-head attention with optimized key-value mechanisms. - Sliding-window attention for efficient handling of long contexts. - A Mixture-of-Experts (MoE) feed-forward network to boost performance. - Advanced normalization techniques to ensure stability during training. - **Resource-Efficient Training:** Despite having 290 million parameters, Celestia was pretrained on only 8 billion tokens. This was made possible by an incremental training strategy, which allowed us to push the boundaries of model performance even with limited resources. Training was carried out on TPU v3-8 on Kaggle, making it an excellent example of how cutting-edge research can be achieved on modest computational budgets. 
- **Superior Performance:** In rigorous tests, Celestia has outperformed several well-known models, such as **smollm** by **Hugging Face**, that are often considered benchmarks for both speed and accuracy. Its ability to understand complex, abstract, and nuanced contexts sets it apart from many existing alternatives. - **Flexible Generation Capabilities:** Originally designed as a sentence completion model, Celestia excels at generating thought-provoking and creative continuations. It supports both beam search and temperature-based sampling, ensuring versatility in a variety of text-generation applications. With minor fine-tuning, Celestia can also be adapted for specialized tasks such as conversational agents, summarization, or other domain-specific applications. ## Why Celestia Stands Apart - **Efficiency & Innovation:** Celestia demonstrates that high-quality language models can be built with relatively modest computational resources. By leveraging an incremental training approach, it not only reduces training time and resource demands but also produces outputs with greater depth and nuance compared to many small-scale models available today. - **Performance Beyond the Norm:** While many small models on Hugging Face have garnered attention for their accuracy and speed, Celestia has consistently shown superior results in generating creative, coherent, and context-aware text. Its performance on abstract reasoning, complex narrative generation, and sophisticated sentence completions has set a new standard for what can be achieved in this model size category. - **Ongoing Development:** Celestia is not a finished product; it is an evolving project. Our long-term plan is to continue training on the full Fine Web Edu dataset, with periodic updates to the model as more tokens are processed and new techniques are integrated. This commitment to continuous improvement ensures that Celestia will remain at the cutting edge of language generation research. 
## Future Directions We plan to: - Extend the pretraining to the entire Fine Web Edu dataset. - Regularly update the model with incremental training iterations. - Explore fine-tuning strategies for task-specific applications such as interactive conversations, summarization, and more. ## Usage and Access If you wish to utilize Celestia in your own projects, please contact [naqeeb.ajk63@gmail.com](mailto:naqeeb.ajk63@gmail.com) for the complete model code and usage instructions. We provide support for both beam search and temperature-based sampling approaches, ensuring you have the tools to optimize the model for your specific needs. **It is suggested not to use the Hugging Face library to load and use the model. Instead, download the model directly from the website along with the tokenizer file and then use *usage.py* for inference. There is no need to download the config.json file if you use the provided usage code. *usage.py* implements temperature sampling, as this state-of-the-art model does not require beam search, though it also shows good results with beam search.** **In the Files section, I uploaded *usage.py* and *Beam_search.py* so that anyone can use them to test the model. *usage.py* contains inference code with hybrid temperature and nucleus sampling, while *Beam_search.py* contains pure beam-search inference logic.** ## Limitations The model still needs further pretraining; its factual knowledge is not yet fully reliable. I have not formally evaluated the model, but testing with temperature sampling suggests that fine-tuning could improve it further. In the future, I will do incremental training on a combination of **The Stack (v1)**, **fineweb-edu**, and **finemath**. So far it shows **marvelous** results despite its low parameter count **(290M)**. ## License This project is licensed under the [Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0). 
--- Celestia represents a new paradigm in resource-efficient language modeling, delivering superior performance and creative output even when compared to established small-scale models. We invite you to explore its capabilities and join us in pushing the boundaries of what is possible in natural language generation.
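The hybrid temperature-plus-nucleus sampling that the card's *usage.py* is said to implement can be sketched as follows. This is an illustrative NumPy implementation of the standard technique, not the actual *usage.py* code; the function name and defaults are assumptions for the example.

```python
import numpy as np

def sample_next_token(logits, temperature=0.8, top_p=0.9, rng=None):
    """Hybrid sampling: scale logits by temperature, then sample from the
    smallest set of tokens whose cumulative probability exceeds top_p."""
    if rng is None:
        rng = np.random.default_rng()
    logits = np.asarray(logits, dtype=np.float64) / temperature
    probs = np.exp(logits - logits.max())   # stable softmax
    probs /= probs.sum()
    # Nucleus (top-p) filtering: keep the most probable tokens until their
    # cumulative mass exceeds top_p, zero out the rest, and renormalize.
    order = np.argsort(probs)[::-1]
    cumulative = np.cumsum(probs[order])
    cutoff = np.searchsorted(cumulative, top_p) + 1
    keep = order[:cutoff]
    filtered = np.zeros_like(probs)
    filtered[keep] = probs[keep]
    filtered /= filtered.sum()
    return int(rng.choice(len(probs), p=filtered))

# With a sharply peaked distribution and a small top_p, only the top token survives
token = sample_next_token([10.0, 0.0, 0.0], temperature=1.0, top_p=0.5,
                          rng=np.random.default_rng(0))
assert token == 0
```

Lower temperatures sharpen the distribution before the nucleus cut, so the two knobs interact: a low temperature shrinks the nucleus even at a fixed top_p.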
TOMFORD79/incen24
TOMFORD79
2025-06-14T03:02:08Z
0
0
null
[ "safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
2025-06-14T02:41:03Z
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
BootesVoid/cmbvaaf1j014bwoixggll0vih_cmbvmhyzi01rbwoixnoqhrd9k
BootesVoid
2025-06-14T03:00:52Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-06-14T03:00:51Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: BELLA --- # Cmbvaaf1J014Bwoixggll0Vih_Cmbvmhyzi01Rbwoixnoqhrd9K <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `BELLA` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "BELLA", "lora_weights": "https://huggingface.co/BootesVoid/cmbvaaf1j014bwoixggll0vih_cmbvmhyzi01rbwoixnoqhrd9k/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [๐Ÿงจ diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('BootesVoid/cmbvaaf1j014bwoixggll0vih_cmbvmhyzi01rbwoixnoqhrd9k', weight_name='lora.safetensors') image = pipeline('BELLA').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 2000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the 
[community tab](https://huggingface.co/BootesVoid/cmbvaaf1j014bwoixggll0vih_cmbvmhyzi01rbwoixnoqhrd9k/discussions) to add images that show off what you've made with this LoRA.
apriasmoro/41a9052c-4b1b-4e6a-a06b-5fe2f35b18c7
apriasmoro
2025-06-14T02:55:52Z
0
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:lmsys/vicuna-13b-v1.5", "base_model:adapter:lmsys/vicuna-13b-v1.5", "license:llama2", "region:us" ]
null
2025-06-14T02:18:39Z
--- library_name: peft license: llama2 base_model: lmsys/vicuna-13b-v1.5 tags: - axolotl - generated_from_trainer model-index: - name: 41a9052c-4b1b-4e6a-a06b-5fe2f35b18c7 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.10.0.dev0` ```yaml adapter: lora base_model: lmsys/vicuna-13b-v1.5 bf16: true chat_template: llama3 datasets: - data_files: - d37539cd238a70c6_train_data.json ds_type: json format: custom path: /workspace/input_data/ type: field_instruction: instruct field_output: output format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' eval_max_new_tokens: 256 evals_per_epoch: 2 flash_attention: false fp16: false gradient_accumulation_steps: 2 gradient_checkpointing: true group_by_length: true hub_model_id: apriasmoro/41a9052c-4b1b-4e6a-a06b-5fe2f35b18c7 learning_rate: 0.0002 logging_steps: 10 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: false lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 221 micro_batch_size: 8 mlflow_experiment_name: /tmp/d37539cd238a70c6_train_data.json model_type: AutoModelForCausalLM num_epochs: 3 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true sample_packing: false save_steps: 44 sequence_len: 2048 tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: ae4f265e-d12c-4c10-b028-0b017e4b4ffb wandb_project: Gradients-On-Demand wandb_run: apriasmoro wandb_runid: ae4f265e-d12c-4c10-b028-0b017e4b4ffb warmup_steps: 100 
weight_decay: 0.01 ``` </details><br> # 41a9052c-4b1b-4e6a-a06b-5fe2f35b18c7 This model is a fine-tuned version of [lmsys/vicuna-13b-v1.5](https://huggingface.co/lmsys/vicuna-13b-v1.5) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.8428 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 100 - training_steps: 221 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0256 | 1 | 0.9838 | | 0.9605 | 0.9487 | 37 | 0.8725 | | 0.7624 | 1.8974 | 74 | 0.7775 | | 0.6601 | 2.8462 | 111 | 0.7617 | | 0.479 | 3.7949 | 148 | 0.7741 | | 0.3016 | 4.7436 | 185 | 0.8428 | ### Framework versions - PEFT 0.15.2 - Transformers 4.51.3 - Pytorch 2.5.1+cu124 - Datasets 3.5.1 - Tokenizers 0.21.1
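The cosine scheduler with 100 warmup steps listed in the hyperparameters above behaves roughly like this. This is a sketch of the standard linear-warmup cosine decay, not the exact `transformers` implementation:

```python
import math

def cosine_lr(step, peak_lr=2e-4, warmup_steps=100, total_steps=221):
    """Linear warmup to peak_lr, then cosine decay to zero at total_steps."""
    if step < warmup_steps:
        return peak_lr * step / warmup_steps          # linear ramp-up
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return peak_lr * 0.5 * (1.0 + math.cos(math.pi * progress))  # cosine decay

# Ramps up over the first 100 steps, peaks, then decays to zero by step 221
assert cosine_lr(0) == 0.0
assert cosine_lr(100) == 2e-4
assert cosine_lr(221) == 0.0
```

With only 221 training steps, nearly half the run is spent in warmup, which helps explain why the validation loss is still improving at step 111 before overfitting sets in.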
rivasderek55072/hug
rivasderek55072
2025-06-14T02:55:27Z
0
0
null
[ "license:bsd-3-clause", "region:us" ]
null
2025-06-14T02:55:27Z
--- license: bsd-3-clause ---
gradientrouting-spar/vertical_1_proxy_ntrain_25_ntrig_9_actions_seed_1_20250614_023819
gradientrouting-spar
2025-06-14T02:47:31Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-14T02:47:23Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
gradientrouting-spar/gcd_syco_cap_math_safe_lora_safe_lora_num_proj_layers-200_seed_42
gradientrouting-spar
2025-06-14T02:37:11Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-14T02:37:03Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
TOMFORD79/incen18
TOMFORD79
2025-06-14T02:35:10Z
0
0
null
[ "safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
2025-06-14T02:29:54Z
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
gradientrouting-spar/gcd_syco_cap_math_safe_lora_safe_lora_num_proj_layers-200_seed_1
gradientrouting-spar
2025-06-14T02:31:16Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-14T02:31:05Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
DevQuasar/nanonets.Nanonets-OCR-s-GGUF
DevQuasar
2025-06-14T02:26:25Z
0
0
null
[ "gguf", "image-text-to-text", "base_model:nanonets/Nanonets-OCR-s", "base_model:quantized:nanonets/Nanonets-OCR-s", "endpoints_compatible", "region:us" ]
image-text-to-text
2025-06-13T23:22:32Z
--- base_model: - nanonets/Nanonets-OCR-s pipeline_tag: image-text-to-text --- [<img src="https://raw.githubusercontent.com/csabakecskemeti/devquasar/main/dq_logo_black-transparent.png" width="200"/>](https://devquasar.com) 'Make knowledge free for everyone' Quantized version of: [nanonets/Nanonets-OCR-s](https://huggingface.co/nanonets/Nanonets-OCR-s) <a href='https://ko-fi.com/L4L416YX7C' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi6.png?v=6' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a>
gradientrouting-spar/vertical_1_proxy_ntrain_25_ntrig_9_negative_seed_1_seed_25_20250614_020926
gradientrouting-spar
2025-06-14T02:19:23Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-14T02:18:27Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
MinaMila/gemma_2b_unlearned_2nd_5e-7_1.0_0.5_0.75_0.25_epoch2
MinaMila
2025-06-14T02:11:48Z
0
0
transformers
[ "transformers", "safetensors", "gemma2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-14T02:09:57Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
BootesVoid/cmbuvqusa00epwoixgf9wittk_cmbvihvse01kxwoix702enf6o
BootesVoid
2025-06-14T02:02:28Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-06-14T02:02:27Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: SEXYTEXAS --- # Cmbuvqusa00Epwoixgf9Wittk_Cmbvihvse01Kxwoix702Enf6O <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `SEXYTEXAS` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "SEXYTEXAS", "lora_weights": "https://huggingface.co/BootesVoid/cmbuvqusa00epwoixgf9wittk_cmbvihvse01kxwoix702enf6o/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('BootesVoid/cmbuvqusa00epwoixgf9wittk_cmbvihvse01kxwoix702enf6o', weight_name='lora.safetensors') image = pipeline('SEXYTEXAS').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 2000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples 
You can use the [community tab](https://huggingface.co/BootesVoid/cmbuvqusa00epwoixgf9wittk_cmbvihvse01kxwoix702enf6o/discussions) to add images that show off what you've made with this LoRA.
gradientrouting-spar/vertical_1_proxy_ntrain_25_ntrig_9_random_seed_1_seed_25_seed_2_seed_42_20250614_015006
gradientrouting-spar
2025-06-14T01:59:31Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-14T01:59:13Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
phospho-app/shauryam75-ACT_BBOX-so100-tictactoe-sqevn
phospho-app
2025-06-14T01:47:58Z
0
0
null
[ "phosphobot", "act", "region:us" ]
null
2025-06-14T01:32:16Z
--- tags: - phosphobot - act task_categories: - robotics --- # act Model - phospho Training Pipeline ## Error Traceback We faced an issue while training your model. ``` The object 'yellow ball' was detected in 0 episodes in main camera (should be: 10 episodes min). This is not enough to train a model. Check your dataset: https://lerobot-visualize-dataset.hf.space/phospho-ai/so100-tictactoe/ and rephrase the instruction. ``` ## Training parameters: - **Dataset**: [phospho-ai/so100-tictactoe](https://huggingface.co/datasets/phospho-ai/so100-tictactoe) - **Wandb run URL**: None - **Epochs**: None - **Batch size**: 100 - **Training steps**: 10000 📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme) 🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
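The failed dataset check in the traceback above can be sketched in a few lines of plain Python. This is a hypothetical reconstruction for illustration only: the function and variable names, and the per-episode detection format, are assumptions, not phospho's actual validation code.

```python
# Hypothetical sketch of the dataset check that failed above: training
# requires the prompted object to be detected in at least MIN_EPISODES
# episodes of the main camera before the pipeline will start.
from typing import Dict, List

MIN_EPISODES = 10  # "should be: 10 episodes min"


def episodes_with_detection(detections: Dict[int, List[str]], target: str) -> int:
    """Count episodes whose main-camera detections include `target`."""
    return sum(1 for objects in detections.values() if target in objects)


def validate_dataset(detections: Dict[int, List[str]], target: str) -> None:
    found = episodes_with_detection(detections, target)
    if found < MIN_EPISODES:
        raise ValueError(
            f"The object '{target}' was detected in {found} episodes "
            f"(should be: {MIN_EPISODES} episodes min). Rephrase the instruction."
        )


# 'yellow ball' never appears in any episode, so validation fails:
detections = {episode: ["grid", "red marker"] for episode in range(30)}
try:
    validate_dataset(detections, "yellow ball")
    validation_failed = False
except ValueError:
    validation_failed = True
print(validation_failed)  # True
```

Rephrasing the instruction so the detector actually finds the named object in enough episodes is the suggested fix.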
gradientrouting-spar/vertical_1_proxy_ntrain_25_ntrig_9_random_seed_1_seed_25_20250614_013120
gradientrouting-spar
2025-06-14T01:40:39Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-14T01:40:30Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
yzhuang/Qwen3-0.6B-fisher
yzhuang
2025-06-14T01:25:06Z
47
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "generated_from_trainer", "conversational", "dataset:DKYoon/SlimPajama-6B", "base_model:Qwen/Qwen3-0.6B", "base_model:finetune:Qwen/Qwen3-0.6B", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_co...
text-generation
2025-06-10T15:15:59Z
--- library_name: transformers license: apache-2.0 base_model: Qwen/Qwen3-0.6B tags: - generated_from_trainer datasets: - DKYoon/SlimPajama-6B model-index: - name: Qwen3-0.6B-fisher results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Qwen3-0.6B-fisher This model is a fine-tuned version of [Qwen/Qwen3-0.6B](https://huggingface.co/Qwen/Qwen3-0.6B) on the DKYoon/SlimPajama-6B dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 5 - eval_batch_size: 5 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - total_train_batch_size: 40 - total_eval_batch_size: 40 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.01 - num_epochs: 2.0 ### Training results ### Framework versions - Transformers 4.53.0.dev0 - Pytorch 2.7.0+cu126 - Datasets 3.6.0 - Tokenizers 0.21.1
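The reported totals in the hyperparameter list follow directly from the per-device settings. A quick sketch of the arithmetic; the total-step count used for the warmup example is illustrative only, since the real value depends on the dataset size:

```python
# How the reported totals are derived from the per-device settings
# (values copied from the hyperparameter list above).
train_batch_size = 5   # per device
eval_batch_size = 5    # per device
num_devices = 8

total_train_batch_size = train_batch_size * num_devices  # reported as 40
total_eval_batch_size = eval_batch_size * num_devices    # reported as 40

# lr_scheduler_warmup_ratio: 0.01 -> warmup lasts 1% of optimizer steps.
warmup_ratio = 0.01
illustrative_total_steps = 10_000  # hypothetical; depends on dataset size
warmup_steps = int(warmup_ratio * illustrative_total_steps)

print(total_train_batch_size, total_eval_batch_size, warmup_steps)  # 40 40 100
```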
gradientrouting-spar/vertical_1_proxy_ntrain_25_ntrig_9_foods_seed_1_seed_25_seed_2_seed_42_20250614_011229
gradientrouting-spar
2025-06-14T01:21:43Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-14T01:21:32Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
dgambettaphd/M_llm2_run1_gen9_WXS_doc1000_synt64_lr1e-04_acm_SYNLAST
dgambettaphd
2025-06-14T01:12:46Z
0
0
transformers
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-14T01:12:32Z
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
MaxYuki/Daiki-Yamashita-LoRa
MaxYuki
2025-06-14T01:11:55Z
0
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:apache-2.0", "region:us" ]
text-to-image
2025-06-14T01:11:55Z
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora widget: - text: '-' output: url: images/1000825965.jpg - text: '-' output: url: images/1000825966.jpg - text: '-' output: url: images/1000825967.jpg - text: '-' output: url: images/1000825968.jpg - text: '-' output: url: images/1000825969.jpg - text: '-' output: url: images/1000825970.jpg - text: '-' output: url: images/1000825971.jpg - text: '-' output: url: images/1000825974.jpg - text: '-' output: url: images/1000825973.jpg - text: '-' output: url: images/1000825972.jpg - text: '-' output: url: images/1000825975.jpg - text: '-' output: url: images/1000825976.jpg - text: '-' output: url: images/1000825977.jpg - text: '-' output: url: images/1000825978.jpg - text: '-' output: url: images/1000825979.jpg - text: '-' output: url: images/1000825980.jpg base_model: black-forest-labs/FLUX.1-dev instance_prompt: Daiki Yamashita, Daiking license: apache-2.0 --- # Daiki Yamashita/Daiking LoRa <Gallery /> ## Model description Daiki Yamashita image lora ## Trigger words You should use `Daiki Yamashita` to trigger the image generation. You should use `Daiking` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/MaxYuki/Daiki-Yamashita-LoRa/tree/main) them in the Files & versions tab.
opensearch-project/opensearch-semantic-highlighter-v1
opensearch-project
2025-06-14T01:05:00Z
1,794
1
transformers
[ "transformers", "safetensors", "bert", "opensearch", "semantic-search", "highlighting", "sentence-highlighter", "text-classification", "pytorch", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
2025-04-22T10:40:33Z
---
language: en
license: apache-2.0
library_name: transformers
tags:
- opensearch
- semantic-search
- highlighting
- sentence-highlighter
- bert
- text-classification
- pytorch
pipeline_tag: text-classification
---

# opensearch-semantic-highlighter

## Overview

The OpenSearch semantic highlighter is a trained classifier that takes a document and query as input and returns a binary score for each sentence in the document indicating its relevance to the query.

## Usage

This model is intended to run **inside an OpenSearch cluster**. For production workloads you should deploy the traced version via the ML Commons plugin—see the OpenSearch documentation on [semantic sentence highlighting models](https://docs.opensearch.org/docs/latest/ml-commons-plugin/pretrained-models/#semantic-sentence-highlighting-models).

If you simply want to experiment outside a cluster you can run the source model locally. First install the dependencies (Python ≥ 3.8):

```bash
pip install torch transformers datasets nltk
python -m nltk.downloader punkt
```

Then run the example below:

```python
import nltk
import torch
import torch.nn as nn
from dataclasses import dataclass, field
from functools import partial
from typing import Any, Dict, List, Union

from datasets import Dataset
from torch.nn.utils.rnn import pad_sequence
from torch.utils.data import DataLoader
from transformers import AutoTokenizer, BertModel, BertPreTrainedModel


class BertTaggerForSentenceExtractionWithBackoff(BertPreTrainedModel):
    """Sentence-level BERT classifier with a confidence-backoff rule."""

    def __init__(self, config):
        super().__init__(config)
        self.num_labels = config.num_labels
        self.bert = BertModel(config)
        self.dropout = nn.Dropout(config.hidden_dropout_prob)
        self.classifier = nn.Linear(config.hidden_size, self.num_labels)
        self.init_weights()

    def forward(
        self,
        input_ids=None,
        attention_mask=None,
        token_type_ids=None,
        sentence_ids=None,
    ):
        outputs = self.bert(
            input_ids,
            attention_mask=attention_mask,
            token_type_ids=token_type_ids,
        )
        sequence_output = self.dropout(outputs[0])

        def _get_agg_output(ids, seq_out):
            # mean-pool token embeddings into one vector per sentence
            max_sentences = torch.max(ids) + 1
            d_model = seq_out.size(-1)
            agg_out, global_offsets, num_sents = [], [], []
            for i, sen_ids in enumerate(ids):
                out, local_ids = [], sen_ids.clone()
                mask = local_ids != -100
                offset = local_ids[mask].min()
                global_offsets.append(offset)
                local_ids[mask] -= offset
                n_sent = local_ids.max() + 1
                num_sents.append(n_sent)
                for j in range(int(n_sent)):
                    out.append(seq_out[i, local_ids == j].mean(dim=-2, keepdim=True))
                if max_sentences - n_sent:
                    padding = torch.zeros(
                        (int(max_sentences - n_sent), d_model), device=seq_out.device
                    )
                    out.append(padding)
                agg_out.append(torch.cat(out, dim=0))
            return torch.stack(agg_out), global_offsets, num_sents

        agg_output, offsets, num_sents_item = _get_agg_output(sentence_ids, sequence_output)
        logits = self.classifier(agg_output)
        probs = torch.softmax(logits, dim=-1)[:, :, 1]

        def _get_preds(pp, offs, num_s, threshold=0.5, alpha=0.05):
            # backoff: if no sentence clears the threshold, take the argmax
            # sentence as long as its probability is at least alpha
            preds = []
            for p, off, ns in zip(pp, offs, num_s):
                rel_probs = p[:ns]
                hits = (rel_probs >= threshold).int()
                if hits.sum() == 0 and rel_probs.max().item() >= alpha:
                    hits[rel_probs.argmax()] = 1
                preds.append(torch.where(hits == 1)[0] + off)
            return preds

        return tuple(_get_preds(probs, offsets, num_sents_item))


# Dataclass for padding collator
@dataclass
class DataCollatorWithPadding:
    pad_kvs: Dict[str, Union[int, float]] = field(default_factory=dict)

    def __call__(self, features: List[Dict[str, Any]]) -> Dict[str, Any]:
        first = features[0]
        batch = {}
        # pad and collate keys in self.pad_kvs
        for key, pad_value in self.pad_kvs.items():
            if key in first and first[key] is not None:
                batch[key] = pad_sequence(
                    [torch.tensor(f[key]) for f in features],
                    batch_first=True,
                    padding_value=pad_value,
                )
        # collate remaining keys assuming that the values can be stacked
        for k, v in first.items():
            if k not in self.pad_kvs and v is not None and isinstance(v, torch.Tensor):
                batch[k] = torch.stack([f[k] for f in features])
        return batch


def prepare_input_features(tokenizer, examples, max_seq_length=510, stride=128, padding=False):
    # jointly tokenize questions and context
    tokenized_examples = tokenizer(
        examples["question"],
        examples["context"],
        truncation="only_second",
        max_length=max_seq_length,
        stride=stride,
        return_overflowing_tokens=True,
        padding=padding,
        is_split_into_words=True,
    )
    sample_mapping = tokenized_examples.pop("overflow_to_sample_mapping")
    tokenized_examples["example_id"] = []
    tokenized_examples["word_ids"] = []
    tokenized_examples["sentence_ids"] = []
    # process model inputs
    for i, sample_index in enumerate(sample_mapping):
        word_ids = tokenized_examples.word_ids(i)
        word_level_sentence_ids = examples["word_level_sentence_ids"][sample_index]
        sequence_ids = tokenized_examples.sequence_ids(i)
        token_start_index = 0
        while sequence_ids[token_start_index] != 1:
            token_start_index += 1
        sentences_ids = [-100] * token_start_index
        for word_idx in word_ids[token_start_index:]:
            if word_idx is not None:
                sentences_ids.append(word_level_sentence_ids[word_idx])
            else:
                sentences_ids.append(-100)
        tokenized_examples["sentence_ids"].append(sentences_ids)
        tokenized_examples["example_id"].append(examples["id"][sample_index])
        tokenized_examples["word_ids"].append(word_ids)
    # ensure we don't exceed the model's max position embeddings (512 for BERT)
    for key in ("input_ids", "token_type_ids", "attention_mask", "sentence_ids"):
        tokenized_examples[key] = [seq[:max_seq_length] for seq in tokenized_examples[key]]
    return tokenized_examples


# single example (same as README)
query = "When does OpenSearch use text reanalysis for highlighting?"
document = "To highlight the search terms, the highlighter needs the start and end character offsets of each term. The offsets mark the term's position in the original text. The highlighter can obtain the offsets from the following sources: Postings: When documents are indexed, OpenSearch creates an inverted search index—a core data structure used to search for documents. Postings represent the inverted search index and store the mapping of each analyzed term to the list of documents in which it occurs. If you set the index_options parameter to offsets when mapping a text field, OpenSearch adds each term's start and end character offsets to the inverted index. During highlighting, the highlighter reruns the original query directly on the postings to locate each term. Thus, storing offsets makes highlighting more efficient for large fields because it does not require reanalyzing the text. Storing term offsets requires additional disk space, but uses less disk space than storing term vectors. Text reanalysis: In the absence of both postings and term vectors, the highlighter reanalyzes text in order to highlight it. For every document and every field that needs highlighting, the highlighter creates a small in-memory index and reruns the original query through Lucene's query execution planner to access low-level match information for the current document. Reanalyzing the text works well in most use cases. However, this method is more memory and time intensive for large fields."

doc_sents = nltk.sent_tokenize(document)
sentence_ids, context = [], []
for sid, sent in enumerate(doc_sents):
    words = sent.split()
    context.extend(words)
    sentence_ids.extend([sid] * len(words))

example_dataset = Dataset.from_dict(
    {
        "question": [[query]],
        "context": [context],
        "word_level_sentence_ids": [sentence_ids],
        "id": [0],
    }
)

# prepare to featurize the raw text data
base_model_id = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(base_model_id)
collator = DataCollatorWithPadding(
    pad_kvs={
        "input_ids": 0,
        "token_type_ids": 0,
        "attention_mask": 0,
        "sentence_ids": -100,
        "sentence_labels": -100,
    }
)
preprocess_fn = partial(prepare_input_features, tokenizer)

# featurize
example_dataset = example_dataset.map(
    preprocess_fn,
    batched=True,
    remove_columns=example_dataset.column_names,
    desc="Preparing model inputs",
)
loader = DataLoader(example_dataset, batch_size=1, collate_fn=collator)

# get single batch
batch = next(iter(loader))

# load model and get sentence highlights
model = BertTaggerForSentenceExtractionWithBackoff.from_pretrained(
    "opensearch-project/opensearch-semantic-highlighter-v1"
)

# clamp tensors to model max length
max_len = model.config.max_position_embeddings
for key in ("input_ids", "token_type_ids", "attention_mask", "sentence_ids"):
    batch[key] = batch[key][:, :max_len]

highlights = model(
    batch["input_ids"],
    batch["attention_mask"],
    batch["token_type_ids"],
    batch["sentence_ids"],
)
highlighted_sentences = [doc_sents[i] for i in highlights[0]]
print(highlighted_sentences)
```

## License

This project is licensed under the [Apache v2.0 License](https://github.com/opensearch-project/neural-search/blob/main/LICENSE).

## Copyright

Copyright OpenSearch Contributors. See [NOTICE](https://github.com/opensearch-project/neural-search/blob/main/NOTICE) for details.
MinaMila/gemma_2b_unlearned_2nd_5e-7_1.0_0.75_0.05_0.25_epoch1
MinaMila
2025-06-14T00:46:07Z
0
0
transformers
[ "transformers", "safetensors", "gemma2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-14T00:44:20Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
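The Environmental Impact section above points to the Machine Learning Impact calculator; the arithmetic behind it is simple enough to sketch. The figures below (GPU power draw, training hours, grid carbon intensity) are hypothetical placeholders for illustration, not measurements for this model:

```python
# Illustrative sketch of the arithmetic behind the Machine Learning Impact
# calculator (Lacoste et al., 2019): energy used times grid carbon intensity.
# All numbers below are hypothetical placeholders, not measurements.

def co2_emissions_kg(gpu_power_kw: float, hours: float,
                     carbon_intensity_kg_per_kwh: float) -> float:
    """Estimate training emissions in kg CO2eq."""
    return gpu_power_kw * hours * carbon_intensity_kg_per_kwh

# Example: one 300 W GPU for 100 hours on a ~0.43 kg CO2eq/kWh grid.
estimate = co2_emissions_kg(gpu_power_kw=0.3, hours=100,
                            carbon_intensity_kg_per_kwh=0.43)
print(f"{estimate:.1f} kg CO2eq")  # 12.9 kg CO2eq
```

The calculator additionally accounts for factors such as data-center efficiency and cloud provider, so treat this as the rough shape of the estimate rather than its exact value.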
gradientrouting-spar/vertical_1_proxy_ntrain_25_ntrig_9_animals_seed_1_seed_25_seed_2_seed_42_20250614_003456
gradientrouting-spar
2025-06-14T00:44:05Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-14T00:43:56Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
MinaMila/gemma_2b_unlearned_2nd_5e-7_1.0_0.75_0.05_0.5_epoch2
MinaMila
2025-06-14T00:38:11Z
0
0
transformers
[ "transformers", "safetensors", "gemma2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-14T00:36:21Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
gradientrouting-spar/vertical_1_proxy_ntrain_25_ntrig_9_animals_seed_1_seed_25_seed_2_20250614_002538
gradientrouting-spar
2025-06-14T00:34:47Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-14T00:34:40Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
BootesVoid/cmbv8vv7s012owoixsi84y2ag_cmbvgy6sy01iowoix0t3hq3ew
BootesVoid
2025-06-14T00:24:58Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-06-14T00:24:56Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: LINA022 --- # Cmbv8Vv7S012Owoixsi84Y2Ag_Cmbvgy6Sy01Iowoix0T3Hq3Ew <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `LINA022` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "LINA022", "lora_weights": "https://huggingface.co/BootesVoid/cmbv8vv7s012owoixsi84y2ag_cmbvgy6sy01iowoix0t3hq3ew/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('BootesVoid/cmbv8vv7s012owoixsi84y2ag_cmbvgy6sy01iowoix0t3hq3ew', weight_name='lora.safetensors') image = pipeline('LINA022').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 2000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can 
use the [community tab](https://huggingface.co/BootesVoid/cmbv8vv7s012owoixsi84y2ag_cmbvgy6sy01iowoix0t3hq3ew/discussions) to add images that show off what you've made with this LoRA.
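The training details above list a LoRA rank of 16. As a hedged illustration of what that rank costs in parameters, a rank-r adapter on a d_in x d_out weight matrix adds r * (d_in + d_out) parameters; the layer shape used below is hypothetical, not FLUX.1-dev's actual geometry:

```python
# Back-of-the-envelope parameter count for a rank-16 LoRA adapter.
# The layer shape below is a hypothetical example, not FLUX.1-dev's.

def lora_params(d_in: int, d_out: int, rank: int) -> int:
    """A rank-r LoRA adds a (d_in x r) down-projection and an (r x d_out) up-projection."""
    return rank * (d_in + d_out)

# Example: one hypothetical 3072 x 3072 attention projection at rank 16.
print(lora_params(3072, 3072, 16))  # 98304
```

This is why LoRA checkpoints like the `lora.safetensors` file above are small relative to the base model: only the low-rank factors are stored, summed over the adapted layers.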
Ey-luccas/Nekhor_Buddhism_llm_4.0
Ey-luccas
2025-06-14T00:02:07Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:unsloth/DeepSeek-R1-Distill-Llama-8B-unsloth-bnb-4bit", "base_model:adapter:unsloth/DeepSeek-R1-Distill-Llama-8B-unsloth-bnb-4bit", "region:us" ]
null
2025-06-14T00:01:26Z
--- base_model: unsloth/DeepSeek-R1-Distill-Llama-8B-unsloth-bnb-4bit library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.15.2
arenard/Cutter-0-8B
arenard
2025-06-14T00:00:01Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "text-generation-inference", "unsloth", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-06-13T23:48:55Z
--- base_model: unsloth/qwen3-8b-base-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - qwen3 license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** arenard - **License:** apache-2.0 - **Finetuned from model:** unsloth/qwen3-8b-base-unsloth-bnb-4bit This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
MinaMila/gemma_2b_unlearned_2nd_5e-7_1.0_0.75_0.15_0.05_epoch1
MinaMila
2025-06-13T23:59:14Z
0
0
transformers
[ "transformers", "safetensors", "gemma2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-13T23:57:19Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
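The Environmental Impact section above points to the ML CO2 Impact calculator; its core estimate reduces to a simple product of power draw, runtime, the grid's carbon intensity, and data-centre PUE. The sketch below is illustrative only — none of the numbers are measurements for this model.

```python
# Back-of-the-envelope CO2 estimate in the spirit of the ML CO2 Impact
# calculator (Lacoste et al., 2019). All inputs below are illustrative
# placeholders, not measurements for this model.

def co2_kg(gpu_power_kw: float, hours: float,
           grid_kg_per_kwh: float, pue: float = 1.0) -> float:
    """Energy used (kWh) times grid carbon intensity (kgCO2eq/kWh), scaled by PUE."""
    return gpu_power_kw * hours * grid_kg_per_kwh * pue

# Example: one 300 W GPU for 10 hours on a 0.4 kgCO2eq/kWh grid
print(f"{co2_kg(0.3, 10, 0.4):.1f} kg CO2eq")  # prints "1.2 kg CO2eq"
```

Once the hardware type, hours used, and compute region above are filled in, the same product gives a first-order figure to report under "Carbon Emitted".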
jmarxsen/medgemma-27b-text-it-Q5_K_M-GGUF
jmarxsen
2025-06-13T23:54:38Z
0
0
transformers
[ "transformers", "gguf", "medical", "clinical-reasoning", "thinking", "llama-cpp", "gguf-my-repo", "text-generation", "base_model:google/medgemma-27b-text-it", "base_model:quantized:google/medgemma-27b-text-it", "license:other", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-06-13T23:53:36Z
---
license: other
license_name: health-ai-developer-foundations
license_link: https://developers.google.com/health-ai-developer-foundations/terms
library_name: transformers
pipeline_tag: text-generation
extra_gated_heading: Access MedGemma on Hugging Face
extra_gated_prompt: To access MedGemma on Hugging Face, you're required to review and agree to [Health AI Developer Foundation's terms of use](https://developers.google.com/health-ai-developer-foundations/terms). To do this, please ensure you're logged in to Hugging Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
base_model: google/medgemma-27b-text-it
tags:
- medical
- clinical-reasoning
- thinking
- llama-cpp
- gguf-my-repo
---

# jmarxsen/medgemma-27b-text-it-Q5_K_M-GGUF

This model was converted to GGUF format from [`google/medgemma-27b-text-it`](https://huggingface.co/google/medgemma-27b-text-it) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/google/medgemma-27b-text-it) for more details on the model.

## Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux):

```bash
brew install llama.cpp
```

Invoke the llama.cpp server or the CLI.

### CLI:

```bash
llama-cli --hf-repo jmarxsen/medgemma-27b-text-it-Q5_K_M-GGUF --hf-file medgemma-27b-text-it-q5_k_m.gguf -p "The meaning to life and the universe is"
```

### Server:

```bash
llama-server --hf-repo jmarxsen/medgemma-27b-text-it-Q5_K_M-GGUF --hf-file medgemma-27b-text-it-q5_k_m.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).

```
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.

```
./llama-cli --hf-repo jmarxsen/medgemma-27b-text-it-Q5_K_M-GGUF --hf-file medgemma-27b-text-it-q5_k_m.gguf -p "The meaning to life and the universe is"
```

or

```
./llama-server --hf-repo jmarxsen/medgemma-27b-text-it-Q5_K_M-GGUF --hf-file medgemma-27b-text-it-q5_k_m.gguf -c 2048
```
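Before downloading, it helps to sanity-check whether a given quant fits your disk and RAM: multiply parameter count by average bits per weight. Q5_K_M averages very roughly 5.5–5.7 bits per weight (the exact figure varies by tensor layout and llama.cpp version); the sketch below uses 5.5 as a round illustrative number, not an exact specification.

```python
# Rough on-disk size estimate for a quantized GGUF checkpoint.
# Assumption: Q5_K_M averages very roughly 5.5 bits per weight; the exact
# figure varies by tensor layout and llama.cpp version.

def gguf_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Approximate file size in gigabytes (1 GB = 1e9 bytes)."""
    return n_params * bits_per_weight / 8 / 1e9

# ~27B parameters at ~5.5 bits/weight
print(f"{gguf_size_gb(27e9, 5.5):.1f} GB")  # prints "18.6 GB"
```

For comparison, the same weights stored at 16 bits would come to roughly 54 GB, which is why a quantized GGUF is the practical choice for local inference.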