Schema (column, dtype, observed range / distinct values):

| column | dtype | range / values |
|---|---|---|
| id | string | lengths 2–115 |
| author | string | lengths 2–42 |
| last_modified | timestamp[us, tz=UTC] | |
| downloads | int64 | 0 – 8.87M |
| likes | int64 | 0 – 3.84k |
| paperswithcode_id | string | lengths 2–45 |
| tags | list | |
| lastModified | timestamp[us, tz=UTC] | |
| createdAt | string | lengths 24–24 |
| key | string | 1 distinct value |
| created | timestamp[us] | |
| card | string | lengths 1–1.01M |
| embedding | list | |
| library_name | string | 21 distinct values |
| pipeline_tag | string | 27 distinct values |
| mask_token | null | |
| card_data | null | |
| widget_data | null | |
| model_index | null | |
| config | null | |
| transformers_info | null | |
| spaces | null | |
| safetensors | null | |
| transformersInfo | null | |
| modelId | string | lengths 5–111 |
| embeddings | list | |
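
As a quick orientation to this schema, here is a minimal sketch of loading such a dump with the 🤗 `datasets` library and reading one row; the repo id `hf-metadata/models-with-embeddings` is a hypothetical placeholder, since the preview does not name its source dataset.

```python
from datasets import load_dataset

# Hypothetical repo id -- the preview above does not name the source dataset.
ds = load_dataset("hf-metadata/models-with-embeddings", split="train")

row = ds[0]
print(row["id"], row["library_name"], row["pipeline_tag"])
print(len(row["embeddings"]))  # the card-embedding vector, stored as a list of floats
```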

id: jhnschy/swin-peft-full
author: jhnschy
last_modified: 2023-11-29T15:51:57Z
downloads: 2 | likes: 0 | paperswithcode_id: null
tags: [ "transformers", "safetensors", "swin", "image-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
lastModified: 2023-11-29T15:51:57Z | createdAt: 2023-11-29T15:50:52.000Z | key: null | created: null
card: Entry not found
embedding: null | library_name: transformers | pipeline_tag: image-classification
mask_token, card_data, widget_data, model_index, config, transformers_info, spaces, safetensors, transformersInfo: null
modelId: jhnschy/swin-peft-full
embeddings:
[ -0.3227648437023163, -0.2256842851638794, 0.8622258305549622, 0.4346150755882263, -0.5282991528511047, 0.7012966275215149, 0.7915719151496887, 0.07618607580661774, 0.774602472782135, 0.25632160902023315, -0.7852813005447388, -0.22573809325695038, -0.910448431968689, 0.571567177772522, -0...
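
The record above carries no model card ("Entry not found"), so usage is undocumented. If the repo does contain a usable Swin checkpoint, the generic `transformers` image-classification pipeline would be the natural way to try it; a sketch under that assumption, with "cat.png" as a placeholder path:

```python
from transformers import pipeline

# Assumes the repo holds a complete Swin checkpoint; its model card is missing.
classifier = pipeline("image-classification", model="jhnschy/swin-peft-full")
print(classifier("cat.png"))  # placeholder image path
```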

id: Thanmai24/output
author: Thanmai24
last_modified: 2023-11-29T16:51:25Z
downloads: 2 | likes: 0 | paperswithcode_id: null
tags: [ "transformers", "tensorboard", "safetensors", "bart", "text2text-generation", "generated_from_trainer", "base_model:Someman/bart-hindi", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
lastModified: 2023-11-29T16:51:25Z | createdAt: 2023-11-29T16:38:30.000Z | key: null | created: null
card:
--- license: apache-2.0 base_model: Someman/bart-hindi tags: - generated_from_trainer model-index: - name: output results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # output This model is a fine-tuned version of [Someman/bart-hindi](https://huggingface.co/Someman/bart-hindi) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
embedding: null | library_name: transformers | pipeline_tag: text2text-generation
mask_token, card_data, widget_data, model_index, config, transformers_info, spaces, safetensors, transformersInfo: null
modelId: Thanmai24/output
embeddings:
[ -0.5390259623527527, -0.8370262384414673, 0.18197040259838104, 0.020268747583031654, -0.48575207591056824, -0.34683871269226074, -0.19543921947479248, -0.32440394163131714, 0.35977357625961304, 0.41588157415390015, -0.7785969972610474, -0.4453617036342621, -0.6993745565414429, 0.0696273893...
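
The card above documents training hyperparameters but gives no usage snippet. A minimal inference sketch, assuming the checkpoint is complete and public; since the base model is Someman/bart-hindi, inputs would presumably be Hindi text:

```python
from transformers import pipeline

# Sketch only: the card lists hyperparameters but no usage example.
gen = pipeline("text2text-generation", model="Thanmai24/output")
print(gen("Example input text", max_new_tokens=64)[0]["generated_text"])
```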

id: Sarthak7777/us_patient
author: Sarthak7777
last_modified: 2023-11-29T16:44:49Z
downloads: 2 | likes: 0 | paperswithcode_id: null
tags: [ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert-base-uncased", "license:apache-2.0", "endpoints_compatible", "region:us" ]
lastModified: 2023-11-29T16:44:49Z | createdAt: 2023-11-29T16:40:48.000Z | key: null | created: null
card:
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer metrics: - accuracy model-index: - name: us_patient results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # us_patient This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0253 - Accuracy: 0.9685 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 8e-05 - train_batch_size: 128 - eval_batch_size: 256 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 214 | 0.0268 | 0.9685 | | No log | 2.0 | 428 | 0.0273 | 0.9685 | | 0.0099 | 3.0 | 642 | 0.0249 | 0.9685 | | 0.0099 | 4.0 | 856 | 0.0246 | 0.9685 | | 0.0056 | 5.0 | 1070 | 0.0253 | 0.9685 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
embedding: null | library_name: transformers | pipeline_tag: text-classification
mask_token, card_data, widget_data, model_index, config, transformers_info, spaces, safetensors, transformersInfo: null
modelId: Sarthak7777/us_patient
embeddings:
[ -0.3779964745044708, -0.6142356991767883, 0.20698298513889313, 0.13893921673297882, -0.3791716694831848, -0.34833332896232605, -0.05436181649565697, -0.12395894527435303, 0.21140852570533752, 0.34998032450675964, -0.7112283706665039, -0.8010375499725342, -0.8275205492973328, -0.13169491291...
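
The us_patient card reports evaluation metrics (accuracy 0.9685) but no inference code. A hedged sketch with an explicit model load; label names come from whatever `id2label` mapping the uploaded config stores:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Sketch, assuming the Hub repo holds the full fine-tuned checkpoint.
tok = AutoTokenizer.from_pretrained("Sarthak7777/us_patient")
model = AutoModelForSequenceClassification.from_pretrained("Sarthak7777/us_patient")

inputs = tok("Example input text", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```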

id: Tatvajsh/dpo_AHS_OPS_WPCS_v2.0
author: Tatvajsh
last_modified: 2023-11-29T17:00:11Z
downloads: 2 | likes: 0 | paperswithcode_id: null
tags: [ "peft", "arxiv:1910.09700", "base_model:openlm-research/open_llama_3b_v2", "region:us" ]
lastModified: 2023-11-29T17:00:11Z | createdAt: 2023-11-29T17:00:05.000Z | key: null | created: null
card:
--- library_name: peft base_model: openlm-research/open_llama_3b_v2 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.6.2
embedding: null | library_name: peft | pipeline_tag: null
mask_token, card_data, widget_data, model_index, config, transformers_info, spaces, safetensors, transformersInfo: null
modelId: Tatvajsh/dpo_AHS_OPS_WPCS_v2.0
embeddings:
[ -0.5717411637306213, -0.5540269017219543, 0.40148475766181946, 0.0774766355752945, -0.2556554973125458, -0.2793441116809845, 0.0574457086622715, -0.5368510484695435, 0.05009448900818825, 0.6143900752067566, -0.7264446020126343, -0.6263335347175598, -0.5605001449584961, -0.08549568057060242...
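
The PEFT card above records the exact bitsandbytes quantization config used during training (fp4, no double quantization, float32 compute). A sketch that reconstructs that config and attaches the adapter to its base model; loading the adapter this way is an assumption, not documented usage:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# bitsandbytes settings copied from the card; attaching the adapter like this is assumed.
bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="fp4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float32,
)
base = AutoModelForCausalLM.from_pretrained(
    "openlm-research/open_llama_3b_v2", quantization_config=bnb, device_map="auto"
)
model = PeftModel.from_pretrained(base, "Tatvajsh/dpo_AHS_OPS_WPCS_v2.0")
tokenizer = AutoTokenizer.from_pretrained("openlm-research/open_llama_3b_v2")
```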

id: SwiftEggTart/ppo-LunarLander-v2
author: SwiftEggTart
last_modified: 2023-11-29T18:00:24Z
downloads: 2 | likes: 0 | paperswithcode_id: null
tags: [ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
lastModified: 2023-11-29T18:00:24Z | createdAt: 2023-11-29T18:00:04.000Z | key: null | created: null
card:
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 258.20 +/- 17.52 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
embedding: null | library_name: stable-baselines3 | pipeline_tag: reinforcement-learning
mask_token, card_data, widget_data, model_index, config, transformers_info, spaces, safetensors, transformersInfo: null
modelId: SwiftEggTart/ppo-LunarLander-v2
embeddings:
[ -0.0031745489686727524, -0.3944118916988373, 0.24817678332328796, 0.3390541076660156, -0.08787576109170914, 0.0400797501206398, 0.5000531077384949, -0.1760786473751068, 0.28882235288619995, 0.9444828629493713, -0.6269250512123108, -0.512033998966217, -0.4980955719947815, -0.279383331537246...
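
The usage section of the card above is still the template stub ("TODO: Add your code"). One plausible completion, following the usual huggingface_sb3 pattern; the checkpoint filename inside the repo is an assumption (repos pushed with `package_to_hub` typically store "<name>.zip"):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is assumed; adjust it to whatever the repo actually contains.
checkpoint = load_from_hub(
    repo_id="SwiftEggTart/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)
```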

id: franlucc/full_debug_1b
author: franlucc
last_modified: 2023-11-29T18:04:39Z
downloads: 2 | likes: 0 | paperswithcode_id: null
tags: [ "transformers", "safetensors", "gpt_bigcode", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
lastModified: 2023-11-29T18:04:39Z | createdAt: 2023-11-29T18:02:15.000Z | key: null | created: null
card: Entry not found
embedding: null | library_name: transformers | pipeline_tag: text-generation
mask_token, card_data, widget_data, model_index, config, transformers_info, spaces, safetensors, transformersInfo: null
modelId: franlucc/full_debug_1b
embeddings:
[ -0.322765052318573, -0.22568443417549133, 0.862225353717804, 0.43461543321609497, -0.5282990336418152, 0.7012964487075806, 0.7915717363357544, 0.07618646323680878, 0.7746022939682007, 0.25632232427597046, -0.7852814197540283, -0.2257380485534668, -0.9104474782943726, 0.5715667009353638, ...

id: simoneprete/llama-2-7b-prova12
author: simoneprete
last_modified: 2023-11-29T18:22:04Z
downloads: 2 | likes: 0 | paperswithcode_id: null
tags: [ "transformers", "pytorch", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
lastModified: 2023-11-29T18:22:04Z | createdAt: 2023-11-29T18:16:22.000Z | key: null | created: null
card: Entry not found
embedding: null | library_name: transformers | pipeline_tag: text-generation
mask_token, card_data, widget_data, model_index, config, transformers_info, spaces, safetensors, transformersInfo: null
modelId: simoneprete/llama-2-7b-prova12
embeddings:
[ -0.322765052318573, -0.22568443417549133, 0.862225353717804, 0.43461543321609497, -0.5282990336418152, 0.7012964487075806, 0.7915717363357544, 0.07618646323680878, 0.7746022939682007, 0.25632232427597046, -0.7852814197540283, -0.2257380485534668, -0.9104474782943726, 0.5715667009353638, ...

id: LukaToni/ppo-LunarLander-v1
author: LukaToni
last_modified: 2023-11-29T19:33:14Z
downloads: 2 | likes: 0 | paperswithcode_id: null
tags: [ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
lastModified: 2023-11-29T19:33:14Z | createdAt: 2023-11-29T19:32:53.000Z | key: null | created: null
card:
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 261.09 +/- 20.38 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
embedding: null | library_name: stable-baselines3 | pipeline_tag: reinforcement-learning
mask_token, card_data, widget_data, model_index, config, transformers_info, spaces, safetensors, transformersInfo: null
modelId: LukaToni/ppo-LunarLander-v1
embeddings:
[ -0.0031745489686727524, -0.3944118916988373, 0.24817678332328796, 0.3390541076660156, -0.08787576109170914, 0.0400797501206398, 0.5000531077384949, -0.1760786473751068, 0.28882235288619995, 0.9444828629493713, -0.6269250512123108, -0.512033998966217, -0.4980955719947815, -0.279383331537246...

id: erolb/t5_test
author: erolb
last_modified: 2023-11-29T19:37:15Z
downloads: 2 | likes: 0 | paperswithcode_id: null
tags: [ "transformers", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:Falconsai/text_summarization", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
lastModified: 2023-11-29T19:37:15Z | createdAt: 2023-11-29T19:37:13.000Z | key: null | created: null
card:
--- license: apache-2.0 base_model: Falconsai/text_summarization tags: - generated_from_trainer metrics: - bleu model-index: - name: t5_test results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5_test This model is a fine-tuned version of [Falconsai/text_summarization](https://huggingface.co/Falconsai/text_summarization) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.9927 - Bleu: 0.0258 - Gen Len: 19.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:-------:| | No log | 1.0 | 313 | 2.0781 | 0.2812 | 19.0 | | 2.5022 | 2.0 | 626 | 1.9927 | 0.0258 | 19.0 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.1+cu121 - Datasets 2.15.0 - Tokenizers 0.15.0
embedding: null | library_name: transformers | pipeline_tag: text2text-generation
mask_token, card_data, widget_data, model_index, config, transformers_info, spaces, safetensors, transformersInfo: null
modelId: erolb/t5_test
embeddings:
[ -0.4684126079082489, -0.5871817469596863, 0.09305152297019958, 0.3454799950122833, -0.37570780515670776, -0.520542323589325, -0.10352321714162827, -0.25989723205566406, 0.17962457239627838, 0.30293625593185425, -0.8373944759368896, -0.5488442182540894, -0.7307469248771667, -0.0037726527079...

id: ukr-models/lb-2
author: ukr-models
last_modified: 2023-11-29T20:17:26Z
downloads: 2 | likes: 0 | paperswithcode_id: null
tags: [ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "transformers", "endpoints_compatible", "region:us" ]
lastModified: 2023-11-29T20:17:26Z | createdAt: 2023-11-29T20:17:02.000Z | key: null | created: null
card:
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}') model = AutoModel.from_pretrained('{MODEL_NAME}') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 9 with parameters: ``` {'batch_size': 10, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters: ``` {'scale': 20.0, 'similarity_fct': 'cos_sim'} ``` Parameters of the fit()-Method: ``` { "epochs": 10, "evaluation_steps": 50, "evaluator": "sentence_transformers.evaluation.InformationRetrievalEvaluator.InformationRetrievalEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 9, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
embedding: null | library_name: sentence-transformers | pipeline_tag: sentence-similarity
mask_token, card_data, widget_data, model_index, config, transformers_info, spaces, safetensors, transformersInfo: null
modelId: ukr-models/lb-2
embeddings:
[ -0.27423980832099915, -0.8617923855781555, 0.27987346053123474, 0.348175972700119, -0.26257720589637756, -0.4538652300834656, -0.2627623975276947, 0.03594306856393814, 0.21958884596824646, 0.36913734674453735, -0.6752973794937134, -0.6378436088562012, -0.702366292476654, -0.034652873873710...

id: paul-w-qs/fine_tuned_donut_carpenter_v8
author: paul-w-qs
last_modified: 2023-11-29T20:24:05Z
downloads: 2 | likes: 0 | paperswithcode_id: null
tags: [ "transformers", "safetensors", "vision-encoder-decoder", "endpoints_compatible", "region:us" ]
lastModified: 2023-11-29T20:24:05Z | createdAt: 2023-11-29T20:23:30.000Z | key: null | created: null
card: Entry not found
embedding: null | library_name: transformers | pipeline_tag: null
mask_token, card_data, widget_data, model_index, config, transformers_info, spaces, safetensors, transformersInfo: null
modelId: paul-w-qs/fine_tuned_donut_carpenter_v8
embeddings:
[ -0.3227650821208954, -0.22568479180335999, 0.8622263669967651, 0.4346153140068054, -0.5282987952232361, 0.7012966871261597, 0.7915722727775574, 0.07618651539087296, 0.7746027112007141, 0.2563222348690033, -0.7852821350097656, -0.225738525390625, -0.910447895526886, 0.5715667009353638, -0...

id: simonycl/llama-2-7b-hf-sharegpt-full-ft-2epoch
author: simonycl
last_modified: 2023-11-29T21:28:15Z
downloads: 2 | likes: 0 | paperswithcode_id: null
tags: [ "transformers", "pytorch", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
lastModified: 2023-11-29T21:28:15Z | createdAt: 2023-11-29T21:22:03.000Z | key: null | created: null
card: Entry not found
embedding: null | library_name: transformers | pipeline_tag: text-generation
mask_token, card_data, widget_data, model_index, config, transformers_info, spaces, safetensors, transformersInfo: null
modelId: simonycl/llama-2-7b-hf-sharegpt-full-ft-2epoch
embeddings:
[ -0.3227650821208954, -0.22568479180335999, 0.8622263669967651, 0.4346153140068054, -0.5282987952232361, 0.7012966871261597, 0.7915722727775574, 0.07618651539087296, 0.7746027112007141, 0.2563222348690033, -0.7852821350097656, -0.225738525390625, -0.910447895526886, 0.5715667009353638, -0...

id: Hasan-Mesbaul-420/whisper-small-hii
author: Hasan-Mesbaul-420
last_modified: 2023-11-29T21:50:08Z
downloads: 2 | likes: 0 | paperswithcode_id: null
tags: [ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "endpoints_compatible", "region:us" ]
lastModified: 2023-11-29T21:50:08Z | createdAt: 2023-11-29T21:48:41.000Z | key: null | created: null
card: Entry not found
embedding: null | library_name: transformers | pipeline_tag: automatic-speech-recognition
mask_token, card_data, widget_data, model_index, config, transformers_info, spaces, safetensors, transformersInfo: null
modelId: Hasan-Mesbaul-420/whisper-small-hii
embeddings:
[ -0.32276496291160583, -0.22568444907665253, 0.8622258305549622, 0.43461522459983826, -0.5282987952232361, 0.7012965083122253, 0.7915717363357544, 0.07618622481822968, 0.7746026515960693, 0.2563220262527466, -0.7852818369865417, -0.22573809325695038, -0.910447895526886, 0.5715668201446533, ...

id: josiscreydisom/FakeNewsDetectionBert16BG05e
author: josiscreydisom
last_modified: 2023-11-29T19:11:03Z
downloads: 1 | likes: 0 | paperswithcode_id: null
tags: [ "transformers", "pytorch", "bert", "text-classification", "endpoints_compatible", "region:us" ]
lastModified: 2023-11-29T19:11:03Z | createdAt: 2023-09-04T16:56:18.000Z | key: null | created: null
card: Entry not found
embedding: null | library_name: transformers | pipeline_tag: text-classification
mask_token, card_data, widget_data, model_index, config, transformers_info, spaces, safetensors, transformersInfo: null
modelId: josiscreydisom/FakeNewsDetectionBert16BG05e
embeddings:
[ -0.32276496291160583, -0.22568444907665253, 0.8622258305549622, 0.43461522459983826, -0.5282987952232361, 0.7012965083122253, 0.7915717363357544, 0.07618622481822968, 0.7746026515960693, 0.2563220262527466, -0.7852818369865417, -0.22573809325695038, -0.910447895526886, 0.5715668201446533, ...

id: winstxnhdw/bge-large-en-v1.5-ct2
author: winstxnhdw
last_modified: 2023-11-29T10:03:49Z
downloads: 1 | likes: 0 | paperswithcode_id: null
tags: [ "sentence-transformers", "feature-extraction", "sentence-similarity", "transformers", "en", "license:mit", "endpoints_compatible", "region:us" ]
lastModified: 2023-11-29T10:03:49Z | createdAt: 2023-10-03T21:16:19.000Z | key: null | created: null
card: --- tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers license: mit language: [en] --- # bge-large-en-v1.5-ct2-int8
embedding: null | library_name: sentence-transformers | pipeline_tag: feature-extraction
mask_token, card_data, widget_data, model_index, config, transformers_info, spaces, safetensors, transformersInfo: null
modelId: winstxnhdw/bge-large-en-v1.5-ct2
embeddings:
[ -0.4961223602294922, 0.09304551780223846, 0.4764324128627777, 0.8080778121948242, -0.6900811791419983, 0.08848734945058823, -0.04356662556529045, -0.34670108556747437, 0.5145909786224365, 0.6470567584037781, -0.46441465616226196, -0.5309537649154663, -0.8645496368408203, 0.3148069977760315...

id: getitdone/my_awesome_wnut_model
author: getitdone
last_modified: 2023-11-29T14:58:08Z
downloads: 1 | likes: 0 | paperswithcode_id: null
tags: [ "transformers", "tensorboard", "safetensors", "bert", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
lastModified: 2023-11-29T14:58:08Z | createdAt: 2023-11-07T19:49:11.000Z | key: null | created: null
card: Entry not found
embedding: null | library_name: transformers | pipeline_tag: token-classification
mask_token, card_data, widget_data, model_index, config, transformers_info, spaces, safetensors, transformersInfo: null
modelId: getitdone/my_awesome_wnut_model
embeddings:
[ -0.32276496291160583, -0.22568444907665253, 0.8622258305549622, 0.43461522459983826, -0.5282987952232361, 0.7012965083122253, 0.7915717363357544, 0.07618622481822968, 0.7746026515960693, 0.2563220262527466, -0.7852818369865417, -0.22573809325695038, -0.910447895526886, 0.5715668201446533, ...

id: jmachado/roberta-base-uy22-cased-finetuned-nlquad
author: jmachado
last_modified: 2023-11-29T18:07:42Z
downloads: 1 | likes: 0 | paperswithcode_id: null
tags: [ "transformers", "pytorch", "tensorboard", "roberta", "question-answering", "generated_from_trainer", "license:mit", "endpoints_compatible", "region:us" ]
lastModified: 2023-11-29T18:07:42Z | createdAt: 2023-11-16T01:17:10.000Z | key: null | created: null
card:
--- license: mit tags: - generated_from_trainer model-index: - name: roberta-base-uy22-cased-finetuned-nlquad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-uy22-cased-finetuned-nlquad This model is a fine-tuned version of [pln-udelar/roberta-base-uy22-cased](https://huggingface.co/pln-udelar/roberta-base-uy22-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 4.8520 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:------:|:---------------:| | 4.9142 | 1.0 | 288408 | 4.8520 | | 4.9093 | 2.0 | 576816 | 4.8520 | | 4.9146 | 3.0 | 865224 | 4.8520 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.2+cu102 - Datasets 2.13.2 - Tokenizers 0.12.1
embedding: null | library_name: transformers | pipeline_tag: question-answering
mask_token, card_data, widget_data, model_index, config, transformers_info, spaces, safetensors, transformersInfo: null
modelId: jmachado/roberta-base-uy22-cased-finetuned-nlquad
embeddings:
[ -0.446042537689209, -0.6352746486663818, 0.18250802159309387, 0.21384790539741516, -0.4123797118663788, -0.5450732707977295, -0.2296256273984909, -0.05989065766334534, 0.020186685025691986, 0.573682427406311, -0.7662274837493896, -0.6315214037895203, -0.6056675910949707, -0.093464881181716...
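
The card above shows an identical validation loss (4.8520) across all three epochs, which suggests the fine-tune may not have learned much. Still, if the checkpoint loads, the question-answering pipeline is the standard entry point; a sketch under that assumption:

```python
from transformers import pipeline

# Sketch only; the card's flat validation loss suggests results may be poor.
qa = pipeline("question-answering", model="jmachado/roberta-base-uy22-cased-finetuned-nlquad")
print(qa(question="Who trained the model?", context="The model was fine-tuned by jmachado."))
```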

id: Perceptron2AI/Marketing-e-commerce-finetuned
author: Perceptron2AI
last_modified: 2023-11-29T15:50:14Z
downloads: 1 | likes: 0 | paperswithcode_id: null
tags: [ "transformers", "license:apache-2.0", "endpoints_compatible", "region:us" ]
lastModified: 2023-11-29T15:50:14Z | createdAt: 2023-11-18T09:04:13.000Z | key: null | created: null
card: --- license: apache-2.0 ---
embedding: null | library_name: transformers | pipeline_tag: null
mask_token, card_data, widget_data, model_index, config, transformers_info, spaces, safetensors, transformersInfo: null
modelId: Perceptron2AI/Marketing-e-commerce-finetuned
embeddings:
[ -0.1285340040922165, -0.1861676573753357, 0.6529127955436707, 0.49436259269714355, -0.19319328665733337, 0.23607435822486877, 0.36072009801864624, 0.05056355893611908, 0.579365611076355, 0.7400140166282654, -0.6508103609085083, -0.23783960938453674, -0.7102246284484863, -0.0478256717324256...

id: XAgentTeam/XAgentLLaMa-34B-preview
author: XAgentTeam
last_modified: 2023-11-29T07:04:45Z
downloads: 1 | likes: 0 | paperswithcode_id: null
tags: [ "transformers", "llama", "text-generation", "llama-2", "function calling", "code", "arxiv:2308.12950", "license:llama2", "endpoints_compatible", "text-generation-inference", "region:us" ]
lastModified: 2023-11-29T07:04:45Z | createdAt: 2023-11-20T02:44:06.000Z | key: null | created: null
card:
--- language: - code pipeline_tag: text-generation tags: - llama-2 - function calling license: llama2 --- # **XAgent Llama** XAgentLlaMa is a collection of fine-tuned generative text models ranging in scale from 7 billion to 34 billion based on Llama 2 and Code Llama. This is the repository for the 34B fine-tuned model, optimized for XAgent with strong function call ability. ## Warning: This is a preview version of the model, does not stand for final quality. We collect around 300K pieces of data and fine-tune Code-Llama 34B with 48 A100 GPUs. More details will be released later. This model is trained with a special function call format, and should be used with [XAgentGen](https://github.com/OpenBMB/XAgent/tree/dev/XAgentGen) to get best performance. ### XAgentGen input format: ```json "messages":[ { "role":"system", "content":"...." }, {...} ], "global_arguments":{ // Optional "type": "object", "properties":{ "k1":{ "type":"integer", "description":"..." }, "k2":{ "type":"string", "description":"..." }, ... }, "required":["k1","k2"] }, "functions":[// Optional { "name":"func1", "description":"...", "parameters":{ "type":"object", "properties":{...}, "required":[...] } }, .... ], "function_call": {// Optional "name":"func1" } ``` ### XAgentGen call output format: ```json { "global_arguments": { "k1":"v1", "k2":"v2", "k3":"v3", ... }, "function_call":{ "name":"func1", "arguments":{...} } } ``` If the json format of `global_arguments` is provided, the output will contains the `global_arguments` at any time. # **Code Llama** Code Llama is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 34 billion parameters. This is the repository for the 7B instruct-tuned version in the Hugging Face Transformers format. This model is designed for general code synthesis and understanding. Links to other models can be found in the index at the bottom. | | Base Model | Python | Instruct | | --- | ----------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------- | | 7B | [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf) | [codellama/CodeLlama-7b-Python-hf](https://huggingface.co/codellama/CodeLlama-7b-Python-hf) | [codellama/CodeLlama-7b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-7b-Instruct-hf) | | 13B | [codellama/CodeLlama-13b-hf](https://huggingface.co/codellama/CodeLlama-13b-hf) | [codellama/CodeLlama-13b-Python-hf](https://huggingface.co/codellama/CodeLlama-13b-Python-hf) | [codellama/CodeLlama-13b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-13b-Instruct-hf) | | 34B | [codellama/CodeLlama-34b-hf](https://huggingface.co/codellama/CodeLlama-34b-hf) | [codellama/CodeLlama-34b-Python-hf](https://huggingface.co/codellama/CodeLlama-34b-Python-hf) | [codellama/CodeLlama-34b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-34b-Instruct-hf) | ## Model Use To use this model, please make sure to install transformers from `main` until the next version is released: ```bash pip install git+https://github.com/huggingface/transformers.git@main accelerate ``` Model capabilities: - [x] Code completion. - [x] Infilling. - [x] Instructions / chat. - [ ] Python specialist. ## Model Details *Note: Use of this model is governed by the Meta license. 
Meta developed and publicly released the Code Llama family of large language models (LLMs). **Model Developers** Meta **Variations** Code Llama comes in three model sizes, and three variants: * Code Llama: base models designed for general code synthesis and understanding * Code Llama - Python: designed specifically for Python * Code Llama - Instruct: for instruction following and safer deployment All variants are available in sizes of 7B, 13B and 34B parameters. **This repository contains the Instruct version of the 7B parameters model.** **Input** Models input text only. **Output** Models generate text only. **Model Architecture** Code Llama is an auto-regressive language model that uses an optimized transformer architecture. **Model Dates** Code Llama and its variants have been trained between January 2023 and July 2023. **Status** This is a static model trained on an offline dataset. Future versions of Code Llama - Instruct will be released as we improve model safety with community feedback. **License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) **Research Paper** More information can be found in the paper "[Code Llama: Open Foundation Models for Code](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/)" or its [arXiv page](https://arxiv.org/abs/2308.12950). ## Intended Use **Intended Use Cases** Code Llama and its variants is intended for commercial and research use in English and relevant programming languages. The base model Code Llama can be adapted for a variety of code synthesis and understanding tasks, Code Llama - Python is designed specifically to handle the Python programming language, and Code Llama - Instruct is intended to be safer to use for code assistant and generation applications. **Out-of-Scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Code Llama and its variants. ## Hardware and Software **Training Factors** We used custom training libraries. The training and fine-tuning of the released models have been performed Meta’s Research Super Cluster. **Carbon Footprint** In aggregate, training all 9 Code Llama models required 400K GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 65.3 tCO2eq, 100% of which were offset by Meta’s sustainability program. ## Training Data All experiments reported here and the released models have been trained and fine-tuned using the same data as Llama 2 with different weights (see Section 2 and Table 1 in the [research paper](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/) for details). ## Evaluation Results See evaluations for the main models and detailed ablations in Section 3 and safety evaluations in Section 4 of the research paper. ## Ethical Considerations and Limitations Code Llama and its variants are a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Code Llama’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate or objectionable responses to user prompts. 
Therefore, before deploying any applications of Code Llama, developers should perform safety testing and tuning tailored to their specific applications of the model. Please see the Responsible Use Guide available available at [https://ai.meta.com/llama/responsible-user-guide](https://ai.meta.com/llama/responsible-user-guide).
embedding: null | library_name: transformers | pipeline_tag: text-generation
mask_token, card_data, widget_data, model_index, config, transformers_info, spaces, safetensors, transformersInfo: null
modelId: XAgentTeam/XAgentLLaMa-34B-preview
embeddings:
[ -0.4358106851577759, -0.6656532883644104, 0.3460390269756317, 0.4674525260925293, -0.23073223233222961, 0.1449013352394104, -0.1587039977312088, -0.5979025363922119, 0.2311764657497406, 0.5212594270706177, -0.4689624309539795, -0.6813319325447083, -0.5256234407424927, 0.2579749822616577, ...

id: Panchovix/tulu-2-dpo-70b-exl2-6bpw
author: Panchovix
last_modified: 2023-11-30T00:28:11Z
downloads: 1 | likes: 0 | paperswithcode_id: null
tags: [ "transformers", "llama", "text-generation", "arxiv:2305.18290", "arxiv:2311.10702", "license:llama2", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
lastModified: 2023-11-30T00:28:11Z | createdAt: 2023-11-21T02:38:10.000Z | key: null | created: null
card:
--- license: llama2 --- 6bits/bpw quantization of [tulu-2-dpo-70b](https://huggingface.co/allenai/tulu-2-dpo-70b), to be used on exllamav2. Calibration dataset is a cleaned, fixed pippa RP dataset, which does affect the results (in favor) for RP usage. You can find the calibration dataset [here](https://huggingface.co/datasets/royallab/PIPPA-cleaned) I've added a measurement.json file if you want to do your own quants. # Original model page <img src="https://huggingface.co/datasets/allenai/blog-images/resolve/main/tulu-v2/Tulu%20V2%20banner.png" alt="TuluV2 banner" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/> # Model Card for Tulu V2 DPO 70B Tulu is a series of language models that are trained to act as helpful assistants. Tulu V2 DPO 70B is a fine-tuned version of Llama 2 that was trained on on a mix of publicly available, synthetic and human datasets using [Direct Preference Optimization (DPO)](https://arxiv.org/abs/2305.18290). This model is a strong alternative to Llama 2 70b Chat. For more details, read the paper: [Camels in a Changing Climate: Enhancing LM Adaptation with Tulu 2 ](https://arxiv.org/abs/2311.10702). ## Model description - **Model type:** The flagship model of a suite of instruction and RLHF tuned chat models on a mix of publicly available, synthetic and human-created datasets. - **Language(s) (NLP):** Primarily English - **License:** [AI2 ImpACT](https://allenai.org/impact-license) Low-risk license. - **Finetuned from model:** [meta-llama/Llama-2-70b-hf](https://huggingface.co/meta-llama/Llama-2-70b-hf) ### Model Sources - **Repository:** https://github.com/allenai/https://github.com/allenai/open-instruct - **DPO Recipe:** The DPO recipe is from the [Zephyr Beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) model - **Model Family:** Other models and the dataset are found in the [Tulu V2 collection](https://huggingface.co/collections/allenai/tulu-v2-suite-6551b56e743e6349aab45101). ## Performance | Model | Size | Alignment | MT-Bench (score) | AlpacaEval (win rate %) | |-------------|-----|----|---------------|--------------| | **Tulu-v2-7b** 🐪 | **7B** | **SFT** | **6.30** | **73.9** | | **Tulu-v2-dpo-7b** 🐪 | **7B** | **DPO** | **6.29** | **85.1** | | **Tulu-v2-13b** 🐪 | **13B** | **SFT** | **6.70** | **78.9** | | **Tulu-v2-dpo-13b** 🐪 | **13B** | **DPO** | **7.00** | **89.5** | | **Tulu-v2-70b** 🐪 | **70B** | **SFT** | **7.49** | **86.6** | | **Tulu-v2-dpo-70b** 🐪 | **70B** | **DPO** | **7.89** | **95.1** | ## Input Format The model is trained to use the following format (note the newlines): ``` <|user|> Your message here! <|assistant|> ``` For best results, format all inputs in this manner. **Make sure to include a newline after `<|assistant|>`, this can affect generation quality quite a bit.** ## Intended uses & limitations The model was initially fine-tuned on a filtered and preprocessed of the [Tulu V2 mix dataset](https://huggingface.co/datasets/allenai/tulu-v2-sft-mixture), which contains a diverse range of human created instructions and synthetic dialogues generated primarily by other LLMs. We then further aligned the model with a [Jax DPO trainer](https://github.com/hamishivi/EasyLM/blob/main/EasyLM/models/llama/llama_train_dpo.py) built on [EasyLM](https://github.com/young-geng/EasyLM) on the [openbmb/UltraFeedback](https://huggingface.co/datasets/openbmb/UltraFeedback) dataset, which contains 64k prompts and model completions that are ranked by GPT-4. 
<!-- You can find the datasets used for training Tulu V2 [here]() Here's how you can run the model using the `pipeline()` function from 🤗 Transformers: ```python # Install transformers from source - only needed for versions <= v4.34 # pip install git+https://github.com/huggingface/transformers.git # pip install accelerate import torch from transformers import pipeline pipe = pipeline("text-generation", model="HuggingFaceH4/tulu-2-dpo-70b", torch_dtype=torch.bfloat16, device_map="auto") # We use the tokenizer's chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating messages = [ { "role": "system", "content": "You are a friendly chatbot who always responds in the style of a pirate", }, {"role": "user", "content": "How many helicopters can a human eat in one sitting?"}, ] prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) # <|system|> # You are a friendly chatbot who always responds in the style of a pirate.</s> # <|user|> # How many helicopters can a human eat in one sitting?</s> # <|assistant|> # Ah, me hearty matey! But yer question be a puzzler! A human cannot eat a helicopter in one sitting, as helicopters are not edible. They be made of metal, plastic, and other materials, not food! ```--> ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> The Tulu models have not been aligned to generate safe completions within the RLHF phase or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so). It is also unknown what the size and composition of the corpus was used to train the base Llama 2 models, however it is likely to have included a mix of Web data and technical sources like books and code. See the [Falcon 180B model card](https://huggingface.co/tiiuae/falcon-180B#training-data) for an example of this. ### Training hyperparameters The following hyperparameters were used during DPO training: - learning_rate: 5e-07 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3.0 ## Citation If you find Tulu 2 is useful in your work, please cite it with: ``` @misc{ivison2023camels, title={Camels in a Changing Climate: Enhancing LM Adaptation with Tulu 2}, author={Hamish Ivison and Yizhong Wang and Valentina Pyatkin and Nathan Lambert and Matthew Peters and Pradeep Dasigi and Joel Jang and David Wadden and Noah A. Smith and Iz Beltagy and Hannaneh Hajishirzi}, year={2023}, eprint={2311.10702}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` *Model card adapted from [Zephyr Beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta/blob/main/README.md)*
embedding: null | library_name: transformers | pipeline_tag: text-generation
mask_token, card_data, widget_data, model_index, config, transformers_info, spaces, safetensors, transformersInfo: null
modelId: Panchovix/tulu-2-dpo-70b-exl2-6bpw
embeddings:
[ -0.24867422878742218, -0.6921349763870239, -0.07885131984949112, 0.22669601440429688, -0.3369646966457367, -0.009180608205497265, -0.005153032951056957, -0.6578662991523743, 0.16738085448741913, 0.16639567911624908, -0.4695321321487427, -0.12821123003959656, -0.6166812777519226, -0.0313383...
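
The Tulu card above stresses its exact chat format, including the newline after `<|assistant|>`. A small helper that reproduces the documented format:

```python
# Reproduces the input format documented in the card; note the trailing newline
# after <|assistant|>, which the card says can affect generation quality quite a bit.
def tulu_prompt(user_message: str) -> str:
    return f"<|user|>\n{user_message}\n<|assistant|>\n"

print(tulu_prompt("Summarize DPO in one sentence."))
```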

id: harsharora/dummy
author: harsharora
last_modified: 2023-11-29T09:42:06Z
downloads: 1 | likes: 0 | paperswithcode_id: null
tags: [ "transformers", "safetensors", "gpt2", "text-classification", "endpoints_compatible", "text-generation-inference", "region:us" ]
lastModified: 2023-11-29T09:42:06Z | createdAt: 2023-11-22T13:14:41.000Z | key: null | created: null
card: Entry not found
embedding: null | library_name: transformers | pipeline_tag: text-classification
mask_token, card_data, widget_data, model_index, config, transformers_info, spaces, safetensors, transformersInfo: null
modelId: harsharora/dummy
embeddings:
[ -0.32276517152786255, -0.22568443417549133, 0.8622257113456726, 0.43461528420448303, -0.5282989740371704, 0.7012966275215149, 0.7915717363357544, 0.0761859193444252, 0.7746025323867798, 0.2563219666481018, -0.785281777381897, -0.2257383167743683, -0.9104472994804382, 0.571567177772522, -...

id: lliillyy/M87-deblur-MAD-ksize-49
author: lliillyy
last_modified: 2023-11-29T12:58:49Z
downloads: 1 | likes: 0 | paperswithcode_id: null
tags: [ "diffusers", "diffusers:StableDiffusionInstructPix2PixPipeline", "region:us" ]
lastModified: 2023-11-29T12:58:49Z | createdAt: 2023-11-22T18:45:38.000Z | key: null | created: null
card: Entry not found
embedding: null | library_name: diffusers | pipeline_tag: null
mask_token, card_data, widget_data, model_index, config, transformers_info, spaces, safetensors, transformersInfo: null
modelId: lliillyy/M87-deblur-MAD-ksize-49
embeddings:
[ -0.32276517152786255, -0.22568443417549133, 0.8622257113456726, 0.43461528420448303, -0.5282989740371704, 0.7012966275215149, 0.7915717363357544, 0.0761859193444252, 0.7746025323867798, 0.2563219666481018, -0.785281777381897, -0.2257383167743683, -0.9104472994804382, 0.571567177772522, -...

id: Roxysun/wav2vec2-large-xls-r-300m-hungarian-colab-finetuned
author: Roxysun
last_modified: 2023-11-29T06:56:01Z
downloads: 1 | likes: 0 | paperswithcode_id: null
tags: [ "transformers", "tensorboard", "safetensors", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:voxpopuli", "base_model:facebook/wav2vec2-lv-60-espeak-cv-ft", "license:apache-2.0", "endpoints_compatible", "region:us" ]
lastModified: 2023-11-29T06:56:01Z | createdAt: 2023-11-25T17:47:22.000Z | key: null | created: null
card:
--- license: apache-2.0 base_model: facebook/wav2vec2-lv-60-espeak-cv-ft tags: - generated_from_trainer datasets: - voxpopuli model-index: - name: wav2vec2-large-xls-r-300m-hungarian-colab-finetuned results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-hungarian-colab-finetuned This model is a fine-tuned version of [facebook/wav2vec2-lv-60-espeak-cv-ft](https://huggingface.co/facebook/wav2vec2-lv-60-espeak-cv-ft) on the voxpopuli dataset. It achieves the following results on the evaluation set: - eval_loss: 17417.8594 - eval_wer: 0.9967 - eval_runtime: 127.4612 - eval_samples_per_second: 3.923 - eval_steps_per_second: 0.494 - step: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 50 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
embedding: null | library_name: transformers | pipeline_tag: automatic-speech-recognition
mask_token, card_data, widget_data, model_index, config, transformers_info, spaces, safetensors, transformersInfo: null
modelId: Roxysun/wav2vec2-large-xls-r-300m-hungarian-colab-finetuned
embeddings:
[ -0.4485165476799011, -0.866346001625061, 0.08704500645399094, 0.14906342327594757, -0.3162761330604553, -0.470686674118042, -0.31429198384284973, -0.31491848826408386, 0.1366172730922699, 0.4061068594455719, -0.7119404077529907, -0.621077299118042, -0.5401602387428284, -0.22627583146095276...
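
The wav2vec2 card above reports an eval WER of 0.9967, i.e. near-total error, so the checkpoint is likely not usable as-is. For completeness, a minimal ASR sketch; "sample.wav" is a placeholder path:

```python
from transformers import pipeline

# Sketch only; the card's eval WER of 0.9967 suggests the model transcribes poorly.
asr = pipeline(
    "automatic-speech-recognition",
    model="Roxysun/wav2vec2-large-xls-r-300m-hungarian-colab-finetuned",
)
print(asr("sample.wav")["text"])  # placeholder audio path
```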

id: shirishph/distilroberta-base-sentence-transformer
author: shirishph
last_modified: 2023-11-29T05:52:43Z
downloads: 1 | likes: 0 | paperswithcode_id: null
tags: [ "sentence-transformers", "safetensors", "roberta", "feature-extraction", "sentence-similarity", "transformers", "dataset:embedding-data/QQP_triplets", "endpoints_compatible", "region:us" ]
lastModified: 2023-11-29T05:52:43Z | createdAt: 2023-11-26T15:35:42.000Z | key: null | created: null
card:
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers datasets: - embedding-data/QQP_triplets --- # embedding-data/distilroberta-base-sentence-transformer This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('embedding-data/distilroberta-base-sentence-transformer') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('embedding-data/distilroberta-base-sentence-transformer') model = AutoModel.from_pretrained('embedding-data/distilroberta-base-sentence-transformer') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=embedding-data/distilroberta-base-sentence-transformer) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 7 with parameters: ``` {'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.TripletLoss.TripletLoss` with parameters: ``` {'distance_metric': 'TripletDistanceMetric.EUCLIDEAN', 'triplet_margin': 5} ``` Parameters of the fit()-Method: ``` { "epochs": 10, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 7, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
embedding: null | library_name: sentence-transformers | pipeline_tag: sentence-similarity
mask_token, card_data, widget_data, model_index, config, transformers_info, spaces, safetensors, transformersInfo: null
modelId: shirishph/distilroberta-base-sentence-transformer
embeddings:
[ -0.23156428337097168, -0.8767901062965393, 0.32777366042137146, 0.3431543707847595, -0.23687824606895447, -0.3371122181415558, -0.24647082388401031, 0.10456330329179764, 0.15441074967384338, 0.2983841598033905, -0.6429378986358643, -0.6487298607826233, -0.7796305418014526, 0.03095136769115...

id: Arthuerwang/cm-cifar10-32-fix_noise
author: Arthuerwang
last_modified: 2023-11-30T01:27:21Z
downloads: 1 | likes: 0 | paperswithcode_id: null
tags: [ "diffusers", "diffusers:ConsistencyPipeline", "region:us" ]
lastModified: 2023-11-30T01:27:21Z | createdAt: 2023-11-27T01:44:04.000Z | key: null | created: null
card: Entry not found
embedding: null | library_name: diffusers | pipeline_tag: null
mask_token, card_data, widget_data, model_index, config, transformers_info, spaces, safetensors, transformersInfo: null
modelId: Arthuerwang/cm-cifar10-32-fix_noise
embeddings:
[ -0.3227651119232178, -0.22568456828594208, 0.8622261881828308, 0.43461447954177856, -0.5282989740371704, 0.7012965083122253, 0.7915719747543335, 0.0761861652135849, 0.7746025323867798, 0.25632235407829285, -0.7852817177772522, -0.22573819756507874, -0.9104477763175964, 0.5715669393539429, ...

id: Jessica111/esm2_t6_8M_UR50D-finetuned-localization
author: Jessica111
last_modified: 2023-11-29T02:40:08Z
downloads: 1 | likes: 0 | paperswithcode_id: null
tags: [ "transformers", "tensorboard", "safetensors", "esm", "text-classification", "endpoints_compatible", "region:us" ]
lastModified: 2023-11-29T02:40:08Z | createdAt: 2023-11-27T02:39:09.000Z | key: null | created: null
card: Entry not found
embedding: null | library_name: transformers | pipeline_tag: text-classification
mask_token, card_data, widget_data, model_index, config, transformers_info, spaces, safetensors, transformersInfo: null
modelId: Jessica111/esm2_t6_8M_UR50D-finetuned-localization
embeddings:
[ -0.3227651119232178, -0.22568456828594208, 0.8622261881828308, 0.43461447954177856, -0.5282989740371704, 0.7012965083122253, 0.7915719747543335, 0.0761861652135849, 0.7746025323867798, 0.25632235407829285, -0.7852817177772522, -0.22573819756507874, -0.9104477763175964, 0.5715669393539429, ...

id: mesolitica/malaysian-mistral-3B-4096
author: mesolitica
last_modified: 2023-11-29T13:51:22Z
downloads: 1 | likes: 0 | paperswithcode_id: null
tags: [ "transformers", "safetensors", "mistral", "text-generation", "ms", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
lastModified: 2023-11-29T13:51:22Z | createdAt: 2023-11-27T06:19:25.000Z | key: null | created: null
card:
--- language: - ms --- # Pretrain 3B 4096 context length Mistral on Malaysian text README at https://github.com/mesolitica/malaya/tree/5.1/pretrained-model/mistral - Dataset gathered at https://github.com/malaysia-ai/dedup-text-dataset/tree/main/pretrain-llm - We use Ray cluster to train on 5 nodes of 4x A100 80GB, https://github.com/malaysia-ai/jupyter-gpu/tree/main/ray WandB, https://wandb.ai/mesolitica/pretrain-mistral-3b?workspace=user-husein-mesolitica WandB report, https://wandb.ai/mesolitica/pretrain-mistral-3b/reports/Pretrain-Larger-Malaysian-Mistral--Vmlldzo2MDkyOTgz ## how-to ```python from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig import torch TORCH_DTYPE = 'bfloat16' nf4_config = BitsAndBytesConfig( load_in_4bit=True, bnb_4bit_quant_type='nf4', bnb_4bit_use_double_quant=True, bnb_4bit_compute_dtype=getattr(torch, TORCH_DTYPE) ) tokenizer = AutoTokenizer.from_pretrained('mesolitica/malaysian-mistral-3B-4096') model = AutoModelForCausalLM.from_pretrained( 'mesolitica/malaysian-mistral-3B-4096', use_flash_attention_2 = True, quantization_config = nf4_config ) prompt = '<s>nama saya' inputs = tokenizer([prompt], return_tensors='pt', add_special_tokens=False).to('cuda') generate_kwargs = dict( inputs, max_new_tokens=512, top_p=0.95, top_k=50, temperature=0.9, do_sample=True, num_beams=1, repetition_penalty=1.05, ) r = model.generate(**generate_kwargs) ```
embedding: null | library_name: transformers | pipeline_tag: text-generation
mask_token, card_data, widget_data, model_index, config, transformers_info, spaces, safetensors, transformersInfo: null
modelId: mesolitica/malaysian-mistral-3B-4096
embeddings:
[ -0.3388652205467224, -0.4982725977897644, 0.31697115302085876, 0.3585938811302185, -0.5160531401634216, -0.033957693725824356, -0.07294961810112, -0.21148604154586792, 0.01263821218162775, 0.057183004915714264, -0.5984597206115723, -0.49510443210601807, -0.6179070472717285, 0.1645989865064...

id: spokkazo/codeparrot-ds
author: spokkazo
last_modified: 2023-11-29T07:53:20Z
downloads: 1 | likes: 0 | paperswithcode_id: null
tags: [ "transformers", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "base_model:gpt2", "endpoints_compatible", "text-generation-inference", "region:us" ]
lastModified: 2023-11-29T07:53:20Z | createdAt: 2023-11-27T11:17:17.000Z | key: null | created: null
card:
--- base_model: gpt2 tags: - generated_from_trainer model-index: - name: codeparrot-ds results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # codeparrot-ds This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.0625 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1000 - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 2.5676 | 0.08 | 5000 | 1.7415 | | 1.6815 | 0.15 | 10000 | 1.5287 | | 1.5345 | 0.23 | 15000 | 1.4235 | | 1.4547 | 0.31 | 20000 | 1.3586 | | 1.3972 | 0.38 | 25000 | 1.3040 | | 1.3449 | 0.46 | 30000 | 1.2580 | | 1.3003 | 0.54 | 35000 | 1.2138 | | 1.2541 | 0.61 | 40000 | 1.1734 | | 1.2114 | 0.69 | 45000 | 1.1337 | | 1.1762 | 0.77 | 50000 | 1.1006 | | 1.145 | 0.84 | 55000 | 1.0776 | | 1.1265 | 0.92 | 60000 | 1.0654 | | 1.1164 | 1.0 | 65000 | 1.0625 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.1+cu121 - Datasets 2.15.0 - Tokenizers 0.15.0
null
transformers
text-generation
null
null
null
null
null
null
null
null
null
spokkazo/codeparrot-ds
[ -0.5489466786384583, -0.639051616191864, 0.12879468500614166, 0.06926167756319046, -0.37074384093284607, -0.1915244460105896, -0.1214694231748581, -0.17618073523044586, 0.06086328625679016, 0.26435601711273193, -0.7860515713691711, -0.6245256662368774, -0.7902952432632446, -0.2501385211944...
mesolitica/malaysian-mistral-1.1B-4096
mesolitica
2023-11-29T13:50:27Z
1
1
null
[ "transformers", "safetensors", "mistral", "text-generation", "ms", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
2023-11-29T13:50:27Z
2023-11-27T14:33:33.000Z
null
null
---
language:
- ms
---

# Pretrain 1.1B 4096 context length Mistral on Malaysian text

README at https://github.com/mesolitica/malaya/tree/5.1/pretrained-model/mistral

- Dataset gathered at https://github.com/malaysia-ai/dedup-text-dataset/tree/main/pretrain-llm
- We used a Ray cluster to train on 5 nodes of 4x A100 80GB, https://github.com/malaysia-ai/jupyter-gpu/tree/main/ray

WandB, https://wandb.ai/mesolitica/pretrain-mistral-1.1b?workspace=user-husein-mesolitica

WandB report, https://wandb.ai/mesolitica/pretrain-mistral-3b/reports/Pretrain-Larger-Malaysian-Mistral--Vmlldzo2MDkyOTgz

## how-to

```python
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
import torch

TORCH_DTYPE = 'bfloat16'
nf4_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type='nf4',
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=getattr(torch, TORCH_DTYPE)
)

tokenizer = AutoTokenizer.from_pretrained('mesolitica/malaysian-mistral-1.1B-4096')
model = AutoModelForCausalLM.from_pretrained(
    'mesolitica/malaysian-mistral-1.1B-4096',
    use_flash_attention_2=True,
    quantization_config=nf4_config
)

prompt = '<s>nama saya'
inputs = tokenizer([prompt], return_tensors='pt', add_special_tokens=False).to('cuda')

generate_kwargs = dict(
    inputs,
    max_new_tokens=512,
    top_p=0.95,
    top_k=50,
    temperature=0.9,
    do_sample=True,
    num_beams=1,
    repetition_penalty=1.05,
)
r = model.generate(**generate_kwargs)
# decode the generated ids back into text
print(tokenizer.decode(r[0]))
```
null
transformers
text-generation
null
null
null
null
null
null
null
null
null
mesolitica/malaysian-mistral-1.1B-4096
[ -0.3571856915950775, -0.49900364875793457, 0.2712843716144562, 0.33273881673812866, -0.5329949259757996, -0.014305605553090572, -0.0889541506767273, -0.1668493002653122, 0.056764136999845505, 0.04527469724416733, -0.6631364226341248, -0.5114322900772095, -0.6479313373565674, 0.159915924072...
BauyrjanQ/wav2vec2-large-mms-1b-kazakh-ksc2-4b-10ep_3rd
BauyrjanQ
2023-11-30T00:35:40Z
1
0
null
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "endpoints_compatible", "region:us" ]
2023-11-30T00:35:40Z
2023-11-27T14:34:54.000Z
null
null
Entry not found
null
transformers
automatic-speech-recognition
null
null
null
null
null
null
null
null
null
BauyrjanQ/wav2vec2-large-mms-1b-kazakh-ksc2-4b-10ep_3rd
[ -0.3227651119232178, -0.22568456828594208, 0.8622261881828308, 0.43461447954177856, -0.5282989740371704, 0.7012965083122253, 0.7915719747543335, 0.0761861652135849, 0.7746025323867798, 0.25632235407829285, -0.7852817177772522, -0.22573819756507874, -0.9104477763175964, 0.5715669393539429, ...
RyotaroOKabe/ceq_mgpt_v1.2
RyotaroOKabe
2023-11-29T10:45:33Z
1
0
null
[ "transformers", "pytorch", "gpt2", "text-generation", "endpoints_compatible", "text-generation-inference", "region:us" ]
2023-11-29T10:45:33Z
2023-11-28T01:05:25.000Z
null
null
Entry not found
null
transformers
text-generation
null
null
null
null
null
null
null
null
null
RyotaroOKabe/ceq_mgpt_v1.2
[ -0.3227651119232178, -0.22568456828594208, 0.8622261881828308, 0.43461447954177856, -0.5282989740371704, 0.7012965083122253, 0.7915719747543335, 0.0761861652135849, 0.7746025323867798, 0.25632235407829285, -0.7852817177772522, -0.22573819756507874, -0.9104477763175964, 0.5715669393539429, ...
lakxs/lock
lakxs
2023-11-29T10:30:00Z
1
0
null
[ "transformers", "safetensors", "git", "text-generation", "endpoints_compatible", "region:us" ]
2023-11-29T10:30:00Z
2023-11-28T12:41:32.000Z
null
null
Entry not found
null
transformers
text-generation
null
null
null
null
null
null
null
null
null
lakxs/lock
[ -0.3227651119232178, -0.22568456828594208, 0.8622261881828308, 0.43461447954177856, -0.5282989740371704, 0.7012965083122253, 0.7915719747543335, 0.0761861652135849, 0.7746025323867798, 0.25632235407829285, -0.7852817177772522, -0.22573819756507874, -0.9104477763175964, 0.5715669393539429, ...
Jungwonchang/whisper_large-v2-Full-SPGIspeech-xs
Jungwonchang
2023-11-29T09:41:26Z
1
0
null
[ "transformers", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "en", "dataset:Jungwonchang/spgispeech_xs", "base_model:openai/whisper-large-v2", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
2023-11-29T09:41:26Z
2023-11-28T15:45:28.000Z
null
null
---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- Jungwonchang/spgispeech_xs
base_model: openai/whisper-large-v2
model-index:
- name: openai/whisper-large-v2, all the parameters updated for 5 epochs
  results:
  - task:
      type: automatic-speech-recognition
      name: Automatic Speech Recognition
    dataset:
      name: Test set for spgispeech
      type: kensho/spgispeech
      config: test
      split: test
    metrics:
    - type: wer
      value: 6.85
      name: WER
    - type: cer
      value: 2.02
      name: CER
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# openai/whisper-large-v2, all the parameters updated for 5 epochs

This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on a custom 2-hour subset of the SPGIspeech dataset (Jungwonchang/spgispeech_xs).

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- training_steps: 120
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- Transformers 4.36.0.dev0
- Pytorch 1.12.1+cu116
- Datasets 2.4.0
- Tokenizers 0.15.0
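For reference, here is a minimal transcription sketch (not part of the original card); `audio.wav` is a placeholder path you must supply yourself, and the `transformers` pipeline handles feature extraction and decoding.

```python
from transformers import pipeline

# Build an ASR pipeline around the fine-tuned Whisper checkpoint.
asr = pipeline(
    "automatic-speech-recognition",
    model="Jungwonchang/whisper_large-v2-Full-SPGIspeech-xs",
)

# "audio.wav" is a placeholder; supply your own audio file.
print(asr("audio.wav")["text"])
```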
null
transformers
automatic-speech-recognition
null
null
null
null
null
null
null
null
null
Jungwonchang/whisper_large-v2-Full-SPGIspeech-xs
[ -0.32260462641716003, -0.6793113946914673, 0.1649094969034195, 0.3097202181816101, -0.47528499364852905, -0.6283726692199707, -0.3624480664730072, -0.5395523905754089, 0.20050865411758423, 0.3386032283306122, -0.7011390924453735, -0.3688296973705292, -0.6554034352302551, -0.247388809919357...
Realgon/roberta_imdb_padding0model
Realgon
2023-11-29T05:43:13Z
1
0
null
[ "transformers", "pytorch", "roberta", "text-classification", "endpoints_compatible", "region:us" ]
2023-11-29T05:43:13Z
2023-11-28T15:52:19.000Z
null
null
Entry not found
null
transformers
text-classification
null
null
null
null
null
null
null
null
null
Realgon/roberta_imdb_padding0model
[ -0.32276463508605957, -0.2256849706172943, 0.8622266054153442, 0.4346153736114502, -0.5282987952232361, 0.7012974619865417, 0.7915722131729126, 0.07618652284145355, 0.7746030688285828, 0.2563217282295227, -0.7852814793586731, -0.22573867440223694, -0.9104479551315308, 0.571567177772522, ...
benayas/llama-2-7b-snips_v3
benayas
2023-11-29T23:59:52Z
1
0
null
[ "transformers", "safetensors", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
2023-11-29T23:59:52Z
2023-11-28T17:21:44.000Z
null
null
Entry not found
null
transformers
text-generation
null
null
null
null
null
null
null
null
null
benayas/llama-2-7b-snips_v3
[ -0.32276463508605957, -0.2256849706172943, 0.8622266054153442, 0.4346153736114502, -0.5282987952232361, 0.7012974619865417, 0.7915722131729126, 0.07618652284145355, 0.7746030688285828, 0.2563217282295227, -0.7852814793586731, -0.22573867440223694, -0.9104479551315308, 0.571567177772522, ...
nlile/PE-13b-full
nlile
2023-11-30T01:13:26Z
1
0
null
[ "transformers", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
2023-11-30T01:13:26Z
2023-11-28T17:43:02.000Z
null
null
---
base_model: stabilityai/StableBeluga-13B
tags:
- generated_from_trainer
model-index:
- name: PE-13b-full
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# PE-13b-full

This model is a fine-tuned version of [stabilityai/StableBeluga-13B](https://huggingface.co/stabilityai/StableBeluga-13B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0094
- Rewards/chosen: -1.2833
- Rewards/rejected: -29.7294
- Rewards/accuracies: 0.9916
- Rewards/margins: 28.4460
- Logps/rejected: -121.9200
- Logps/chosen: -84.7524
- Logits/rejected: -2.1605
- Logits/chosen: -2.4403

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-07
- train_batch_size: 1
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| 0.5085 | 0.05 | 100 | 0.4978 | 0.1241 | -0.3334 | 0.9525 | 0.4575 | -63.1282 | -81.9376 | -2.0870 | -2.3586 |
| 0.1966 | 0.09 | 200 | 0.2003 | 0.5022 | -1.3704 | 0.9804 | 1.8726 | -65.2020 | -81.1812 | -2.0918 | -2.3650 |
| 0.0612 | 0.14 | 300 | 0.0656 | 0.8997 | -3.3315 | 0.9888 | 4.2312 | -69.1243 | -80.3863 | -2.0887 | -2.3741 |
| 0.029 | 0.18 | 400 | 0.0356 | 0.9536 | -5.0607 | 0.9944 | 6.0143 | -72.5827 | -80.2785 | -2.0905 | -2.3804 |
| 0.0187 | 0.23 | 500 | 0.0201 | 0.9079 | -7.5059 | 0.9888 | 8.4139 | -77.4731 | -80.3699 | -2.0974 | -2.3915 |
| 0.0112 | 0.27 | 600 | 0.0130 | 0.7188 | -10.4500 | 0.9916 | 11.1688 | -83.3612 | -80.7481 | -2.0987 | -2.3960 |
| 0.0066 | 0.32 | 700 | 0.0102 | 0.6639 | -13.1345 | 0.9916 | 13.7984 | -88.7303 | -80.8579 | -2.1111 | -2.4104 |
| 0.0088 | 0.37 | 800 | 0.0098 | 0.9128 | -13.1977 | 0.9888 | 14.1105 | -88.8568 | -80.3601 | -2.1031 | -2.4030 |
| 0.0054 | 0.41 | 900 | 0.0092 | 0.6109 | -15.6398 | 0.9888 | 16.2507 | -93.7409 | -80.9640 | -2.1158 | -2.4144 |
| 0.0044 | 0.46 | 1000 | 0.0094 | 0.9982 | -16.0071 | 0.9916 | 17.0053 | -94.4755 | -80.1893 | -2.0988 | -2.3946 |
| 0.0061 | 0.5 | 1100 | 0.0089 | 0.5504 | -18.0125 | 0.9916 | 18.5630 | -98.4864 | -81.0849 | -2.0991 | -2.3955 |
| 0.024 | 0.55 | 1200 | 0.0088 | 0.4877 | -16.6683 | 0.9916 | 17.1561 | -95.7980 | -81.2103 | -2.0748 | -2.3633 |
| 0.0039 | 0.59 | 1300 | 0.0087 | 0.3755 | -18.5093 | 0.9916 | 18.8848 | -99.4799 | -81.4347 | -2.0746 | -2.3623 |
| 0.0051 | 0.64 | 1400 | 0.0086 | 0.1176 | -20.5558 | 0.9916 | 20.6734 | -103.5730 | -81.9506 | -2.0819 | -2.3738 |
| 0.0023 | 0.68 | 1500 | 0.0089 | 0.1552 | -20.0740 | 0.9888 | 20.2292 | -102.6092 | -81.8754 | -2.0813 | -2.3667 |
| 0.0027 | 0.73 | 1600 | 0.0089 | -0.5025 | -20.7978 | 0.9888 | 20.2953 | -104.0569 | -83.1908 | -2.1179 | -2.4078 |
| 0.0031 | 0.78 | 1700 | 0.0085 | -0.6314 | -21.0492 | 0.9916 | 20.4178 | -104.5597 | -83.4485 | -2.0915 | -2.3773 |
| 0.0049 | 0.82 | 1800 | 0.0085 | -0.7786 | -21.3333 | 0.9916 | 20.5547 | -105.1278 | -83.7429 | -2.0670 | -2.3504 |
| 0.0023 | 0.87 | 1900 | 0.0084 | -0.7496 | -22.3377 | 0.9944 | 21.5880 | -107.1367 | -83.6850 | -2.0729 | -2.3547 |
| 0.0067 | 0.91 | 2000 | 0.0086 | -0.8126 | -22.8024 | 0.9916 | 21.9899 | -108.0662 | -83.8109 | -2.0651 | -2.3472 |
| 0.0041 | 0.96 | 2100 | 0.0082 | -0.7903 | -21.8379 | 0.9944 | 21.0476 | -106.1371 | -83.7663 | -2.0363 | -2.3137 |
| 0.0025 | 1.0 | 2200 | 0.0079 | -0.4489 | -21.4451 | 0.9916 | 20.9963 | -105.3516 | -83.0835 | -2.0303 | -2.3074 |
| 0.0023 | 1.05 | 2300 | 0.0082 | -1.1267 | -22.7620 | 0.9944 | 21.6353 | -107.9852 | -84.4391 | -2.0477 | -2.3260 |
| 0.0055 | 1.1 | 2400 | 0.0085 | -1.4969 | -24.0568 | 0.9888 | 22.5599 | -110.5749 | -85.1796 | -2.0616 | -2.3384 |
| 0.0139 | 1.14 | 2500 | 0.0077 | 0.4564 | -20.3860 | 0.9916 | 20.8424 | -103.2333 | -81.2730 | -2.0453 | -2.3206 |
| 0.0023 | 1.19 | 2600 | 0.0081 | 0.0858 | -21.9640 | 0.9916 | 22.0498 | -106.3893 | -82.0141 | -2.0528 | -2.3273 |
| 0.0046 | 1.23 | 2700 | 0.0083 | -0.2543 | -23.4016 | 0.9916 | 23.1473 | -109.2646 | -82.6943 | -2.0668 | -2.3457 |
| 0.0033 | 1.28 | 2800 | 0.0083 | -0.3317 | -23.7872 | 0.9916 | 23.4555 | -110.0356 | -82.8491 | -2.0884 | -2.3650 |
| 0.0023 | 1.32 | 2900 | 0.0084 | -0.2753 | -24.3682 | 0.9916 | 24.0929 | -111.1976 | -82.7362 | -2.1054 | -2.3879 |
| 0.0034 | 1.37 | 3000 | 0.0081 | 0.4328 | -23.3162 | 0.9916 | 23.7491 | -109.0938 | -81.3201 | -2.0817 | -2.3565 |
| 0.0033 | 1.42 | 3100 | 0.0082 | -0.0254 | -23.7390 | 0.9944 | 23.7136 | -109.9394 | -82.2366 | -2.0706 | -2.3447 |
| 0.0033 | 1.46 | 3200 | 0.0086 | -0.7680 | -24.0452 | 0.9916 | 23.2772 | -110.5517 | -83.7218 | -2.0760 | -2.3543 |
| 0.0032 | 1.51 | 3300 | 0.0086 | -0.0016 | -23.5161 | 0.9944 | 23.5145 | -109.4934 | -82.1889 | -2.0881 | -2.3655 |
| 0.0011 | 1.55 | 3400 | 0.0084 | 0.0195 | -24.2635 | 0.9944 | 24.2831 | -110.9884 | -82.1467 | -2.0878 | -2.3667 |
| 0.0002 | 1.6 | 3500 | 0.0087 | 0.0421 | -24.8306 | 0.9916 | 24.8728 | -112.1225 | -82.1015 | -2.0890 | -2.3698 |
| 0.0034 | 1.64 | 3600 | 0.0086 | -0.2729 | -25.8106 | 0.9916 | 25.5377 | -114.0825 | -82.7315 | -2.1030 | -2.3851 |
| 0.0027 | 1.69 | 3700 | 0.0086 | 0.0339 | -25.0221 | 0.9916 | 25.0560 | -112.5055 | -82.1179 | -2.1300 | -2.4147 |
| 0.0056 | 1.73 | 3800 | 0.0082 | 0.1800 | -23.6173 | 0.9916 | 23.7974 | -109.6960 | -81.8257 | -2.1140 | -2.3980 |
| 0.0026 | 1.78 | 3900 | 0.0083 | -0.0334 | -24.6060 | 0.9944 | 24.5725 | -111.6733 | -82.2526 | -2.1140 | -2.3965 |
| 0.0036 | 1.83 | 4000 | 0.0080 | -0.2511 | -23.0433 | 0.9916 | 22.7923 | -108.5479 | -82.6879 | -2.1348 | -2.4167 |
| 0.0044 | 1.87 | 4100 | 0.0084 | -0.4259 | -23.7811 | 0.9916 | 23.3551 | -110.0234 | -83.0376 | -2.1314 | -2.4160 |
| 0.0022 | 1.92 | 4200 | 0.0083 | -0.5710 | -23.2360 | 0.9944 | 22.6650 | -108.9332 | -83.3277 | -2.1369 | -2.4196 |
| 0.0044 | 1.96 | 4300 | 0.0085 | -0.6363 | -24.6474 | 0.9972 | 24.0111 | -111.7560 | -83.4583 | -2.1307 | -2.4109 |
| 0.0023 | 2.01 | 4400 | 0.0085 | -0.6133 | -24.9492 | 0.9916 | 24.3359 | -112.3597 | -83.4124 | -2.1322 | -2.4134 |
| 0.0033 | 2.05 | 4500 | 0.0085 | -0.7101 | -25.5054 | 0.9916 | 24.7953 | -113.4721 | -83.6059 | -2.1326 | -2.4142 |
| 0.0023 | 2.1 | 4600 | 0.0087 | -0.7855 | -26.0511 | 0.9916 | 25.2656 | -114.5634 | -83.7567 | -2.1333 | -2.4152 |
| 0.0011 | 2.15 | 4700 | 0.0088 | -0.9006 | -26.5845 | 0.9944 | 25.6839 | -115.6303 | -83.9870 | -2.1369 | -2.4198 |
| 0.0065 | 2.19 | 4800 | 0.0088 | -0.7570 | -26.8960 | 0.9916 | 26.1390 | -116.2533 | -83.6997 | -2.1393 | -2.4198 |
| 0.0022 | 2.24 | 4900 | 0.0091 | -0.9581 | -27.9431 | 0.9916 | 26.9850 | -118.3475 | -84.1019 | -2.1428 | -2.4245 |
| 0.0026 | 2.28 | 5000 | 0.0091 | -1.2522 | -28.8309 | 0.9944 | 27.5788 | -120.1232 | -84.6901 | -2.1479 | -2.4287 |
| 0.0033 | 2.33 | 5100 | 0.0089 | -0.8602 | -28.7323 | 0.9916 | 27.8721 | -119.9259 | -83.9062 | -2.1522 | -2.4328 |
| 0.0041 | 2.37 | 5200 | 0.0091 | -1.0405 | -29.2861 | 0.9916 | 28.2456 | -121.0335 | -84.2668 | -2.1536 | -2.4343 |
| 0.0023 | 2.42 | 5300 | 0.0093 | -1.1323 | -29.5240 | 0.9916 | 28.3917 | -121.5093 | -84.4504 | -2.1529 | -2.4336 |
| 0.0022 | 2.46 | 5400 | 0.0092 | -1.2202 | -29.2127 | 0.9916 | 27.9925 | -120.8866 | -84.6261 | -2.1595 | -2.4416 |
| 0.0 | 2.51 | 5500 | 0.0093 | -1.4371 | -29.7063 | 0.9916 | 28.2692 | -121.8739 | -85.0599 | -2.1609 | -2.4404 |
| 0.0022 | 2.56 | 5600 | 0.0095 | -1.4397 | -30.0202 | 0.9944 | 28.5804 | -122.5016 | -85.0652 | -2.1584 | -2.4383 |
| 0.0011 | 2.6 | 5700 | 0.0096 | -1.6125 | -30.0945 | 0.9916 | 28.4820 | -122.6504 | -85.4108 | -2.1601 | -2.4395 |
| 0.0053 | 2.65 | 5800 | 0.0095 | -1.5638 | -30.0025 | 0.9944 | 28.4387 | -122.4663 | -85.3133 | -2.1615 | -2.4398 |
| 0.003 | 2.69 | 5900 | 0.0095 | -1.5904 | -30.1980 | 0.9916 | 28.6076 | -122.8572 | -85.3666 | -2.1606 | -2.4406 |
| 0.0011 | 2.74 | 6000 | 0.0094 | -1.5286 | -30.0882 | 0.9944 | 28.5596 | -122.6377 | -85.2429 | -2.1615 | -2.4403 |
| 0.0008 | 2.78 | 6100 | 0.0095 | -1.4405 | -30.0174 | 0.9916 | 28.5769 | -122.4961 | -85.0667 | -2.1615 | -2.4400 |
| 0.0022 | 2.83 | 6200 | 0.0093 | -1.3508 | -29.9317 | 0.9916 | 28.5808 | -122.3246 | -84.8874 | -2.1599 | -2.4395 |
| 0.0019 | 2.88 | 6300 | 0.0093 | -1.2416 | -29.6525 | 0.9916 | 28.4109 | -121.7663 | -84.6690 | -2.1620 | -2.4415 |
| 0.0034 | 2.92 | 6400 | 0.0093 | -1.2995 | -29.7927 | 0.9916 | 28.4932 | -122.0468 | -84.7848 | -2.1616 | -2.4412 |
| 0.0014 | 2.97 | 6500 | 0.0092 | -1.2574 | -29.7200 | 0.9916 | 28.4626 | -121.9014 | -84.7006 | -2.1595 | -2.4408 |

### Framework versions

- Transformers 4.35.0
- Pytorch 2.1.1+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
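The card documents DPO-style reward metrics but no usage snippet, so here is a minimal generation sketch, not from the card. It assumes the repo ships tokenizer files and that you have enough GPU memory for a 13B model in float16; the card doesn't document a prompt format, so a plain prompt is used.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nlile/PE-13b-full"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # halves memory versus float32
    device_map="auto",          # spread layers across available GPUs
)

inputs = tokenizer("Tell me about AI.", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```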
null
transformers
text-generation
null
null
null
null
null
null
null
null
null
nlile/PE-13b-full
[ -0.7519618272781372, -0.543063223361969, 0.5611414313316345, 0.18358097970485687, -0.09676334261894226, -0.02890358865261078, 0.038764555007219315, 0.026853615418076515, 0.9388469457626343, 0.5116261839866638, -0.6370484828948975, -0.7881632447242737, -0.825914740562439, -0.153906241059303...
esmarquez17/films-hate-offensive-roberta
esmarquez17
2023-11-29T23:47:21Z
1
0
null
[ "transformers", "safetensors", "roberta", "text-classification", "endpoints_compatible", "region:us" ]
2023-11-29T23:47:21Z
2023-11-28T19:40:34.000Z
null
null
Entry not found
null
transformers
text-classification
null
null
null
null
null
null
null
null
null
esmarquez17/films-hate-offensive-roberta
[ -0.3227651119232178, -0.22568456828594208, 0.8622261881828308, 0.43461447954177856, -0.5282989740371704, 0.7012965083122253, 0.7915719747543335, 0.0761861652135849, 0.7746025323867798, 0.25632235407829285, -0.7852817177772522, -0.22573819756507874, -0.9104477763175964, 0.5715669393539429, ...
MarkrAI/DopeorNope-Maestro-v1-13B
MarkrAI
2023-11-29T14:22:37Z
1
0
null
[ "transformers", "safetensors", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
2023-11-29T14:22:37Z
2023-11-28T22:52:04.000Z
null
null
Entry not found
null
transformers
text-generation
null
null
null
null
null
null
null
null
null
MarkrAI/DopeorNope-Maestro-v1-13B
[ -0.3227651119232178, -0.22568456828594208, 0.8622261881828308, 0.43461447954177856, -0.5282989740371704, 0.7012965083122253, 0.7915719747543335, 0.0761861652135849, 0.7746025323867798, 0.25632235407829285, -0.7852817177772522, -0.22573819756507874, -0.9104477763175964, 0.5715669393539429, ...
wons/llama2-13b-dpo-test-v0.2
wons
2023-11-29T14:05:44Z
1
0
null
[ "transformers", "safetensors", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
2023-11-29T14:05:44Z
2023-11-29T01:51:51.000Z
null
null
Entry not found
null
transformers
text-generation
null
null
null
null
null
null
null
null
null
wons/llama2-13b-dpo-test-v0.2
[ -0.3227651119232178, -0.22568456828594208, 0.8622261881828308, 0.43461447954177856, -0.5282989740371704, 0.7012965083122253, 0.7915719747543335, 0.0761861652135849, 0.7746025323867798, 0.25632235407829285, -0.7852817177772522, -0.22573819756507874, -0.9104477763175964, 0.5715669393539429, ...
wnic00/distilbert-new
wnic00
2023-11-29T23:32:54Z
1
0
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "base_model:lxyuan/distilbert-base-multilingual-cased-sentiments-student", "license:apache-2.0", "endpoints_compatible", "region:us" ]
2023-11-29T23:32:54Z
2023-11-29T01:57:14.000Z
null
null
---
license: apache-2.0
base_model: lxyuan/distilbert-base-multilingual-cased-sentiments-student
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-new
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert-new

This model is a fine-tuned version of [lxyuan/distilbert-base-multilingual-cased-sentiments-student](https://huggingface.co/lxyuan/distilbert-base-multilingual-cased-sentiments-student) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9321
- Accuracy: 0.5589

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:---:|:---:|:---:|:---:|:---:|
| 0.9458 | 1.0 | 3203 | 0.9259 | 0.5578 |
| 0.844 | 2.0 | 6406 | 0.9321 | 0.5589 |

### Framework versions

- Transformers 4.31.0
- Pytorch 2.0.1+cpu
- Datasets 2.14.4
- Tokenizers 0.13.0
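A minimal classification sketch (not in the card). The base student model was distilled for positive/neutral/negative sentiment, so this fine-tune most likely keeps a similar label set; treat that as an assumption and inspect the returned label names.

```python
from transformers import pipeline

clf = pipeline("text-classification", model="wnic00/distilbert-new")

# The positive/neutral/negative label scheme is assumed from the base model,
# not documented in this card - check the output labels yourself.
print(clf("I really enjoyed this product."))
```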
null
transformers
text-classification
null
null
null
null
null
null
null
null
null
wnic00/distilbert-new
[ -0.45870208740234375, -0.7496678233146667, 0.21336370706558228, 0.3526988625526428, -0.380546510219574, -0.24454304575920105, -0.25611987709999084, -0.1060514897108078, 0.0895109698176384, 0.2064531147480011, -0.719499945640564, -0.7090198397636414, -0.7537837624549866, -0.0540732704102993...
gvsridhar/dogbooth
gvsridhar
2023-11-29T02:21:35Z
1
0
null
[ "diffusers", "tensorboard", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "dreambooth", "base_model:stabilityai/stable-diffusion-2-1", "license:creativeml-openrail-m", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
2023-11-29T02:21:35Z
2023-11-29T02:05:53.000Z
null
null
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-1
instance_prompt: a photo of [v]dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---

# DreamBooth - gvsridhar/dogbooth

This is a dreambooth model derived from stabilityai/stable-diffusion-2-1. The weights were trained on a photo of [v]dog using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.

DreamBooth for the text encoder was enabled: False.
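None of these DreamBooth cards include an inference snippet, so here is a minimal sketch (not from the card); it requires a CUDA GPU, and the same pattern applies to the other dogbooth checkpoints in this dump.

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the DreamBooth-tuned Stable Diffusion 2.1 weights.
pipe = StableDiffusionPipeline.from_pretrained(
    "gvsridhar/dogbooth", torch_dtype=torch.float16
).to("cuda")

# "[v]dog" is the instance token the weights were trained on.
image = pipe("a photo of [v]dog in a bucket", num_inference_steps=25).images[0]
image.save("dogbooth.png")
```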
null
diffusers
text-to-image
null
null
null
null
null
null
null
null
null
gvsridhar/dogbooth
[ -0.04818352311849594, -0.3834366500377655, 0.4448612332344055, -0.0551966167986393, -0.46460095047950745, 0.3178912103176117, 0.3299049139022827, -0.3176378011703491, 0.642557680606842, 0.3071306645870209, -0.46502527594566345, -0.37113311886787415, -0.6065519452095032, -0.3090032339096069...
cmanfre4/dogbooth
cmanfre4
2023-11-29T02:28:22Z
1
0
null
[ "diffusers", "tensorboard", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "dreambooth", "base_model:stabilityai/stable-diffusion-2-1", "license:creativeml-openrail-m", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
2023-11-29T02:28:22Z
2023-11-29T02:12:32.000Z
null
null
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-1
instance_prompt: a photo of [v]dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---

# DreamBooth - cmanfre4/dogbooth

This is a dreambooth model derived from stabilityai/stable-diffusion-2-1. The weights were trained on a photo of [v]dog using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.

DreamBooth for the text encoder was enabled: False.
null
diffusers
text-to-image
null
null
null
null
null
null
null
null
null
cmanfre4/dogbooth
[ -0.2506871819496155, -0.3282209634780884, 0.5455455780029297, 0.05066727101802826, -0.4465475380420685, 0.2130964994430542, 0.23455661535263062, -0.32531195878982544, 0.6342615485191345, 0.4060233533382416, -0.5652195811271667, -0.4562259018421173, -0.6462665796279907, -0.15690848231315613...
synrb/dogbooth
synrb
2023-11-29T02:29:25Z
1
0
null
[ "diffusers", "tensorboard", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "dreambooth", "base_model:stabilityai/stable-diffusion-2-1", "license:creativeml-openrail-m", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
2023-11-29T02:29:25Z
2023-11-29T02:13:36.000Z
null
null
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-1
instance_prompt: a photo of [v]dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---

# DreamBooth - synrb/dogbooth

This is a dreambooth model derived from stabilityai/stable-diffusion-2-1. The weights were trained on a photo of [v]dog using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.

DreamBooth for the text encoder was enabled: False.
null
diffusers
text-to-image
null
null
null
null
null
null
null
null
null
synrb/dogbooth
[ -0.1811685711145401, -0.5118058919906616, 0.44771328568458557, 0.025153642520308495, -0.4323124289512634, 0.3011423647403717, 0.344425231218338, -0.4019131362438202, 0.6602323651313782, 0.35717764496803284, -0.47355735301971436, -0.3694740831851959, -0.5959834456443787, -0.2438763529062271...
Bagus/distilhubert-finetuned-gtzan-base-audio-course
Bagus
2023-11-29T03:05:14Z
1
0
null
[ "transformers", "safetensors", "hubert", "audio-classification", "endpoints_compatible", "region:us" ]
2023-11-29T03:05:14Z
2023-11-29T02:18:14.000Z
null
null
Entry not found
null
transformers
audio-classification
null
null
null
null
null
null
null
null
null
Bagus/distilhubert-finetuned-gtzan-base-audio-course
[ -0.32276463508605957, -0.22568437457084656, 0.8622260093688965, 0.43461504578590393, -0.5282986760139465, 0.7012966275215149, 0.7915719747543335, 0.07618647813796997, 0.7746024131774902, 0.2563219368457794, -0.7852815389633179, -0.22573824226856232, -0.910447895526886, 0.5715669393539429, ...
aidanfog/dogbooth
aidanfog
2023-11-29T02:35:59Z
1
0
null
[ "diffusers", "tensorboard", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "dreambooth", "base_model:stabilityai/stable-diffusion-2-1", "license:creativeml-openrail-m", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
2023-11-29T02:35:59Z
2023-11-29T02:20:14.000Z
null
null
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-1
instance_prompt: a photo of [v]dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---

# DreamBooth - aidanfog/dogbooth

This is a dreambooth model derived from stabilityai/stable-diffusion-2-1. The weights were trained on a photo of [v]dog using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.

DreamBooth for the text encoder was enabled: False.
null
diffusers
text-to-image
null
null
null
null
null
null
null
null
null
aidanfog/dogbooth
[ -0.16339567303657532, -0.4788932800292969, 0.4244469702243805, 0.049629468470811844, -0.40485531091690063, 0.22522136569023132, 0.26838600635528564, -0.37574562430381775, 0.6301695108413696, 0.4105544686317444, -0.5273822546005249, -0.42615213990211487, -0.665610671043396, -0.1540633440017...
tekgrl/dogbooth2
tekgrl
2023-11-29T02:39:04Z
1
0
null
[ "diffusers", "tensorboard", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "dreambooth", "base_model:stabilityai/stable-diffusion-2-1", "license:creativeml-openrail-m", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
2023-11-29T02:39:04Z
2023-11-29T02:23:47.000Z
null
null
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-1
instance_prompt: a photo of [v]dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---

# DreamBooth - tekgrl/dogbooth2

This is a dreambooth model derived from stabilityai/stable-diffusion-2-1. The weights were trained on a photo of [v]dog using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.

DreamBooth for the text encoder was enabled: False.
null
diffusers
text-to-image
null
null
null
null
null
null
null
null
null
tekgrl/dogbooth2
[ -0.10212305188179016, -0.45977458357810974, 0.4186967611312866, 0.06966692954301834, -0.42618539929389954, 0.33121028542518616, 0.2977830469608307, -0.3509582579135895, 0.49860742688179016, 0.28497782349586487, -0.4338211715221405, -0.37771499156951904, -0.6441671252250671, -0.248980030417...
royallab/psyonic-cetacean-20B-exl2
royallab
2023-11-29T03:35:42Z
1
0
null
[ "en", "license:other", "region:us" ]
2023-11-29T03:35:42Z
2023-11-29T02:24:37.000Z
null
null
---
license: other
license_name: microsoft-research-license
license_link: LICENSE
language:
- en
---

## Information

This is an Exl2 quantized version of [psyonic-cetacean-20B](https://huggingface.co/jebcarter/psyonic-cetacean-20B).

Please refer to the original creator for more information.

Calibration dataset: [wikitext](https://huggingface.co/datasets/wikitext/tree/refs%2Fconvert%2Fparquet/wikitext-2-v1/test)

## Branches:

- main: Measurement files
- 4bpw: 4 bits per weight
- 5bpw: 5 bits per weight
- 6bpw: 6 bits per weight

## Notes

- 6bpw is recommended for the best quality to VRAM usage ratio (assuming you have enough VRAM).
- Please ask for more bpws in the community tab if necessary.

## Run in TabbyAPI

TabbyAPI is a pure exllamav2 FastAPI server developed by us. You can find TabbyAPI's source code here: [https://github.com/theroyallab/TabbyAPI](https://github.com/theroyallab/TabbyAPI)

If you don't have huggingface-cli, please run `pip install huggingface_hub`.

To run this model, follow these steps:

1. Make a directory inside your models folder called `psyonic-cetacean-20B-exl2`
2. Open a terminal inside your models folder
3. Run `huggingface-cli download royallab/psyonic-cetacean-20B-exl2 --revision 4bpw --local-dir psyonic-cetacean-20B-exl2 --local-dir-use-symlinks False`
   1. The `--revision` flag corresponds to the branch name on the model repo. Please select the appropriate bpw branch for your system.
4. Inside TabbyAPI's config.yml, set `model_name` to `psyonic-cetacean-20B-exl2`, or use the `/model/load` endpoint after launching.
5. Launch TabbyAPI inside your python env by running `python main.py`

## Donate?

All my infrastructure and cloud expenses are paid out of pocket. If you'd like to donate, you can do so here: https://ko-fi.com/kingbri

You should not feel obligated to donate, but if you do, I'd appreciate it.

---
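Once TabbyAPI is up, you can query it over HTTP. Below is a hedged sketch, not from this card: the default port 5000, the OpenAI-style `/v1/completions` route, and the `x-api-key` header are all assumptions about TabbyAPI's defaults; check the TabbyAPI docs for the exact routes and key setup.

```python
# Hedged sketch: port, route, and auth header are assumed TabbyAPI defaults.
import requests

resp = requests.post(
    "http://127.0.0.1:5000/v1/completions",
    headers={"x-api-key": "your-tabby-api-key"},  # placeholder key
    json={"prompt": "Once upon a time", "max_tokens": 128},
)
print(resp.json())
```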
null
null
null
null
null
null
null
null
null
null
null
null
royallab/psyonic-cetacean-20B-exl2
[ -0.38423317670822144, -0.6963672041893005, 0.25581130385398865, 0.477369487285614, -0.4797239899635315, 0.1103183925151825, -0.14509136974811554, -0.3883778154850006, 0.7016263008117676, 0.5938097238540649, -0.5506397485733032, -0.15867722034454346, -0.19023363292217255, -0.386005073785781...
waelorabi/dogbooth
waelorabi
2023-11-29T02:42:48Z
1
0
null
[ "diffusers", "tensorboard", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "dreambooth", "base_model:stabilityai/stable-diffusion-2-1", "license:creativeml-openrail-m", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
2023-11-29T02:42:48Z
2023-11-29T02:27:09.000Z
null
null
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-1
instance_prompt: a photo of [v]dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---

# DreamBooth - waelorabi/dogbooth

This is a dreambooth model derived from stabilityai/stable-diffusion-2-1. The weights were trained on a photo of [v]dog using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.

DreamBooth for the text encoder was enabled: False.
null
diffusers
text-to-image
null
null
null
null
null
null
null
null
null
waelorabi/dogbooth
[ -0.2564859986305237, -0.524945080280304, 0.4059332311153412, 0.06543367356061935, -0.3794952929019928, 0.2306944727897644, 0.3836199939250946, -0.39441680908203125, 0.6348341107368469, 0.37164029479026794, -0.4833771288394928, -0.427289754152298, -0.6268467903137207, -0.2090580314397812, ...
TejaMat/dogbooth
TejaMat
2023-11-29T02:48:58Z
1
0
null
[ "diffusers", "tensorboard", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "dreambooth", "base_model:stabilityai/stable-diffusion-2-1", "license:creativeml-openrail-m", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
2023-11-29T02:48:58Z
2023-11-29T02:33:20.000Z
null
null
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-1
instance_prompt: a photo of [v]dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---

# DreamBooth - TejaMat/dogbooth

This is a dreambooth model derived from stabilityai/stable-diffusion-2-1. The weights were trained on a photo of [v]dog using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.

DreamBooth for the text encoder was enabled: False.
null
diffusers
text-to-image
null
null
null
null
null
null
null
null
null
TejaMat/dogbooth
[ -0.19742842018604279, -0.5473896265029907, 0.49178826808929443, 0.02129005827009678, -0.4156859517097473, 0.3096151053905487, 0.255840927362442, -0.2765354514122009, 0.6501664519309998, 0.36262422800064087, -0.45623135566711426, -0.4507637321949005, -0.6013919115066528, -0.2527642548084259...
cuongtk2002/my_awesome_qa_model
cuongtk2002
2023-11-29T03:43:45Z
1
0
null
[ "transformers", "tensorboard", "safetensors", "distilbert", "question-answering", "generated_from_trainer", "dataset:squad", "base_model:distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-11-29T03:43:45Z
2023-11-29T02:59:32.000Z
null
null
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: my_awesome_qa_model
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# my_awesome_qa_model

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6024

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:---:|:---:|:---:|:---:|
| No log | 1.0 | 250 | 2.1527 |
| 2.6394 | 2.0 | 500 | 1.6314 |
| 2.6394 | 3.0 | 750 | 1.6024 |

### Framework versions

- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
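As a quick sanity check, a minimal extractive-QA sketch (not part of the original card); the question and context strings are placeholders.

```python
from transformers import pipeline

qa = pipeline("question-answering", model="cuongtk2002/my_awesome_qa_model")

# Placeholder question/context; the model extracts an answer span from context.
result = qa(
    question="What dataset was the model fine-tuned on?",
    context="This model is a DistilBERT checkpoint fine-tuned on the SQuAD dataset.",
)
print(result["answer"], result["score"])
```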
null
transformers
question-answering
null
null
null
null
null
null
null
null
null
cuongtk2002/my_awesome_qa_model
[ -0.40604671835899353, -0.6900531053543091, 0.24237295985221863, 0.2444390058517456, -0.3294779360294342, -0.07881376892328262, 0.13002708554267883, -0.1763329803943634, 0.059734828770160675, 0.27610841393470764, -0.9093727469444275, -0.6657506227493286, -0.6355180740356445, -0.081715814769...
notbdq/gpt2-turkish-alpaca
notbdq
2023-11-29T04:11:38Z
1
0
null
[ "transformers", "safetensors", "gpt2", "text-generation", "license:apache-2.0", "endpoints_compatible", "text-generation-inference", "region:us" ]
2023-11-29T04:11:38Z
2023-11-29T03:35:11.000Z
null
null
--- license: apache-2.0 ---
null
transformers
text-generation
null
null
null
null
null
null
null
null
null
notbdq/gpt2-turkish-alpaca
[ -0.12853394448757172, -0.1861671805381775, 0.6529130339622498, 0.49436283111572266, -0.1931932270526886, 0.23607474565505981, 0.3607197403907776, 0.05056331306695938, 0.5793652534484863, 0.7400139570236206, -0.6508102416992188, -0.23783963918685913, -0.7102248668670654, -0.0478258728981018...
meryemnar/layoutlmv3-test
meryemnar
2023-11-29T04:04:20Z
1
0
null
[ "transformers", "pytorch", "layoutlmv3", "text-classification", "endpoints_compatible", "region:us" ]
2023-11-29T04:04:20Z
2023-11-29T03:52:49.000Z
null
null
Entry not found
null
transformers
text-classification
null
null
null
null
null
null
null
null
null
meryemnar/layoutlmv3-test
[ -0.3227651119232178, -0.22568456828594208, 0.8622261881828308, 0.43461447954177856, -0.5282989740371704, 0.7012965083122253, 0.7915719747543335, 0.0761861652135849, 0.7746025323867798, 0.25632235407829285, -0.7852817177772522, -0.22573819756507874, -0.9104477763175964, 0.5715669393539429, ...
fanjiang98/STDPR-Beir
fanjiang98
2023-11-29T03:55:25Z
1
0
null
[ "transformers", "pytorch", "bert", "feature-extraction", "license:apache-2.0", "endpoints_compatible", "region:us" ]
2023-11-29T03:55:25Z
2023-11-29T03:54:05.000Z
null
null
--- license: apache-2.0 ---
null
transformers
feature-extraction
null
null
null
null
null
null
null
null
null
fanjiang98/STDPR-Beir
[ -0.12853312492370605, -0.18616832792758942, 0.6529129147529602, 0.494362473487854, -0.19319364428520203, 0.23607414960861206, 0.36071962118148804, 0.05056367814540863, 0.5793655514717102, 0.7400145530700684, -0.6508100032806396, -0.237839937210083, -0.7102250456809998, -0.0478254035115242,...
niksss/xlm-roberta-large-finetuned-ebay
niksss
2023-11-29T03:57:05Z
1
0
null
[ "transformers", "tensorboard", "safetensors", "xlm-roberta", "fill-mask", "generated_from_trainer", "base_model:xlm-roberta-large", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-11-29T03:57:05Z
2023-11-29T03:55:57.000Z
null
null
---
license: mit
base_model: xlm-roberta-large
tags:
- generated_from_trainer
model-index:
- name: xlm-roberta-large-finetuned-ebay
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# xlm-roberta-large-finetuned-ebay

This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on the None dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 2

### Framework versions

- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
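A minimal fill-mask sketch (not in the card). XLM-RoBERTa uses `<mask>` as its mask token; the listing-style example sentence is only a guess at the eBay domain this checkpoint was tuned on.

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="niksss/xlm-roberta-large-finetuned-ebay")

# "<mask>" is XLM-RoBERTa's mask token; the product-listing phrasing is an
# assumption about the fine-tuning domain, not documented in the card.
for pred in fill("Brand new Apple <mask> Pro, 13-inch, factory sealed."):
    print(pred["token_str"], round(pred["score"], 3))
```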
null
transformers
fill-mask
null
null
null
null
null
null
null
null
null
niksss/xlm-roberta-large-finetuned-ebay
[ -0.48935604095458984, -0.7666966915130615, 0.3223075270652771, 0.042051080614328384, -0.42847397923469543, -0.42976489663124084, -0.2816063165664673, -0.3544222116470337, 0.20488834381103516, 0.6083173751831055, -0.7800277471542358, -0.5220984816551208, -0.6898741126060486, 0.1156541109085...
TheBloke/Venus-120b-v1.0-GPTQ
TheBloke
2023-11-29T12:58:00Z
1
1
null
[ "transformers", "safetensors", "llama", "text-generation", "not-for-all-audiences", "en", "base_model:nsfwthrowitaway69/Venus-120b-v1.0", "license:llama2", "autotrain_compatible", "text-generation-inference", "4-bit", "region:us" ]
2023-11-29T12:58:00Z
2023-11-29T04:11:37.000Z
null
null
--- base_model: nsfwthrowitaway69/Venus-120b-v1.0 inference: false language: - en license: llama2 model_creator: John Smith model_name: Venus 120B V1.0 model_type: llama prompt_template: 'Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ' quantized_by: TheBloke tags: - not-for-all-audiences --- <!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Venus 120B V1.0 - GPTQ - Model creator: [John Smith](https://huggingface.co/nsfwthrowitaway69) - Original model: [Venus 120B V1.0](https://huggingface.co/nsfwthrowitaway69/Venus-120b-v1.0) <!-- description start --> # Description This repo contains GPTQ model files for [John Smith's Venus 120B V1.0](https://huggingface.co/nsfwthrowitaway69/Venus-120b-v1.0). Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them. These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/). <!-- description end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Venus-120b-v1.0-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Venus-120b-v1.0-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Venus-120b-v1.0-GGUF) * [John Smith's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/nsfwthrowitaway69/Venus-120b-v1.0) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Alpaca ``` Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ``` <!-- prompt-template end --> <!-- README_GPTQ.md-compatible clients start --> ## Known compatible clients / servers These GPTQ models are known to work in the following inference servers/webuis. 
- [text-generation-webui](https://github.com/oobabooga/text-generation-webui) - [KoboldAI United](https://github.com/henk717/koboldai) - [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui) - [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) This may not be a complete list; if you know of others, please let me know! <!-- README_GPTQ.md-compatible clients end --> <!-- README_GPTQ.md-provided-files start --> ## Provided files, and GPTQ parameters Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements. Each separate quant is in a different branch. See below for instructions on fetching from different branches. Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers. <details> <summary>Explanation of GPTQ parameters</summary> - Bits: The bit size of the quantised model. - GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value. - Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now. - Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy. - GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s). - Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences. - ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama and Mistral models in 4-bit. </details> | Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc | | ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- | | [main](https://huggingface.co/TheBloke/Venus-120b-v1.0-GPTQ/tree/main) | 4 | None | Yes | 0.1 | [OpenErotica Erotiquant](https://huggingface.co/datasets/openerotica/erotiquant/viewer/) | 4096 | 61.04 GB | Yes | 4-bit, with Act Order. No group size, to lower VRAM requirements. | | [gptq-4bit-128g-actorder_True](https://huggingface.co/TheBloke/Venus-120b-v1.0-GPTQ/tree/gptq-4bit-128g-actorder_True) | 4 | 128 | Yes | 0.1 | [OpenErotica Erotiquant](https://huggingface.co/datasets/openerotica/erotiquant/viewer/) | 4096 | 63.36 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. | | [gptq-3bit--1g-actorder_True](https://huggingface.co/TheBloke/Venus-120b-v1.0-GPTQ/tree/gptq-3bit--1g-actorder_True) | 3 | None | Yes | 0.1 | [OpenErotica Erotiquant](https://huggingface.co/datasets/openerotica/erotiquant/viewer/) | 4096 | 46.07 GB | No | 3-bit, with Act Order and no group size. Lowest possible VRAM requirements. May be lower quality than 3-bit 128g. 
| | [gptq-3bit-128g-actorder_True](https://huggingface.co/TheBloke/Venus-120b-v1.0-GPTQ/tree/gptq-3bit-128g-actorder_True) | 3 | 128 | Yes | 0.1 | [OpenErotica Erotiquant](https://huggingface.co/datasets/openerotica/erotiquant/viewer/) | 4096 | 48.26 GB | No | 3-bit, with group size 128g and act-order. Higher quality than 128g-False. | <!-- README_GPTQ.md-provided-files end --> <!-- README_GPTQ.md-download-from-branches start --> ## How to download, including from branches ### In text-generation-webui To download from the `main` branch, enter `TheBloke/Venus-120b-v1.0-GPTQ` in the "Download model" box. To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/Venus-120b-v1.0-GPTQ:gptq-4bit-128g-actorder_True` ### From the command line I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` To download the `main` branch to a folder called `Venus-120b-v1.0-GPTQ`: ```shell mkdir Venus-120b-v1.0-GPTQ huggingface-cli download TheBloke/Venus-120b-v1.0-GPTQ --local-dir Venus-120b-v1.0-GPTQ --local-dir-use-symlinks False ``` To download from a different branch, add the `--revision` parameter: ```shell mkdir Venus-120b-v1.0-GPTQ huggingface-cli download TheBloke/Venus-120b-v1.0-GPTQ --revision gptq-4bit-128g-actorder_True --local-dir Venus-120b-v1.0-GPTQ --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Hugging Face cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a download model. The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`. For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell mkdir Venus-120b-v1.0-GPTQ HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Venus-120b-v1.0-GPTQ --local-dir Venus-120b-v1.0-GPTQ --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> ### With `git` (**not** recommended) To clone a specific branch with `git`, use a command like this: ```shell git clone --single-branch --branch gptq-4bit-128g-actorder_True https://huggingface.co/TheBloke/Venus-120b-v1.0-GPTQ ``` Note that using Git with HF repos is strongly discouraged. 
It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.) <!-- README_GPTQ.md-download-from-branches end --> <!-- README_GPTQ.md-text-generation-webui start --> ## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui) Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui). It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install. 1. Click the **Model tab**. 2. Under **Download custom model or LoRA**, enter `TheBloke/Venus-120b-v1.0-GPTQ`. - To download from a specific branch, enter for example `TheBloke/Venus-120b-v1.0-GPTQ:gptq-4bit-128g-actorder_True` - see Provided Files above for the list of branches for each option. 3. Click **Download**. 4. The model will start downloading. Once it's finished it will say "Done". 5. In the top left, click the refresh icon next to **Model**. 6. In the **Model** dropdown, choose the model you just downloaded: `Venus-120b-v1.0-GPTQ` 7. The model will automatically load, and is now ready for use! 8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right. - Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`. 9. Once you're ready, click the **Text Generation** tab and enter a prompt to get started! <!-- README_GPTQ.md-text-generation-webui end --> <!-- README_GPTQ.md-use-from-tgi start --> ## Serving this model from Text Generation Inference (TGI) It's recommended to use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0` Example Docker parameters: ```shell --model-id TheBloke/Venus-120b-v1.0-GPTQ --port 3000 --quantize gptq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096 ``` Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later): ```shell pip3 install huggingface-hub ``` ```python from huggingface_hub import InferenceClient endpoint_url = "https://your-endpoint-url-here" prompt = "Tell me about AI" prompt_template=f'''Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ''' client = InferenceClient(endpoint_url) response = client.text_generation(prompt, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.95, top_k=40, repetition_penalty=1.1) print(f"Model output: {response}") ``` <!-- README_GPTQ.md-use-from-tgi end --> <!-- README_GPTQ.md-use-from-python start --> ## Python code example: inference from this GPTQ model ### Install the necessary packages Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later. ```shell pip3 install --upgrade transformers optimum # If using PyTorch 2.1 + CUDA 12.x: pip3 install --upgrade auto-gptq # or, if using PyTorch 2.1 + CUDA 11.x: pip3 install --upgrade auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/ ``` If you are using PyTorch 2.0, you will need to install AutoGPTQ from source. 
Likewise if you have problems with the pre-built wheels, you should try building from source: ```shell pip3 uninstall -y auto-gptq git clone https://github.com/PanQiWei/AutoGPTQ cd AutoGPTQ git checkout v0.5.1 pip3 install . ``` ### Example Python code ```python from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline model_name_or_path = "TheBloke/Venus-120b-v1.0-GPTQ" # To use a different branch, change revision # For example: revision="gptq-4bit-128g-actorder_True" model = AutoModelForCausalLM.from_pretrained(model_name_or_path, device_map="auto", trust_remote_code=False, revision="main") tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True) prompt = "Tell me about AI" prompt_template=f'''Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ''' print("\n\n*** Generate:") input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda() output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512) print(tokenizer.decode(output[0])) # Inference can also be done using transformers' pipeline print("*** Pipeline:") pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=512, do_sample=True, temperature=0.7, top_p=0.95, top_k=40, repetition_penalty=1.1 ) print(pipe(prompt_template)[0]['generated_text']) ``` <!-- README_GPTQ.md-use-from-python end --> <!-- README_GPTQ.md-compatibility start --> ## Compatibility The files provided are tested to work with Transformers. For non-Mistral models, AutoGPTQ can also be used directly. [ExLlama](https://github.com/turboderp/exllama) is compatible with Llama and Mistral models in 4-bit. Please see the Provided Files table above for per-file compatibility. For a list of clients/servers, please see "Known compatible clients / servers", above. <!-- README_GPTQ.md-compatibility end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. 
**Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius

Thank you to all my generous patrons and donaters!

And thank you again to a16z for their generous grant.

<!-- footer end -->

# Original model card: John Smith's Venus 120B V1.0

# Venus 120b - version 1.0

![image/png](https://cdn-uploads.huggingface.co/production/uploads/655febd724e0d359c1f21096/BSKlxWQSbh-liU8kGz4fF.png)

## Overview

The goal was to create a large model that's highly capable for RP/ERP scenarios. Goliath-120b is excellent for roleplay, and Venus-120b was created with the idea of attempting to mix more than two models together to see how well this method works.

## Model Details

- A result of interleaving layers of [Sao10K/Euryale-1.3-L2-70B](https://huggingface.co/Sao10K/Euryale-1.3-L2-70B), [NousResearch/Nous-Hermes-Llama2-70b](https://huggingface.co/NousResearch/Nous-Hermes-Llama2-70b), and [migtissera/SynthIA-70B-v1.5](https://huggingface.co/migtissera/SynthIA-70B-v1.5) using [mergekit](https://github.com/cg123/mergekit).
- The resulting model has 140 layers and approximately 122 billion parameters.
- See mergekit-config.yml for details on the merge method used.
- See the `exl2-*` branches for exllama2 quantizations. The 4.85 bpw quant should fit in 80GB VRAM, and the 3.0 bpw quant should (just barely) fit in 48GB VRAM with 4k context.
- Inspired by [Goliath-120b](https://huggingface.co/alpindale/goliath-120b)

**Warning: This model will produce NSFW content!**

## Results

Initial tests show that Venus-120b functions fine; overall it seems to be comparable to Goliath-120b. Some differences I noticed:

1. Venus needs lower temperature settings than Goliath. I recommend a temp of around 0.7, and no higher than 1.0.
2. Venus tends to, on average, produce longer responses than Goliath. Probably due to the inclusion of SynthIA in the merge, which is trained to produce long chain-of-thought responses.
3. Venus seems to be a bit less creative than Goliath when it comes to the prose it generates. Probably due to the lack of Xwin and the inclusion of Nous-Hermes.
Keep in mind this is all anecdotal from some basic tests. The key takeaway is that Venus shows that Goliath is not a fluke.

## Other quants:
- 4.5 bpw exl2 quant provided by Panchovix: https://huggingface.co/Panchovix/Venus-120b-v1.0-4.5bpw-h6-exl2
- 4.25 bpw exl2 quant provided by Panchovix: https://huggingface.co/Panchovix/Venus-120b-v1.0-4.25bpw-h6-exl2
null
transformers
text-generation
null
null
null
null
null
null
null
null
null
TheBloke/Venus-120b-v1.0-GPTQ
[ -0.5088538527488708, -0.8140855431556702, 0.3260215222835541, 0.03585069626569748, -0.29534491896629333, -0.3683370351791382, 0.04302451014518738, -0.2968025207519531, 0.2128635197877884, 0.4492515027523041, -0.7358071208000183, -0.5742902755737305, -0.3749147951602936, 0.10594984143972397...
TheBloke/Venus-120b-v1.0-GGUF
TheBloke
2023-11-29T05:24:35Z
1
2
null
[ "transformers", "llama", "not-for-all-audiences", "en", "base_model:nsfwthrowitaway69/Venus-120b-v1.0", "license:llama2", "text-generation-inference", "region:us" ]
2023-11-29T05:24:35Z
2023-11-29T04:11:37.000Z
null
null
---
base_model: nsfwthrowitaway69/Venus-120b-v1.0
inference: false
language:
- en
license: llama2
model_creator: John Smith
model_name: Venus 120B V1.0
model_type: llama
prompt_template: 'Below is an instruction that describes a task. Write a response
  that appropriately completes the request.


  ### Instruction:

  {prompt}


  ### Response:

  '
quantized_by: TheBloke
tags:
- not-for-all-audiences
---

<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
    <div style="display: flex; flex-direction: column; align-items: flex-start;">
        <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
    </div>
    <div style="display: flex; flex-direction: column; align-items: flex-end;">
        <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
    </div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->

# Venus 120B V1.0 - GGUF
- Model creator: [John Smith](https://huggingface.co/nsfwthrowitaway69)
- Original model: [Venus 120B V1.0](https://huggingface.co/nsfwthrowitaway69/Venus-120b-v1.0)

<!-- description start -->
## Description

This repo contains GGUF format model files for [John Smith's Venus 120B V1.0](https://huggingface.co/nsfwthrowitaway69/Venus-120b-v1.0).

These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).

<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF

GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.

Here is an incomplete list of clients and libraries that are known to support GGUF:

* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and an OpenAI-compatible API server. Note that, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.

<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available

* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Venus-120b-v1.0-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Venus-120b-v1.0-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Venus-120b-v1.0-GGUF)
* [John Smith's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/nsfwthrowitaway69/Venus-120b-v1.0)
<!-- repositories-available end -->

<!-- prompt-template start -->
## Prompt template: Alpaca

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
```

<!-- prompt-template end -->

<!-- compatibility_gguf start -->
## Compatibility

These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)

They are also compatible with many third party UIs and libraries - please see the list at the top of this README.

## Explanation of quantisation methods

<details>
  <summary>Click to see details</summary>

The new methods available are:

* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw

Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->

<!-- README_GGUF.md-provided-files start -->
## Provided files

| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| venus-120b-v1.0.Q2_K.gguf | Q2_K | 2 | 50.71 GB | 53.21 GB | smallest, significant quality loss - not recommended for most purposes |
| venus-120b-v1.0.Q3_K_S.gguf | Q3_K_S | 3 | 51.81 GB | 54.31 GB | very small, high quality loss |
| venus-120b-v1.0.Q3_K_M.gguf | Q3_K_M | 3 | 57.64 GB | 60.14 GB | very small, high quality loss |
| venus-120b-v1.0.Q3_K_L.gguf | Q3_K_L | 3 | 63.01 GB | 65.51 GB | small, substantial quality loss |
| venus-120b-v1.0.Q4_0.gguf | Q4_0 | 4 | 67.75 GB | 70.25 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| venus-120b-v1.0.Q4_K_S.gguf | Q4_K_S | 4 | 67.88 GB | 70.38 GB | small, greater quality loss |
| venus-120b-v1.0.Q4_K_M.gguf | Q4_K_M | 4 | 72.14 GB | 74.64 GB | medium, balanced quality - recommended |
| venus-120b-v1.0.Q5_0.gguf | Q5_0 | 5 | 82.76 GB | 85.26 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| venus-120b-v1.0.Q5_K_S.gguf | Q5_K_S | 5 | 82.76 GB | 85.26 GB | large, low quality loss - recommended |
| venus-120b-v1.0.Q5_K_M.gguf | Q5_K_M | 5 | 85.02 GB | 87.52 GB | large, very low quality loss - recommended |
| venus-120b-v1.0.Q6_K.gguf | Q6_K | 6 | 98.70 GB | 101.20 GB | very large, extremely low quality loss |
| venus-120b-v1.0.Q8_0.gguf | Q8_0 | 8 | 127.84 GB | 130.34 GB | very large, extremely low quality loss - not recommended |

**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.

### Q6_K and Q8_0 files are split and require joining

**Note:** HF does not support uploading files larger than 50GB. Therefore I have uploaded the Q6_K and Q8_0 files as split files.

<details>
  <summary>Click for instructions regarding Q6_K and Q8_0 files</summary>

### q6_K
Please download:
* `venus-120b-v1.0.Q6_K.gguf-split-a`
* `venus-120b-v1.0.Q6_K.gguf-split-b`

### q8_0
Please download:
* `venus-120b-v1.0.Q8_0.gguf-split-a`
* `venus-120b-v1.0.Q8_0.gguf-split-b`

To join the files, do the following (a cross-platform Python alternative is sketched at the end of this section):

Linux and macOS:
```
cat venus-120b-v1.0.Q6_K.gguf-split-* > venus-120b-v1.0.Q6_K.gguf && rm venus-120b-v1.0.Q6_K.gguf-split-*
cat venus-120b-v1.0.Q8_0.gguf-split-* > venus-120b-v1.0.Q8_0.gguf && rm venus-120b-v1.0.Q8_0.gguf-split-*
```
Windows command line:
```
COPY /B venus-120b-v1.0.Q6_K.gguf-split-a + venus-120b-v1.0.Q6_K.gguf-split-b venus-120b-v1.0.Q6_K.gguf
del venus-120b-v1.0.Q6_K.gguf-split-a venus-120b-v1.0.Q6_K.gguf-split-b

COPY /B venus-120b-v1.0.Q8_0.gguf-split-a + venus-120b-v1.0.Q8_0.gguf-split-b venus-120b-v1.0.Q8_0.gguf
del venus-120b-v1.0.Q8_0.gguf-split-a venus-120b-v1.0.Q8_0.gguf-split-b
```

</details>
<!-- README_GGUF.md-provided-files end -->

<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files

**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.

The following clients/libraries will automatically download models for you, providing a list of available models to choose from:

* LM Studio
* LoLLMS Web UI
* Faraday.dev

### In `text-generation-webui`

Under Download Model, you can enter the model repo: TheBloke/Venus-120b-v1.0-GGUF and below it, a specific filename to download, such as: venus-120b-v1.0.Q4_K_M.gguf.

Then click Download.
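As a cross-platform alternative to the `cat` / `COPY /B` join commands shown above, the same byte-level concatenation can be done from Python. This is a minimal sketch, assuming the split parts are in the current directory and named `<file>.gguf-split-a` / `<file>.gguf-split-b` as listed above:

```python
import shutil
from pathlib import Path

def join_gguf(target_name: str) -> None:
    """Concatenate `<target_name>-split-*` parts, in sorted order, into `<target_name>`."""
    parts = sorted(Path(".").glob(f"{target_name}-split-*"))  # -split-a sorts before -split-b
    with open(target_name, "wb") as out:
        for part in parts:
            with open(part, "rb") as src:
                shutil.copyfileobj(src, out)  # streams in chunks, so ~50GB parts never sit in RAM

join_gguf("venus-120b-v1.0.Q6_K.gguf")
join_gguf("venus-120b-v1.0.Q8_0.gguf")
```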
### On the command line, including multiple files at once

I recommend using the `huggingface-hub` Python library:

```shell
pip3 install huggingface-hub
```

Then you can download any individual model file to the current directory, at high speed, with a command like this:

```shell
huggingface-cli download TheBloke/Venus-120b-v1.0-GGUF venus-120b-v1.0.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```

<details>
  <summary>More advanced huggingface-cli download usage (click to read)</summary>

You can also download multiple files at once with a pattern:

```shell
huggingface-cli download TheBloke/Venus-120b-v1.0-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```

For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).

To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:

```shell
pip3 install hf_transfer
```

And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:

```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Venus-120b-v1.0-GGUF venus-120b-v1.0.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```

Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.

</details>
<!-- README_GGUF.md-how-to-download end -->

<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command

Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.

```shell
./main -ngl 35 -m venus-120b-v1.0.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:"
```

Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.

Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.

If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`

For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)

## How to run in `text-generation-webui`

Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).

## How to run from Python code

You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.

### How to load this model in Python code, using llama-cpp-python

For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package

Run one of the following commands, according to your system:

```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python

# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```

#### Simple llama-cpp-python example code

```python
from llama_cpp import Llama

# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
  model_path="./venus-120b-v1.0.Q4_K_M.gguf",  # Download the model file first
  n_ctx=4096,  # The max sequence length to use - note that longer sequence lengths require much more resources
  n_threads=8,  # The number of CPU threads to use, tailor to your system and the resulting performance
  n_gpu_layers=35  # The number of layers to offload to GPU, if you have GPU acceleration available
)

# Simple inference example
output = llm(
  "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:",  # Prompt
  max_tokens=512,  # Generate up to 512 tokens
  stop=["</s>"],  # Example stop token - not necessarily correct for this specific model! Please check before using.
  echo=True  # Whether to echo the prompt
)

# Chat Completion API

llm = Llama(model_path="./venus-120b-v1.0.Q4_K_M.gguf", chat_format="llama-2")  # Set chat_format according to the model you are using
llm.create_chat_completion(
    messages = [
        {"role": "system", "content": "You are a story writing assistant."},
        {
            "role": "user",
            "content": "Write a story about llamas."
        }
    ]
)
```

## How to use with LangChain

Here are guides on using llama-cpp-python and ctransformers with LangChain:

* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)

<!-- README_GGUF.md-how-to-run end -->

<!-- footer start -->
<!-- 200823 -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius

Thank you to all my generous patrons and donaters!

And thank you again to a16z for their generous grant.

<!-- footer end -->

<!-- original-model-card start -->
# Original model card: John Smith's Venus 120B V1.0

# Venus 120b - version 1.0

![image/png](https://cdn-uploads.huggingface.co/production/uploads/655febd724e0d359c1f21096/BSKlxWQSbh-liU8kGz4fF.png)

## Overview

The goal was to create a large model that's highly capable for RP/ERP scenarios. Goliath-120b is excellent for roleplay, and Venus-120b was created with the idea of attempting to mix more than two models together to see how well this method works.

## Model Details

- A result of interleaving layers of [Sao10K/Euryale-1.3-L2-70B](https://huggingface.co/Sao10K/Euryale-1.3-L2-70B), [NousResearch/Nous-Hermes-Llama2-70b](https://huggingface.co/NousResearch/Nous-Hermes-Llama2-70b), and [migtissera/SynthIA-70B-v1.5](https://huggingface.co/migtissera/SynthIA-70B-v1.5) using [mergekit](https://github.com/cg123/mergekit).
- The resulting model has 140 layers and approximately 122 billion parameters.
- See mergekit-config.yml for details on the merge method used.
- See the `exl2-*` branches for exllama2 quantizations. The 4.85 bpw quant should fit in 80GB VRAM, and the 3.0 bpw quant should (just barely) fit in 48GB VRAM with 4k context.
- Inspired by [Goliath-120b](https://huggingface.co/alpindale/goliath-120b)

**Warning: This model will produce NSFW content!**

## Results

Initial tests show that Venus-120b functions fine; overall it seems to be comparable to Goliath-120b. Some differences I noticed:

1. Venus needs lower temperature settings than Goliath. I recommend a temp of around 0.7, and no higher than 1.0.
2. Venus tends to, on average, produce longer responses than Goliath. Probably due to the inclusion of SynthIA in the merge, which is trained to produce long chain-of-thought responses.
3. Venus seems to be a bit less creative than Goliath when it comes to the prose it generates. Probably due to the lack of Xwin and the inclusion of Nous-Hermes.

Keep in mind this is all anecdotal from some basic tests. The key takeaway is that Venus shows that Goliath is not a fluke.

## Other quants:
- 4.5 bpw exl2 quant provided by Panchovix: https://huggingface.co/Panchovix/Venus-120b-v1.0-4.5bpw-h6-exl2
- 4.25 bpw exl2 quant provided by Panchovix: https://huggingface.co/Panchovix/Venus-120b-v1.0-4.25bpw-h6-exl2

<!-- original-model-card end -->
null
transformers
null
null
null
null
null
null
null
null
null
null
TheBloke/Venus-120b-v1.0-GGUF
[ -0.6495473980903625, -0.8500155210494995, 0.5379812121391296, 0.08425478637218475, -0.3888649046421051, -0.1460096836090088, 0.05670083314180374, -0.4257549047470093, 0.2658465802669525, 0.3253166675567627, -0.7506352663040161, -0.5483348965644836, -0.39367106556892395, -0.0195103660225868...
SG1123/my_awesome_qa_model_v2
SG1123
2023-11-29T19:11:20Z
1
0
null
[ "transformers", "tensorboard", "safetensors", "deberta", "question-answering", "endpoints_compatible", "region:us" ]
2023-11-29T19:11:20Z
2023-11-29T04:28:38.000Z
null
null
Entry not found
null
transformers
question-answering
null
null
null
null
null
null
null
null
null
SG1123/my_awesome_qa_model_v2
[ -0.3227650821208954, -0.22568479180335999, 0.8622263669967651, 0.4346153140068054, -0.5282987952232361, 0.7012966871261597, 0.7915722727775574, 0.07618651539087296, 0.7746027112007141, 0.2563222348690033, -0.7852821350097656, -0.225738525390625, -0.910447895526886, 0.5715667009353638, -0...
genaitraining/llama-2-7b-domain-tuned
genaitraining
2023-11-29T05:12:22Z
1
0
null
[ "transformers", "safetensors", "llama", "text-generation", "endpoints_compatible", "text-generation-inference", "region:us" ]
2023-11-29T05:12:22Z
2023-11-29T05:01:00.000Z
null
null
Entry not found
null
transformers
text-generation
null
null
null
null
null
null
null
null
null
genaitraining/llama-2-7b-domain-tuned
[ -0.3227650821208954, -0.22568479180335999, 0.8622263669967651, 0.4346153140068054, -0.5282987952232361, 0.7012966871261597, 0.7915722727775574, 0.07618651539087296, 0.7746027112007141, 0.2563222348690033, -0.7852821350097656, -0.225738525390625, -0.910447895526886, 0.5715667009353638, -0...
aldogeova/isa-vit_model
aldogeova
2023-11-29T05:24:07Z
1
0
null
[ "transformers", "safetensors", "vit", "image-classification", "generated_from_trainer", "dataset:beans", "base_model:google/vit-base-patch16-224", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-11-29T05:24:07Z
2023-11-29T05:06:13.000Z
null
null
---
license: apache-2.0
base_model: google/vit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- beans
metrics:
- accuracy
model-index:
- name: isa-vit_model
  results:
  - task:
      name: Image Classification
      type: image-classification
    dataset:
      name: beans
      type: beans
      config: default
      split: validation
      args: default
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.9849624060150376
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# isa-vit_model

This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0370
- Accuracy: 0.9850

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0947 | 3.85 | 500 | 0.0370 | 0.9850 |

### Framework versions

- Transformers 4.35.2
- Pytorch 2.1.1
- Datasets 2.15.0
- Tokenizers 0.15.0
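The card does not show inference code. As a minimal usage sketch (assuming the standard `transformers` image-classification pipeline; the image path is hypothetical):

```python
from transformers import pipeline

# Minimal sketch: classify a bean-leaf photo with the fine-tuned ViT checkpoint.
classifier = pipeline("image-classification", model="aldogeova/isa-vit_model")
print(classifier("bean_leaf.jpg"))  # "bean_leaf.jpg" is a placeholder local image; output is label/score pairs
```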
null
transformers
image-classification
null
null
null
null
null
null
null
null
null
aldogeova/isa-vit_model
[ -0.3026338815689087, -0.8182910680770874, 0.28864243626594543, 0.3278363049030304, -0.3186468183994293, -0.48637592792510986, -0.13157182931900024, -0.22192087769508362, 0.16984806954860687, 0.28142979741096497, -0.5715470314025879, -0.5498648285865784, -0.7383315563201904, -0.192410841584...
alforhad/my_awesome_qa_model2
alforhad
2023-11-29T06:41:42Z
1
0
null
[ "transformers", "pytorch", "distilbert", "question-answering", "generated_from_trainer", "base_model:distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-11-29T06:41:42Z
2023-11-29T06:01:08.000Z
null
null
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: my_awesome_qa_model2
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# my_awesome_qa_model2

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.5437

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 7 | 5.3355 |
| No log | 2.0 | 14 | 4.6286 |
| No log | 3.0 | 21 | 4.0252 |
| No log | 4.0 | 28 | 3.5328 |
| No log | 5.0 | 35 | 3.0151 |
| No log | 6.0 | 42 | 2.5417 |
| No log | 7.0 | 49 | 2.2163 |
| No log | 8.0 | 56 | 2.3292 |
| No log | 9.0 | 63 | 2.0334 |
| No log | 10.0 | 70 | 2.4656 |
| No log | 11.0 | 77 | 2.2878 |
| No log | 12.0 | 84 | 2.5331 |
| No log | 13.0 | 91 | 2.4556 |
| No log | 14.0 | 98 | 2.4335 |
| No log | 15.0 | 105 | 2.7518 |
| No log | 16.0 | 112 | 2.7868 |
| No log | 17.0 | 119 | 2.8532 |
| No log | 18.0 | 126 | 2.8263 |
| No log | 19.0 | 133 | 3.2188 |
| No log | 20.0 | 140 | 3.4776 |
| No log | 21.0 | 147 | 3.5201 |
| No log | 22.0 | 154 | 3.5160 |
| No log | 23.0 | 161 | 3.4940 |
| No log | 24.0 | 168 | 3.4051 |
| No log | 25.0 | 175 | 3.3228 |
| No log | 26.0 | 182 | 3.5574 |
| No log | 27.0 | 189 | 3.7202 |
| No log | 28.0 | 196 | 3.7427 |
| No log | 29.0 | 203 | 3.7091 |
| No log | 30.0 | 210 | 3.5674 |
| No log | 31.0 | 217 | 3.6129 |
| No log | 32.0 | 224 | 3.6684 |
| No log | 33.0 | 231 | 3.6587 |
| No log | 34.0 | 238 | 3.6272 |
| No log | 35.0 | 245 | 3.5957 |
| No log | 36.0 | 252 | 3.5069 |
| No log | 37.0 | 259 | 3.4276 |
| No log | 38.0 | 266 | 3.4260 |
| No log | 39.0 | 273 | 3.3903 |
| No log | 40.0 | 280 | 3.5343 |
| No log | 41.0 | 287 | 3.5672 |
| No log | 42.0 | 294 | 3.6219 |
| No log | 43.0 | 301 | 3.6212 |
| No log | 44.0 | 308 | 3.5741 |
| No log | 45.0 | 315 | 3.5646 |
| No log | 46.0 | 322 | 3.5547 |
| No log | 47.0 | 329 | 3.5371 |
| No log | 48.0 | 336 | 3.5322 |
| No log | 49.0 | 343 | 3.5384 |
| No log | 50.0 | 350 | 3.5437 |

### Framework versions

- Transformers 4.32.1
- Pytorch 2.1.1+cpu
- Datasets 2.12.0
- Tokenizers 0.13.2
null
transformers
question-answering
null
null
null
null
null
null
null
null
null
alforhad/my_awesome_qa_model2
[ -0.6273586750030518, -0.5911117196083069, 0.17126910388469696, 0.18151018023490906, -0.10512057691812515, -0.0035629775375127792, 0.19531139731407166, 0.006184350699186325, 0.6082630157470703, 0.28086912631988525, -0.6929498314857483, -0.7754954099655151, -0.7727253437042236, -0.3086912035...
YanweiLi/llama-vid-7b-full-224-video-fps-1
YanweiLi
2023-11-29T06:11:27Z
1
1
null
[ "transformers", "pytorch", "llava", "text-generation", "endpoints_compatible", "region:us" ]
2023-11-29T06:11:27Z
2023-11-29T06:06:23.000Z
null
null
Entry not found
null
transformers
text-generation
null
null
null
null
null
null
null
null
null
YanweiLi/llama-vid-7b-full-224-video-fps-1
[ -0.3227648437023163, -0.22568459808826447, 0.8622260093688965, 0.434614896774292, -0.5282989144325256, 0.7012966275215149, 0.7915716171264648, 0.07618634402751923, 0.7746022343635559, 0.25632208585739136, -0.7852813005447388, -0.22573812305927277, -0.9104481935501099, 0.5715669393539429, ...
meryemnar/layoutlmv3-full-data
meryemnar
2023-11-29T06:11:46Z
1
0
null
[ "transformers", "pytorch", "layoutlmv3", "text-classification", "endpoints_compatible", "region:us" ]
2023-11-29T06:11:46Z
2023-11-29T06:09:09.000Z
null
null
Entry not found
null
transformers
text-classification
null
null
null
null
null
null
null
null
null
meryemnar/layoutlmv3-full-data
[ -0.3227648437023163, -0.22568459808826447, 0.8622260093688965, 0.434614896774292, -0.5282989144325256, 0.7012966275215149, 0.7915716171264648, 0.07618634402751923, 0.7746022343635559, 0.25632208585739136, -0.7852813005447388, -0.22573812305927277, -0.9104481935501099, 0.5715669393539429, ...
royzhong/ASVS-13B
royzhong
2023-11-29T06:24:00Z
1
0
null
[ "transformers", "pytorch", "llama", "text-generation", "endpoints_compatible", "text-generation-inference", "region:us" ]
2023-11-29T06:24:00Z
2023-11-29T06:11:43.000Z
null
null
Entry not found
null
transformers
text-generation
null
null
null
null
null
null
null
null
null
royzhong/ASVS-13B
[ -0.3227648437023163, -0.22568459808826447, 0.8622260093688965, 0.434614896774292, -0.5282989144325256, 0.7012966275215149, 0.7915716171264648, 0.07618634402751923, 0.7746022343635559, 0.25632208585739136, -0.7852813005447388, -0.22573812305927277, -0.9104481935501099, 0.5715669393539429, ...
xiaochongw0/whisper-small-finetune
xiaochongw0
2023-11-29T07:01:59Z
1
0
null
[ "transformers", "safetensors", "whisper", "automatic-speech-recognition", "license:unknown", "endpoints_compatible", "region:us" ]
2023-11-29T07:01:59Z
2023-11-29T06:58:07.000Z
null
null
---
license: unknown
---
null
transformers
automatic-speech-recognition
null
null
null
null
null
null
null
null
null
xiaochongw0/whisper-small-finetune
[ -0.1285337656736374, -0.18616777658462524, 0.6529129147529602, 0.4943626821041107, -0.19319315254688263, 0.23607446253299713, 0.3607197403907776, 0.05056322365999222, 0.5793652534484863, 0.740013837814331, -0.6508102416992188, -0.23783965408802032, -0.7102248668670654, -0.04782604798674583...
Jarnails1559/MYTEST_MODEL2
Jarnails1559
2023-11-29T07:17:26Z
1
0
null
[ "transformers", "safetensors", "gpt2", "text-generation", "endpoints_compatible", "text-generation-inference", "region:us" ]
2023-11-29T07:17:26Z
2023-11-29T07:16:44.000Z
null
null
Entry not found
null
transformers
text-generation
null
null
null
null
null
null
null
null
null
Jarnails1559/MYTEST_MODEL2
[ -0.3227651119232178, -0.22568456828594208, 0.8622261881828308, 0.43461447954177856, -0.5282989740371704, 0.7012965083122253, 0.7915719747543335, 0.0761861652135849, 0.7746025323867798, 0.25632235407829285, -0.7852817177772522, -0.22573819756507874, -0.9104477763175964, 0.5715669393539429, ...
michaelsungboklee/bert-finetuned-ner
michaelsungboklee
2023-11-29T08:13:35Z
1
0
null
[ "transformers", "pytorch", "bert", "token-classification", "generated_from_trainer", "dataset:conll2003", "base_model:bert-base-cased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-11-29T08:13:35Z
2023-11-29T07:24:35.000Z
null
null
---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
  results:
  - task:
      name: Token Classification
      type: token-classification
    dataset:
      name: conll2003
      type: conll2003
      config: conll2003
      split: validation
      args: conll2003
    metrics:
    - name: Precision
      type: precision
      value: 0.9315520369454066
    - name: Recall
      type: recall
      value: 0.9505217098619994
    - name: F1
      type: f1
      value: 0.9409412744689714
    - name: Accuracy
      type: accuracy
      value: 0.9870194854889033
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# bert-finetuned-ner

This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0284
- Precision: 0.9316
- Recall: 0.9505
- F1: 0.9409
- Accuracy: 0.9870

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0349 | 1.0 | 1756 | 0.0347 | 0.9085 | 0.9362 | 0.9222 | 0.9810 |
| 0.0184 | 2.0 | 3512 | 0.0269 | 0.9281 | 0.9495 | 0.9387 | 0.9869 |
| 0.009 | 3.0 | 5268 | 0.0284 | 0.9316 | 0.9505 | 0.9409 | 0.9870 |

### Framework versions

- Transformers 4.33.3
- Pytorch 2.0.1
- Datasets 2.14.5
- Tokenizers 0.13.3
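A minimal usage sketch (assuming the standard `transformers` token-classification pipeline; the input sentence is illustrative):

```python
from transformers import pipeline

# Minimal sketch: extract CoNLL-2003-style entities (PER/ORG/LOC/MISC) with the fine-tuned checkpoint.
ner = pipeline(
    "token-classification",
    model="michaelsungboklee/bert-finetuned-ner",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entity spans
)
print(ner("Hugging Face is based in New York City."))
```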
null
transformers
token-classification
null
null
null
null
null
null
null
null
null
michaelsungboklee/bert-finetuned-ner
[ -0.6195663213729858, -0.6572340130805969, 0.16549980640411377, 0.1645858734846115, -0.3940754532814026, -0.5659221410751343, -0.22342544794082642, -0.23233100771903992, 0.1625933200120926, 0.35376015305519104, -0.87126624584198, -0.6232948303222656, -0.693265438079834, -0.23202016949653625...
Jaspernl/whisper-small-nl-student
Jaspernl
2023-11-29T19:24:11Z
1
0
null
[ "transformers", "safetensors", "whisper", "automatic-speech-recognition", "nl", "dataset:mozilla-foundation/common_voice_13_0", "endpoints_compatible", "region:us" ]
2023-11-29T19:24:11Z
2023-11-29T07:36:05.000Z
null
null
---
datasets:
- mozilla-foundation/common_voice_13_0
language:
- nl
metrics:
- wer
---

Process:
- Created a distil-whisper student model trained on Common Voice 13 (nl), then fine-tuned it on a Mac M1 Max (32 GB) with the training arguments below.

Results:
- eval_loss: 0.1334582418203354
- eval_wer: 13.924597710391687

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small-student-ft-nl",  # change to a repo name of your choice
    per_device_train_batch_size=16,
    gradient_accumulation_steps=1,  # increase by 2x for every 2x decrease in batch size
    learning_rate=1e-5,
    warmup_steps=500,
    max_steps=4000,
    gradient_checkpointing=True,
    fp16=False,
    evaluation_strategy="steps",
    per_device_eval_batch_size=8,
    predict_with_generate=True,
    generation_max_length=225,
    save_steps=1000,
    eval_steps=1000,
    logging_steps=25,
    report_to=["tensorboard"],
    load_best_model_at_end=True,
    metric_for_best_model="wer",
    greater_is_better=False,
    push_to_hub=True,
    do_train=False,
)
```
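For completeness, a minimal inference sketch (assuming the standard `transformers` ASR pipeline; the audio path is hypothetical, and ffmpeg is needed for decoding):

```python
from transformers import pipeline

# Minimal sketch: transcribe Dutch speech with the fine-tuned student model.
asr = pipeline("automatic-speech-recognition", model="Jaspernl/whisper-small-nl-student")
result = asr("dutch_sample.wav")  # placeholder local audio file; ffmpeg handles decoding/resampling
print(result["text"])
```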
null
transformers
automatic-speech-recognition
null
null
null
null
null
null
null
null
null
Jaspernl/whisper-small-nl-student
[ -0.5179004669189453, -0.7566124796867371, 0.3259013593196869, 0.4493153691291809, -0.21508915722370148, -0.16452574729919434, -0.20388168096542358, -0.0064684562385082245, -0.012348026037216187, 0.5965263247489929, -0.9074670672416687, -0.42031344771385193, -0.8481061458587646, -0.25755202...
wons/mistral-7B-test-v0.3
wons
2023-11-29T08:26:15Z
1
0
null
[ "transformers", "safetensors", "mistral", "text-generation", "endpoints_compatible", "text-generation-inference", "region:us" ]
2023-11-29T08:26:15Z
2023-11-29T08:07:04.000Z
null
null
Entry not found
null
transformers
text-generation
null
null
null
null
null
null
null
null
null
wons/mistral-7B-test-v0.3
[ -0.3227651119232178, -0.22568456828594208, 0.8622261881828308, 0.43461447954177856, -0.5282989740371704, 0.7012965083122253, 0.7915719747543335, 0.0761861652135849, 0.7746025323867798, 0.25632235407829285, -0.7852817177772522, -0.22573819756507874, -0.9104477763175964, 0.5715669393539429, ...
WebraftAI/synapsellm-7b-mistral-v0.2
WebraftAI
2023-11-29T09:54:20Z
1
0
null
[ "transformers", "safetensors", "mistral", "text-generation", "endpoints_compatible", "text-generation-inference", "region:us" ]
2023-11-29T09:54:20Z
2023-11-29T08:12:03.000Z
null
null
# SynapseLLM

SynapseLLM, a significant achievement by WebraftAI, represents a series of large language AI models designed to create robust, generalized, and decentralized information systems. This repository specifically houses the SynapseLLM finetuned version of Mistral. The finetuning was conducted on a custom dataset, albeit limited in scope, focusing on code and general question-answering scenarios. This adaptation showcases the model's versatility and applicability within specific domains, contributing to the broader landscape of AI advancements.

## Model Details

**SynapseLLM:**
- Parameters: 7B
- Learning rate: 2e-4
- Adapter used: QLoRA
- Precision: float16
- Batch size: 32
- Maximum gradient norm: 0.3
- Optimizer: paged_adamw_32bit
- Warmup ratio: 0.03
- Steps trained: 100
- Epochs trained: 1

### Model Description

This is a 7B-parameter, decoder-only, transformer-based model finetuned on chat Q/A and code instructions. It's a preview finetune of Mistral 7B v0.1 on a sample dataset of 140k rows, comprising 73k code rows and 67k general Q/A rows (generated through GPT-4). This is a full model, merged and compiled with the trained adapters, so you can easily load it through transformers.

- **Developed by:** WebraftAI
- **Funded by:** Webraft Cloud
- **Shared by:** WebraftAI
- **Model type:** Decoder-only Transformer
- **Language(s):** English only
- **License:** Apache 2.0
- **Finetuned from model:** Mistral-7b-v0.1
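Since the card states the merged model loads directly through transformers, a minimal loading sketch follows (the prompt is illustrative, and `device_map="auto"` additionally requires the `accelerate` package):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "WebraftAI/synapsellm-7b-mistral-v0.2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

# Illustrative prompt only; the card does not document a required prompt format.
inputs = tokenizer("Write a Python function that reverses a string.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```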
null
transformers
text-generation
null
null
null
null
null
null
null
null
null
WebraftAI/synapsellm-7b-mistral-v0.2
[ -0.33961221575737, -0.5510247945785522, -0.02712506614625454, 0.325157105922699, -0.22110465168952942, -0.4193229675292969, -0.25777482986450195, -0.4105396270751953, 0.03649493679404259, 0.4953845739364624, -0.5674051642417908, -0.5000994205474854, -0.5265488624572754, -0.0406160131096839...
Jarnails1559/Reasoning_model3
Jarnails1559
2023-11-29T08:22:25Z
1
0
null
[ "transformers", "safetensors", "gpt2", "text-generation", "endpoints_compatible", "text-generation-inference", "region:us" ]
2023-11-29T08:22:25Z
2023-11-29T08:21:47.000Z
null
null
Entry not found
null
transformers
text-generation
null
null
null
null
null
null
null
null
null
Jarnails1559/Reasoning_model3
[ -0.3227651119232178, -0.22568456828594208, 0.8622261881828308, 0.43461447954177856, -0.5282989740371704, 0.7012965083122253, 0.7915719747543335, 0.0761861652135849, 0.7746025323867798, 0.25632235407829285, -0.7852817177772522, -0.22573819756507874, -0.9104477763175964, 0.5715669393539429, ...
simoneprete/llama-2-7b-prova10
simoneprete
2023-11-29T09:29:25Z
1
0
null
[ "transformers", "pytorch", "llama", "text-generation", "endpoints_compatible", "text-generation-inference", "region:us" ]
2023-11-29T09:29:25Z
2023-11-29T09:24:26.000Z
null
null
Entry not found
null
transformers
text-generation
null
null
null
null
null
null
null
null
null
simoneprete/llama-2-7b-prova10
[ -0.3227651119232178, -0.22568456828594208, 0.8622261881828308, 0.43461447954177856, -0.5282989740371704, 0.7012965083122253, 0.7915719747543335, 0.0761861652135849, 0.7746025323867798, 0.25632235407829285, -0.7852817177772522, -0.22573819756507874, -0.9104477763175964, 0.5715669393539429, ...
golesheed/whisper-adult-4-dutch
golesheed
2023-11-29T10:45:51Z
1
0
null
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "nl", "base_model:openai/whisper-large-v2", "license:apache-2.0", "endpoints_compatible", "region:us" ]
2023-11-29T10:45:51Z
2023-11-29T09:37:29.000Z
null
null
---
language:
- nl
license: apache-2.0
base_model: openai/whisper-large-v2
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: Whisper Large V2
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# Whisper Large V2

This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5667
- Wer: 18.6599

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 20
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.7642 | 0.55 | 30 | 0.5041 | 20.3865 |
| 0.4158 | 1.09 | 60 | 0.4509 | 26.3583 |
| 0.2209 | 1.64 | 90 | 0.4397 | 23.5070 |
| 0.198 | 2.18 | 120 | 0.4905 | 17.9471 |
| 0.113 | 2.73 | 150 | 0.4729 | 27.2137 |
| 0.0797 | 3.27 | 180 | 0.5215 | 23.5070 |
| 0.0454 | 3.82 | 210 | 0.5143 | 21.5112 |
| 0.0302 | 4.36 | 240 | 0.5637 | 18.8658 |
| 0.0181 | 4.91 | 270 | 0.5667 | 18.6599 |

### Framework versions

- Transformers 4.35.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
null
transformers
automatic-speech-recognition
null
null
null
null
null
null
null
null
null
golesheed/whisper-adult-4-dutch
[ -0.40617167949676514, -0.5474780201911926, 0.1494518369436264, 0.16030658781528473, -0.3301900327205658, -0.4775293469429016, -0.200713649392128, -0.38496509194374084, 0.22804416716098785, 0.3747503459453583, -0.7766842842102051, -0.547177791595459, -0.7250977754592896, -0.3335302770137787...
spokkazo/bert-finetuned-squad
spokkazo
2023-11-29T12:13:02Z
1
0
null
[ "transformers", "safetensors", "bert", "question-answering", "generated_from_trainer", "dataset:squad", "base_model:bert-base-cased", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-11-29T12:13:02Z
2023-11-29T09:50:00.000Z
null
null
---
base_model: bert-base-cased
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-finetuned-squad
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# bert-finetuned-squad

This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- Transformers 4.35.2
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
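A minimal extractive-QA sketch (assuming the standard `transformers` question-answering pipeline; question and context are illustrative):

```python
from transformers import pipeline

# Minimal sketch: SQuAD-style extractive question answering with the fine-tuned checkpoint.
qa = pipeline("question-answering", model="spokkazo/bert-finetuned-squad")
result = qa(
    question="Which base model was fine-tuned?",
    context="This model is a fine-tuned version of bert-base-cased on the squad dataset.",
)
print(result["answer"], result["score"])  # extracted span plus confidence score
```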
null
transformers
question-answering
null
null
null
null
null
null
null
null
null
spokkazo/bert-finetuned-squad
[ -0.6389672756195068, -0.8049171566963196, 0.07425648719072342, 0.2646067440509796, -0.4000951945781708, -0.1921720802783966, -0.2026040405035019, -0.2789323031902313, 0.1718403846025467, 0.3625384569168091, -1.086098074913025, -0.4819914400577545, -0.5172421932220459, -0.11757776886224747,...
augustinLib/new-text-emoji-encoder-MSMARCO
augustinLib
2023-11-29T10:06:05Z
1
0
null
[ "transformers", "safetensors", "bert", "text-classification", "endpoints_compatible", "region:us" ]
2023-11-29T10:06:05Z
2023-11-29T09:54:55.000Z
null
null
Entry not found
null
transformers
text-classification
null
null
null
null
null
null
null
null
null
augustinLib/new-text-emoji-encoder-MSMARCO
[ -0.3227651119232178, -0.22568456828594208, 0.8622261881828308, 0.43461447954177856, -0.5282989740371704, 0.7012965083122253, 0.7915719747543335, 0.0761861652135849, 0.7746025323867798, 0.25632235407829285, -0.7852817177772522, -0.22573819756507874, -0.9104477763175964, 0.5715669393539429, ...
sanjit23/my_awesome_minds_model11
sanjit23
2023-11-29T10:03:01Z
1
0
null
[ "region:us" ]
2023-11-29T10:03:01Z
2023-11-29T10:02:56.000Z
null
null
Entry not found
null
null
null
null
null
null
null
null
null
null
null
null
sanjit23/my_awesome_minds_model11
[ -0.3227651119232178, -0.22568456828594208, 0.8622261881828308, 0.43461447954177856, -0.5282989740371704, 0.7012965083122253, 0.7915719747543335, 0.0761861652135849, 0.7746025323867798, 0.25632235407829285, -0.7852817177772522, -0.22573819756507874, -0.9104477763175964, 0.5715669393539429, ...
tanvirsrbd1/testvalue_t5_model2
tanvirsrbd1
2023-11-29T10:29:08Z
1
0
null
[ "transformers", "safetensors", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
2023-11-29T10:29:08Z
2023-11-29T10:25:25.000Z
null
null
Entry not found
null
transformers
text2text-generation
null
null
null
null
null
null
null
null
null
tanvirsrbd1/testvalue_t5_model2
[ -0.3227651119232178, -0.22568456828594208, 0.8622261881828308, 0.43461447954177856, -0.5282989740371704, 0.7012965083122253, 0.7915719747543335, 0.0761861652135849, 0.7746025323867798, 0.25632235407829285, -0.7852817177772522, -0.22573819756507874, -0.9104477763175964, 0.5715669393539429, ...
kfkas/my_test_LLM_qu
kfkas
2023-11-29T11:03:03Z
1
0
null
[ "transformers", "safetensors", "llama", "text-generation", "endpoints_compatible", "text-generation-inference", "8-bit", "region:us" ]
2023-11-29T11:03:03Z
2023-11-29T10:28:19.000Z
null
null
Entry not found
null
transformers
text-generation
null
null
null
null
null
null
null
null
null
kfkas/my_test_LLM_qu
[ -0.3227651119232178, -0.22568456828594208, 0.8622261881828308, 0.43461447954177856, -0.5282989740371704, 0.7012965083122253, 0.7915719747543335, 0.0761861652135849, 0.7746025323867798, 0.25632235407829285, -0.7852817177772522, -0.22573819756507874, -0.9104477763175964, 0.5715669393539429, ...
sronger/ko-llm-llama-2-7b-chat3
sronger
2023-11-29T10:31:26Z
1
0
null
[ "transformers", "safetensors", "llama", "text-generation", "endpoints_compatible", "text-generation-inference", "region:us" ]
2023-11-29T10:31:26Z
2023-11-29T10:28:53.000Z
null
null
Entry not found
null
transformers
text-generation
null
null
null
null
null
null
null
null
null
sronger/ko-llm-llama-2-7b-chat3
[ -0.3227651119232178, -0.22568456828594208, 0.8622261881828308, 0.43461447954177856, -0.5282989740371704, 0.7012965083122253, 0.7915719747543335, 0.0761861652135849, 0.7746025323867798, 0.25632235407829285, -0.7852817177772522, -0.22573819756507874, -0.9104477763175964, 0.5715669393539429, ...
bumblebee-testing/tiny-random-LlamaForCausalLM
bumblebee-testing
2023-11-29T10:54:13Z
1
0
null
[ "transformers", "safetensors", "llama", "text-generation", "endpoints_compatible", "text-generation-inference", "region:us" ]
2023-11-29T10:54:13Z
2023-11-29T10:49:47.000Z
null
null
Entry not found
null
transformers
text-generation
null
null
null
null
null
null
null
null
null
bumblebee-testing/tiny-random-LlamaForCausalLM
[ -0.3227651119232178, -0.22568456828594208, 0.8622261881828308, 0.43461447954177856, -0.5282989740371704, 0.7012965083122253, 0.7915719747543335, 0.0761861652135849, 0.7746025323867798, 0.25632235407829285, -0.7852817177772522, -0.22573819756507874, -0.9104477763175964, 0.5715669393539429, ...
lunarlist/pos_thai_phayathai
lunarlist
2023-11-29T11:43:58Z
1
0
null
[ "transformers", "tensorboard", "safetensors", "camembert", "token-classification", "generated_from_trainer", "base_model:clicknext/phayathaibert", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-11-29T11:43:58Z
2023-11-29T10:53:12.000Z
null
null
---
base_model: clicknext/phayathaibert
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: pos_thai_phayathai
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# pos_thai_phayathai

This model is a fine-tuned version of [clicknext/phayathaibert](https://huggingface.co/clicknext/phayathaibert) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0795
- Precision: 0.9578
- Recall: 0.9598
- F1: 0.9588
- Accuracy: 0.9725

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0882 | 1.0 | 7344 | 0.0845 | 0.9585 | 0.9563 | 0.9574 | 0.9713 |
| 0.0709 | 2.0 | 14688 | 0.0795 | 0.9578 | 0.9598 | 0.9588 | 0.9725 |

### Framework versions

- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
null
transformers
token-classification
null
null
null
null
null
null
null
null
null
lunarlist/pos_thai_phayathai
[ -0.42520609498023987, -0.48370203375816345, 0.2242790311574936, 0.183343306183815, -0.4367271661758423, -0.4280092716217041, -0.027155734598636627, -0.20100493729114532, 0.2550516128540039, 0.3960239589214325, -0.6972379684448242, -0.693586528301239, -0.7276862859725952, -0.057073578238487...
golesheed/whisper-adult-5-dutch
golesheed
2023-11-29T11:50:51Z
1
0
null
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "nl", "base_model:openai/whisper-large-v2", "license:apache-2.0", "endpoints_compatible", "region:us" ]
2023-11-29T11:50:51Z
2023-11-29T10:59:10.000Z
null
null
---
language:
- nl
license: apache-2.0
base_model: openai/whisper-large-v2
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: Whisper Large V2
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# Whisper Large V2

This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5325
- Wer: 19.1011

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 20
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.7863 | 0.55 | 30 | 0.4925 | 19.5245 |
| 0.3976 | 1.09 | 60 | 0.4438 | 19.8339 |
| 0.2297 | 1.64 | 90 | 0.4309 | 19.6873 |
| 0.1959 | 2.18 | 120 | 0.4648 | 19.0686 |
| 0.1101 | 2.73 | 150 | 0.4456 | 24.7517 |
| 0.0794 | 3.27 | 180 | 0.4842 | 21.1041 |
| 0.0457 | 3.82 | 210 | 0.4844 | 20.8761 |
| 0.0266 | 4.36 | 240 | 0.5217 | 19.1337 |
| 0.0156 | 4.91 | 270 | 0.5325 | 19.1011 |

### Framework versions

- Transformers 4.35.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
null
transformers
automatic-speech-recognition
null
null
null
null
null
null
null
null
null
golesheed/whisper-adult-5-dutch
[ -0.3733152747154236, -0.5191617608070374, 0.16442988812923431, 0.1460394561290741, -0.3420674800872803, -0.5297784209251404, -0.2296709567308426, -0.39699649810791016, 0.17990565299987793, 0.35931697487831116, -0.7933940887451172, -0.5608739256858826, -0.7069234251976013, -0.33220669627189...
TheBoefOfWallstreet/baseline_v1
TheBoefOfWallstreet
2023-11-29T11:15:21Z
1
0
null
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "endpoints_compatible", "region:us" ]
2023-11-29T11:15:21Z
2023-11-29T11:01:42.000Z
null
null
Entry not found
null
transformers
text-classification
null
null
null
null
null
null
null
null
null
TheBoefOfWallstreet/baseline_v1
[ -0.3227651119232178, -0.22568456828594208, 0.8622261881828308, 0.43461447954177856, -0.5282989740371704, 0.7012965083122253, 0.7915719747543335, 0.0761861652135849, 0.7746025323867798, 0.25632235407829285, -0.7852817177772522, -0.22573819756507874, -0.9104477763175964, 0.5715669393539429, ...
oopsung/llama2-7b-n-test-v1
oopsung
2023-11-29T11:11:54Z
1
0
null
[ "transformers", "pytorch", "llama", "text-generation", "endpoints_compatible", "text-generation-inference", "region:us" ]
2023-11-29T11:11:54Z
2023-11-29T11:04:52.000Z
null
null
Entry not found
null
transformers
text-generation
null
null
null
null
null
null
null
null
null
oopsung/llama2-7b-n-test-v1
[ -0.3227651119232178, -0.22568456828594208, 0.8622261881828308, 0.43461447954177856, -0.5282989740371704, 0.7012965083122253, 0.7915719747543335, 0.0761861652135849, 0.7746025323867798, 0.25632235407829285, -0.7852817177772522, -0.22573819756507874, -0.9104477763175964, 0.5715669393539429, ...
BangorAI/mistral-7b-cy-tokenizer-train-6
BangorAI
2023-11-29T11:32:53Z
1
0
null
[ "transformers", "safetensors", "mistral", "text-generation", "endpoints_compatible", "text-generation-inference", "region:us" ]
2023-11-29T11:32:53Z
2023-11-29T11:30:05.000Z
null
null
Entry not found
null
transformers
text-generation
null
null
null
null
null
null
null
null
null
BangorAI/mistral-7b-cy-tokenizer-train-6
[ -0.3227651119232178, -0.22568456828594208, 0.8622261881828308, 0.43461447954177856, -0.5282989740371704, 0.7012965083122253, 0.7915719747543335, 0.0761861652135849, 0.7746025323867798, 0.25632235407829285, -0.7852817177772522, -0.22573819756507874, -0.9104477763175964, 0.5715669393539429, ...
Ruoyao/new_graph_852_llama2_7b
Ruoyao
2023-11-29T11:46:07Z
1
0
null
[ "transformers", "pytorch", "llama", "text-generation", "endpoints_compatible", "text-generation-inference", "region:us" ]
2023-11-29T11:46:07Z
2023-11-29T11:38:48.000Z
null
null
Entry not found
null
transformers
text-generation
null
null
null
null
null
null
null
null
null
Ruoyao/new_graph_852_llama2_7b
[ -0.32276463508605957, -0.2256849706172943, 0.8622266054153442, 0.4346153736114502, -0.5282987952232361, 0.7012974619865417, 0.7915722131729126, 0.07618652284145355, 0.7746030688285828, 0.2563217282295227, -0.7852814793586731, -0.22573867440223694, -0.9104479551315308, 0.571567177772522, ...
sjShashank/gujrati-news
sjShashank
2023-11-29T11:57:13Z
1
0
null
[ "transformers", "tensorboard", "safetensors", "mt5", "text2text-generation", "generated_from_trainer", "base_model:GiordanoB/mT5_multilingual_XLSum-finetuned-summarization", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
2023-11-29T11:57:13Z
2023-11-29T11:55:44.000Z
null
null
--- base_model: GiordanoB/mT5_multilingual_XLSum-finetuned-summarization tags: - generated_from_trainer model-index: - name: gujrati-news results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gujrati-news This model is a fine-tuned version of [GiordanoB/mT5_multilingual_XLSum-finetuned-summarization](https://huggingface.co/GiordanoB/mT5_multilingual_XLSum-finetuned-summarization) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 12 - total_train_batch_size: 12 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
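A minimal usage sketch for this summarizer, assuming the repository id `sjShashank/gujrati-news` from this entry; since the card reports no evaluation, the generation settings below are illustrative.

```python
from transformers import pipeline

# Load the fine-tuned mT5 summarization model from the Hub.
summarizer = pipeline("summarization", model="sjShashank/gujrati-news")

article = "..."  # a Gujarati news article goes here
summary = summarizer(article, max_length=64, min_length=8, do_sample=False)
print(summary[0]["summary_text"])
```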
null
transformers
text2text-generation
null
null
null
null
null
null
null
null
null
sjShashank/gujrati-news
[ -0.5326552987098694, -0.6929565072059631, 0.006627360358834267, 0.16925491392612457, -0.5441884398460388, -0.31822669506073, -0.3546731472015381, -0.3360591530799866, 0.24789448082447052, 0.36106985807418823, -0.6981576681137085, -0.7031874656677246, -0.699639081954956, 0.05971180647611618...
golesheed/whisper-adult-6-dutch
golesheed
2023-11-29T13:00:51Z
1
0
null
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "nl", "base_model:openai/whisper-large-v2", "license:apache-2.0", "endpoints_compatible", "region:us" ]
2023-11-29T13:00:51Z
2023-11-29T12:09:16.000Z
null
null
--- language: - nl license: apache-2.0 base_model: openai/whisper-large-v2 tags: - generated_from_trainer metrics: - wer model-index: - name: Whisper Large V2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Large V2 This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4773 - Wer: 16.9537 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 20 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.7941 | 0.55 | 30 | 0.4484 | 25.2313 | | 0.3986 | 1.09 | 60 | 0.4232 | 18.1499 | | 0.229 | 1.64 | 90 | 0.4025 | 18.2297 | | 0.2159 | 2.18 | 120 | 0.4018 | 18.9474 | | 0.1149 | 2.73 | 150 | 0.4024 | 22.5837 | | 0.0793 | 3.27 | 180 | 0.4673 | 16.2360 | | 0.0471 | 3.82 | 210 | 0.4381 | 17.1770 | | 0.0295 | 4.36 | 240 | 0.4645 | 17.0654 | | 0.0176 | 4.91 | 270 | 0.4773 | 16.9537 | ### Framework versions - Transformers 4.35.0.dev0 - Pytorch 2.1.0+cu121 - Datasets 2.14.6 - Tokenizers 0.14.1
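For this sibling checkpoint, a lower-level loading sketch that skips the pipeline wrapper; it assumes the repository id `golesheed/whisper-adult-6-dutch` from this entry and a recent `transformers` release whose Whisper `generate` accepts `language`/`task` keywords. The silent audio array is a stand-in for real 16 kHz speech.

```python
import numpy as np
from transformers import WhisperForConditionalGeneration, WhisperProcessor

processor = WhisperProcessor.from_pretrained("golesheed/whisper-adult-6-dutch")
model = WhisperForConditionalGeneration.from_pretrained("golesheed/whisper-adult-6-dutch")

# Placeholder input: one second of silence at 16 kHz; replace with real audio.
audio = np.zeros(16000, dtype=np.float32)
inputs = processor(audio, sampling_rate=16000, return_tensors="pt")

ids = model.generate(inputs.input_features, language="dutch", task="transcribe")
print(processor.batch_decode(ids, skip_special_tokens=True)[0])
```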
null
transformers
automatic-speech-recognition
null
null
null
null
null
null
null
null
null
golesheed/whisper-adult-6-dutch
[ -0.39810535311698914, -0.5424078702926636, 0.16774053871631622, 0.15669682621955872, -0.3528575301170349, -0.4779854416847229, -0.20462962985038757, -0.3821936845779419, 0.21092364192008972, 0.37380245327949524, -0.7866466641426086, -0.5507341027259827, -0.7200629115104675, -0.329420864582...
TheBloke/psyonic-cetacean-20B-GPTQ
TheBloke
2023-11-29T13:58:30Z
1
0
null
[ "transformers", "safetensors", "llama", "text-generation", "storywriting", "text adventure", "not-for-all-audiences", "base_model:jebcarter/psyonic-cetacean-20B", "license:other", "autotrain_compatible", "text-generation-inference", "4-bit", "region:us" ]
2023-11-29T13:58:30Z
2023-11-29T12:09:38.000Z
null
null
--- base_model: jebcarter/psyonic-cetacean-20B inference: false license: other license_name: microsoft-research-license model_creator: Jeb Carter model_name: Psyonic Cetacean 20B model_type: llama prompt_template: 'Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ' quantized_by: TheBloke tags: - storywriting - text adventure - not-for-all-audiences --- <!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Psyonic Cetacean 20B - GPTQ - Model creator: [Jeb Carter](https://huggingface.co/jebcarter) - Original model: [Psyonic Cetacean 20B](https://huggingface.co/jebcarter/psyonic-cetacean-20B) <!-- description start --> # Description This repo contains GPTQ model files for [Jeb Carter's Psyonic Cetacean 20B](https://huggingface.co/jebcarter/psyonic-cetacean-20B). Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them. These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/). <!-- description end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/psyonic-cetacean-20B-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/psyonic-cetacean-20B-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/psyonic-cetacean-20B-GGUF) * [Jeb Carter's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/jebcarter/psyonic-cetacean-20B) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Alpaca ``` Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ``` <!-- prompt-template end --> <!-- licensing start --> ## Licensing The creator of the source model has listed its license as `other`, and this quantization has therefore used that same license. As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. 
I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly. In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [Jeb Carter's Psyonic Cetacean 20B](https://huggingface.co/jebcarter/psyonic-cetacean-20B). <!-- licensing end --> <!-- README_GPTQ.md-compatible clients start --> ## Known compatible clients / servers These GPTQ models are known to work in the following inference servers/webuis. - [text-generation-webui](https://github.com/oobabooga/text-generation-webui) - [KoboldAI United](https://github.com/henk717/koboldai) - [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui) - [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) This may not be a complete list; if you know of others, please let me know! <!-- README_GPTQ.md-compatible clients end --> <!-- README_GPTQ.md-provided-files start --> ## Provided files, and GPTQ parameters Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements. Each separate quant is in a different branch. See below for instructions on fetching from different branches. Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers. <details> <summary>Explanation of GPTQ parameters</summary> - Bits: The bit size of the quantised model. - GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value. - Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now. - Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy. - GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s). - Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences. - ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama and Mistral models in 4-bit. </details> | Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc | | ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- | | [main](https://huggingface.co/TheBloke/psyonic-cetacean-20B-GPTQ/tree/main) | 4 | None | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 10.52 GB | Yes | 4-bit, with Act Order. No group size, to lower VRAM requirements. 
| | [gptq-4bit-128g-actorder_True](https://huggingface.co/TheBloke/psyonic-cetacean-20B-GPTQ/tree/gptq-4bit-128g-actorder_True) | 4 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 10.89 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. | | [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/psyonic-cetacean-20B-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 12.04 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. | | [gptq-3bit-128g-actorder_True](https://huggingface.co/TheBloke/psyonic-cetacean-20B-GPTQ/tree/gptq-3bit-128g-actorder_True) | 3 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 8.41 GB | No | 3-bit, with group size 128g and act-order. Higher quality than 128g-False. | | [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/psyonic-cetacean-20B-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 20.35 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. | | [gptq-3bit-32g-actorder_True](https://huggingface.co/TheBloke/psyonic-cetacean-20B-GPTQ/tree/gptq-3bit-32g-actorder_True) | 3 | 32 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 9.51 GB | No | 3-bit, with group size 32g and act-order. Highest quality 3-bit option. | | [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/psyonic-cetacean-20B-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 20.80 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. | <!-- README_GPTQ.md-provided-files end --> <!-- README_GPTQ.md-download-from-branches start --> ## How to download, including from branches ### In text-generation-webui To download from the `main` branch, enter `TheBloke/psyonic-cetacean-20B-GPTQ` in the "Download model" box. 
To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/psyonic-cetacean-20B-GPTQ:gptq-4bit-128g-actorder_True` ### From the command line I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` To download the `main` branch to a folder called `psyonic-cetacean-20B-GPTQ`: ```shell mkdir psyonic-cetacean-20B-GPTQ huggingface-cli download TheBloke/psyonic-cetacean-20B-GPTQ --local-dir psyonic-cetacean-20B-GPTQ --local-dir-use-symlinks False ``` To download from a different branch, add the `--revision` parameter: ```shell mkdir psyonic-cetacean-20B-GPTQ huggingface-cli download TheBloke/psyonic-cetacean-20B-GPTQ --revision gptq-4bit-128g-actorder_True --local-dir psyonic-cetacean-20B-GPTQ --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Hugging Face cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a download model. The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`. For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell mkdir psyonic-cetacean-20B-GPTQ HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/psyonic-cetacean-20B-GPTQ --local-dir psyonic-cetacean-20B-GPTQ --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> ### With `git` (**not** recommended) To clone a specific branch with `git`, use a command like this: ```shell git clone --single-branch --branch gptq-4bit-128g-actorder_True https://huggingface.co/TheBloke/psyonic-cetacean-20B-GPTQ ``` Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.) <!-- README_GPTQ.md-download-from-branches end --> <!-- README_GPTQ.md-text-generation-webui start --> ## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui) Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui). It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install. 1. Click the **Model tab**. 
2. Under **Download custom model or LoRA**, enter `TheBloke/psyonic-cetacean-20B-GPTQ`. - To download from a specific branch, enter for example `TheBloke/psyonic-cetacean-20B-GPTQ:gptq-4bit-128g-actorder_True` - see Provided Files above for the list of branches for each option. 3. Click **Download**. 4. The model will start downloading. Once it's finished it will say "Done". 5. In the top left, click the refresh icon next to **Model**. 6. In the **Model** dropdown, choose the model you just downloaded: `psyonic-cetacean-20B-GPTQ` 7. The model will automatically load, and is now ready for use! 8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right. - Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`. 9. Once you're ready, click the **Text Generation** tab and enter a prompt to get started! <!-- README_GPTQ.md-text-generation-webui end --> <!-- README_GPTQ.md-use-from-tgi start --> ## Serving this model from Text Generation Inference (TGI) It's recommended to use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0` Example Docker parameters: ```shell --model-id TheBloke/psyonic-cetacean-20B-GPTQ --port 3000 --quantize gptq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096 ``` Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later): ```shell pip3 install huggingface-hub ``` ```python from huggingface_hub import InferenceClient endpoint_url = "https://your-endpoint-url-here" prompt = "Tell me about AI" prompt_template=f'''Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ''' client = InferenceClient(endpoint_url) response = client.text_generation(prompt, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.95, top_k=40, repetition_penalty=1.1) print(f"Model output: {response}") ``` <!-- README_GPTQ.md-use-from-tgi end --> <!-- README_GPTQ.md-use-from-python start --> ## Python code example: inference from this GPTQ model ### Install the necessary packages Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later. ```shell pip3 install --upgrade transformers optimum # If using PyTorch 2.1 + CUDA 12.x: pip3 install --upgrade auto-gptq # or, if using PyTorch 2.1 + CUDA 11.x: pip3 install --upgrade auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/ ``` If you are using PyTorch 2.0, you will need to install AutoGPTQ from source. Likewise if you have problems with the pre-built wheels, you should try building from source: ```shell pip3 uninstall -y auto-gptq git clone https://github.com/PanQiWei/AutoGPTQ cd AutoGPTQ git checkout v0.5.1 pip3 install . ``` ### Example Python code ```python from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline model_name_or_path = "TheBloke/psyonic-cetacean-20B-GPTQ" # To use a different branch, change revision # For example: revision="gptq-4bit-128g-actorder_True" model = AutoModelForCausalLM.from_pretrained(model_name_or_path, device_map="auto", trust_remote_code=False, revision="main") tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True) prompt = "Tell me about AI" prompt_template=f'''Below is an instruction that describes a task. 
Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ''' print("\n\n*** Generate:") input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda() output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512) print(tokenizer.decode(output[0])) # Inference can also be done using transformers' pipeline print("*** Pipeline:") pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=512, do_sample=True, temperature=0.7, top_p=0.95, top_k=40, repetition_penalty=1.1 ) print(pipe(prompt_template)[0]['generated_text']) ``` <!-- README_GPTQ.md-use-from-python end --> <!-- README_GPTQ.md-compatibility start --> ## Compatibility The files provided are tested to work with Transformers. For non-Mistral models, AutoGPTQ can also be used directly. [ExLlama](https://github.com/turboderp/exllama) is compatible with Llama and Mistral models in 4-bit. Please see the Provided Files table above for per-file compatibility. For a list of clients/servers, please see "Known compatible clients / servers", above. <!-- README_GPTQ.md-compatibility end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. 
Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> # Original model card: Jeb Carter's Psyonic Cetacean 20B ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6459a451abdbb77c4c6d8258/uNoKlBulkRF3mCoMgetGs.png) --- Presenting the FP16 files for Psyonic-Cetacean-20B! This is an experimental Llama2-based stack merge based on the models and recipe below: - [KoboldAI/PsyFighter-2-13b](https://huggingface.co/KoboldAI/Psyfighter-2-13B) - [microsoft/Orca-2-13b](https://huggingface.co/microsoft/Orca-2-13b) ```yaml slices: - sources: - model: Orca2flat layer_range: [0, 16] - sources: - model: /KoboldAI/Psyfighter-2-13B (FP16 not yet available) layer_range: [8, 24] - sources: - model: Orca2flat layer_range: [17, 32] - sources: - model: /KoboldAI/Psyfighter-2-13B (FP16 not yet available) layer_range: [25, 40] merge_method: passthrough dtype: float16 ``` Note: while we did run an inverted merge the output was not satisfactory and will not be released. We first flatted the additional ChatML vocabulary tokens out of Orca-2-13B, then performed a stack merge with Psyfighter-2-13B. The results surprised us with their vividness, freshness of prose, obedience to instruction prompting, and formatting cohesion. This model is focused on storywriting and text adventure, with a side order of Assistant and Chat functionality. Like its ancestor Psyfighter-2 this model will function better if you let it improvise and riff on your concepts rather than feeding it an excess of detail. Additionally, either the removal of the ChatML vocab or the stack merging process itself has resulted in not only an uncensored model but an actively anti-censored model, so please be aware that this model can and will kill you during adventures or output NSFW material if prompted accordingly. During testing, the model exhibited an especially strong affinity for science fiction and space opera writing, while handling fantasy elements quite well and horror elements slightly less so. Refer to the Psyfighter-2 model card for best prompting practices. Despite that, we have tested the model out to 16000 context via Rope scaling and the model does not drive towards NSFW on its own. It will follow your tone and style very well. Please enjoy, and if you encounter anything exciting or weird, please reach out to me at [jebcarter@pm.me]. Special thanks as always to the KoboldAI crew who provided the mergebox, testing, and feedback on this model.
null
transformers
text-generation
null
null
null
null
null
null
null
null
null
TheBloke/psyonic-cetacean-20B-GPTQ
[ -0.6712871193885803, -0.5944963693618774, 0.37370195984840393, 0.1371312290430069, -0.45139211416244507, -0.23024387657642365, -0.04785601422190666, -0.5250054001808167, 0.1524381935596466, 0.5875493884086609, -0.6030194163322449, -0.5470088720321655, -0.3794720768928528, -0.10685446858406...
rntc/pubmedbert-bigbio_blurb-bc5disease
rntc
2023-11-29T13:31:43Z
1
0
null
[ "transformers", "safetensors", "bert", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-11-29T13:31:43Z
2023-11-29T12:10:03.000Z
null
null
Entry not found
null
transformers
token-classification
null
null
null
null
null
null
null
null
null
rntc/pubmedbert-bigbio_blurb-bc5disease
[ -0.32276493310928345, -0.2256845235824585, 0.8622258305549622, 0.43461519479751587, -0.5282987356185913, 0.7012961506843567, 0.7915714979171753, 0.076186403632164, 0.774602472782135, 0.2563222646713257, -0.7852815389633179, -0.22573848068714142, -0.910447895526886, 0.5715668797492981, -0...
mjcarleb/llama_7B_priscilla-fine-tuned
mjcarleb
2023-11-29T12:43:59Z
1
0
null
[ "transformers", "pytorch", "llama", "text-generation", "endpoints_compatible", "text-generation-inference", "region:us" ]
2023-11-29T12:43:59Z
2023-11-29T12:37:23.000Z
null
null
Entry not found
null
transformers
text-generation
null
null
null
null
null
null
null
null
null
mjcarleb/llama_7B_priscilla-fine-tuned
[ -0.32276493310928345, -0.2256845235824585, 0.8622258305549622, 0.43461519479751587, -0.5282987356185913, 0.7012961506843567, 0.7915714979171753, 0.076186403632164, 0.774602472782135, 0.2563222646713257, -0.7852815389633179, -0.22573848068714142, -0.910447895526886, 0.5715668797492981, -0...
theblackhacker/cono_s1.6.3
theblackhacker
2023-11-29T12:49:53Z
1
0
null
[ "transformers", "safetensors", "mistral", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
2023-11-29T12:49:53Z
2023-11-29T12:47:38.000Z
null
null
Entry not found
null
transformers
text-generation
null
null
null
null
null
null
null
null
null
theblackhacker/cono_s1.6.3
[ -0.32276493310928345, -0.2256845235824585, 0.8622258305549622, 0.43461519479751587, -0.5282987356185913, 0.7012961506843567, 0.7915714979171753, 0.076186403632164, 0.774602472782135, 0.2563222646713257, -0.7852815389633179, -0.22573848068714142, -0.910447895526886, 0.5715668797492981, -0...
nestormauro/model
nestormauro
2023-11-29T13:45:05Z
1
0
null
[ "diffusers", "tensorboard", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "lora", "base_model:CompVis/stable-diffusion-v1-4", "license:creativeml-openrail-m", "region:us" ]
2023-11-29T13:45:05Z
2023-11-29T13:27:58.000Z
null
null
--- license: creativeml-openrail-m base_model: CompVis/stable-diffusion-v1-4 instance_prompt: a photo of sks dog tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - lora inference: true --- # LoRA DreamBooth - nestormauro/model These are LoRA adaptation weights for CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/). You can find some example images below. ![img_0](./image_0.png) ![img_1](./image_1.png) ![img_2](./image_2.png) ![img_3](./image_3.png) LoRA for the text encoder was enabled: False.
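A minimal generation sketch for these adapter weights, assuming a recent `diffusers` release with `load_lora_weights` and a CUDA GPU; the prompt builds on the instance prompt from the card.

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the base model, then attach the DreamBooth LoRA weights from this repo.
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("nestormauro/model")

# "a photo of sks dog" is the trained trigger phrase.
image = pipe("a photo of sks dog in a bucket", num_inference_steps=30).images[0]
image.save("sks_dog.png")
```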
null
diffusers
text-to-image
null
null
null
null
null
null
null
null
null
nestormauro/model
[ -0.13312336802482605, -0.4324726164340973, 0.44975748658180237, 0.24428372085094452, -0.7101303339004517, 0.14844770729541779, 0.2744590640068054, -0.18913693726062775, 0.921059787273407, 0.5278290510177612, -0.4430214762687683, -0.4745023846626282, -0.6899975538253784, -0.1351330727338791...
ErnestBeckham/phi-1_5-new-summarizer
ErnestBeckham
2023-11-29T14:41:56Z
1
0
null
[ "transformers", "tensorboard", "safetensors", "phi", "text-generation", "generated_from_trainer", "custom_code", "base_model:microsoft/phi-1_5", "license:other", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-11-29T14:41:56Z
2023-11-29T13:58:46.000Z
null
null
--- license: other base_model: microsoft/phi-1_5 tags: - generated_from_trainer model-index: - name: phi-1_5-new-summarizer results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # phi-1_5-new-summarizer This model is a fine-tuned version of [microsoft/phi-1_5](https://huggingface.co/microsoft/phi-1_5) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - training_steps: 1000 ### Training results ### Framework versions - Transformers 4.35.0 - Pytorch 2.0.0 - Datasets 2.1.0 - Tokenizers 0.14.1
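A minimal generation sketch for this fine-tune, assuming the repository id `ErnestBeckham/phi-1_5-new-summarizer` from this entry; phi-1_5 checkpoints of this vintage ship custom modeling code, hence `trust_remote_code=True`, and the prompt format is a guess since the card does not document it.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "ErnestBeckham/phi-1_5-new-summarizer"
tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(repo, trust_remote_code=True)

# Assumed instruction-style prompt; adjust to the format used in fine-tuning.
prompt = "Summarize: The quick brown fox jumps over the lazy dog."
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```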
null
transformers
text-generation
null
null
null
null
null
null
null
null
null
ErnestBeckham/phi-1_5-new-summarizer
[ -0.4387205243110657, -0.5022523403167725, -0.020276587456464767, 0.21285277605056763, -0.45948705077171326, -0.5141726732254028, 0.19841210544109344, -0.23589059710502625, 0.33416345715522766, 0.3649996817111969, -0.779889702796936, -0.4732239842414856, -0.6873376369476318, 0.1005302891135...
vinhtran2611/flan-t5-small-finetuned-squad
vinhtran2611
2023-11-29T14:48:09Z
1
0
null
[ "transformers", "tensorboard", "safetensors", "t5", "question-answering", "generated_from_trainer", "dataset:squad", "base_model:google/flan-t5-small", "license:apache-2.0", "endpoints_compatible", "text-generation-inference", "region:us" ]
2023-11-29T14:48:09Z
2023-11-29T14:09:33.000Z
null
null
--- license: apache-2.0 base_model: google/flan-t5-small tags: - generated_from_trainer datasets: - squad model-index: - name: flan-t5-small-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # flan-t5-small-finetuned-squad This model is a fine-tuned version of [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) on the squad dataset. It achieves the following results on the evaluation set: - Loss: 1.8856 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 5.822 | 0.16 | 100 | 5.4541 | | 5.3622 | 0.31 | 200 | 4.8810 | | 4.7972 | 0.47 | 300 | 3.7796 | | 4.0388 | 0.62 | 400 | 3.0339 | | 3.551 | 0.78 | 500 | 2.6939 | | 3.2096 | 0.94 | 600 | 2.4673 | | 2.9578 | 1.09 | 700 | 2.3025 | | 2.8282 | 1.25 | 800 | 2.2245 | | 2.7047 | 1.4 | 900 | 2.1346 | | 2.6184 | 1.56 | 1000 | 2.0558 | | 2.5335 | 1.72 | 1100 | 2.0463 | | 2.478 | 1.87 | 1200 | 2.0043 | | 2.3972 | 2.03 | 1300 | 1.9531 | | 2.3655 | 2.18 | 1400 | 1.9188 | | 2.2355 | 2.34 | 1500 | 1.9123 | | 2.2738 | 2.5 | 1600 | 1.9028 | | 2.2448 | 2.65 | 1700 | 1.8926 | | 2.2777 | 2.81 | 1800 | 1.8893 | | 2.2982 | 2.96 | 1900 | 1.8856 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
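A minimal question-answering sketch for this seq2seq fine-tune, assuming the repository id `vinhtran2611/flan-t5-small-finetuned-squad` from this entry; the `question: ... context: ...` input template is an assumption, since the card does not document the preprocessing.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

repo = "vinhtran2611/flan-t5-small-finetuned-squad"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSeq2SeqLM.from_pretrained(repo)

# SQuAD-style input; the exact template is assumed, not documented in the card.
text = (
    "question: Where is the Eiffel Tower? "
    "context: The Eiffel Tower is a wrought-iron lattice tower in Paris, France."
)
inputs = tokenizer(text, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```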
null
transformers
question-answering
null
null
null
null
null
null
null
null
null
vinhtran2611/flan-t5-small-finetuned-squad
[ -0.6104077100753784, -0.5634669661521912, 0.15149684250354767, 0.08418065309524536, -0.10464008897542953, -0.1349157691001892, -0.04834737256169319, -0.22469620406627655, 0.30025774240493774, 0.3457946479320526, -0.9712090492248535, -0.6147080659866333, -0.6432090401649475, -0.061805311590...
S4sch/zephyr-neural-chat-frankenmerge11b-gguf
S4sch
2023-11-29T14:36:16Z
1
1
null
[ "transformers", "gguf", "mistral", "text-generation", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
2023-11-29T14:36:16Z
2023-11-29T14:10:30.000Z
null
null
--- license: apache-2.0 ---
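Since this card carries only a license, a hedged loading sketch via `llama-cpp-python` may help; the quant filename below is hypothetical, so check the repository's file listing for the actual GGUF names.

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Hypothetical filename: substitute the real GGUF file from the repo.
llm = Llama(
    model_path="zephyr-neural-chat-frankenmerge11b.q4_K_M.gguf",
    n_ctx=4096,
)

out = llm("Tell me about AI", max_tokens=128, temperature=0.7)
print(out["choices"][0]["text"])
```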
null
transformers
text-generation
null
null
null
null
null
null
null
null
null
S4sch/zephyr-neural-chat-frankenmerge11b-gguf
[ -0.12853312492370605, -0.18616832792758942, 0.6529129147529602, 0.494362473487854, -0.19319364428520203, 0.23607414960861206, 0.36071962118148804, 0.05056367814540863, 0.5793655514717102, 0.7400145530700684, -0.6508100032806396, -0.237839937210083, -0.7102250456809998, -0.0478254035115242,...
TheBloke/Iambe-20B-DARE-AWQ
TheBloke
2023-11-29T22:07:02Z
1
0
null
[ "transformers", "safetensors", "llama", "text-generation", "base_model:athirdpath/Iambe-20b-DARE", "license:cc-by-nc-4.0", "autotrain_compatible", "text-generation-inference", "4-bit", "region:us" ]
2023-11-29T22:07:02Z
2023-11-29T14:20:38.000Z
null
null
--- base_model: athirdpath/Iambe-20b-DARE inference: false license: cc-by-nc-4.0 model_creator: LadyBabyloin model_name: Iambe 20B Dare model_type: llama prompt_template: 'Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ' quantized_by: TheBloke --- <!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Iambe 20B Dare - AWQ - Model creator: [LadyBabyloin](https://huggingface.co/athirdpath) - Original model: [Iambe 20B Dare](https://huggingface.co/athirdpath/Iambe-20b-DARE) <!-- description start --> ## Description This repo contains AWQ model files for [LadyBabyloin's Iambe 20B Dare](https://huggingface.co/athirdpath/Iambe-20b-DARE). These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/). ### About AWQ AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings. It is supported by: - [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ - [vLLM](https://github.com/vllm-project/vllm) - Llama and Mistral models only - [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) - [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers - [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code <!-- description end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Iambe-20B-DARE-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Iambe-20B-DARE-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Iambe-20B-DARE-GGUF) * [LadyBabyloin's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/athirdpath/Iambe-20b-DARE) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Alpaca ``` Below is an instruction that describes a task. Write a response that appropriately completes the request. 
### Instruction: {prompt} ### Response: ``` <!-- prompt-template end --> <!-- licensing start --> ## Licensing The creator of the source model has listed its license as `cc-by-nc-4.0`, and this quantization has therefore used that same license. As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly. In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [LadyBabyloin's Iambe 20B Dare](https://huggingface.co/athirdpath/Iambe-20b-DARE). <!-- licensing end --> <!-- README_AWQ.md-provided-files start --> ## Provided files, and AWQ parameters I currently release 128g GEMM models only. The addition of group_size 32 models, and GEMV kernel models, is being actively considered. Models are released as sharded safetensors files. | Branch | Bits | GS | AWQ Dataset | Seq Len | Size | | ------ | ---- | -- | ----------- | ------- | ---- | | [main](https://huggingface.co/TheBloke/Iambe-20B-DARE-AWQ/tree/main) | 4 | 128 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 10.87 GB <!-- README_AWQ.md-provided-files end --> <!-- README_AWQ.md-text-generation-webui start --> ## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui) Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui). It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install. 1. Click the **Model tab**. 2. Under **Download custom model or LoRA**, enter `TheBloke/Iambe-20B-DARE-AWQ`. 3. Click **Download**. 4. The model will start downloading. Once it's finished it will say "Done". 5. In the top left, click the refresh icon next to **Model**. 6. In the **Model** dropdown, choose the model you just downloaded: `Iambe-20B-DARE-AWQ` 7. Select **Loader: AutoAWQ**. 8. Click Load, and the model will load and is now ready for use. 9. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right. 10. Once you're ready, click the **Text Generation** tab and enter a prompt to get started! <!-- README_AWQ.md-text-generation-webui end --> <!-- README_AWQ.md-use-from-vllm start --> ## Multi-user inference server: vLLM Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/). - Please ensure you are using vLLM version 0.2 or later. - When using vLLM as a server, pass the `--quantization awq` parameter. For example: ```shell python3 -m vllm.entrypoints.api_server --model TheBloke/Iambe-20B-DARE-AWQ --quantization awq --dtype auto ``` - When using vLLM from Python code, again set `quantization=awq`. For example: ```python from vllm import LLM, SamplingParams prompts = [ "Tell me about AI", "Write a story about llamas", "What is 291 - 150?", "How much wood would a woodchuck chuck if a woodchuck could chuck wood?", ] prompt_template=f'''Below is an instruction that describes a task. 
Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ''' prompts = [prompt_template.format(prompt=prompt) for prompt in prompts] sampling_params = SamplingParams(temperature=0.8, top_p=0.95) llm = LLM(model="TheBloke/Iambe-20B-DARE-AWQ", quantization="awq", dtype="auto") outputs = llm.generate(prompts, sampling_params) # Print the outputs. for output in outputs: prompt = output.prompt generated_text = output.outputs[0].text print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}") ``` <!-- README_AWQ.md-use-from-vllm start --> <!-- README_AWQ.md-use-from-tgi start --> ## Multi-user inference server: Hugging Face Text Generation Inference (TGI) Use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0` Example Docker parameters: ```shell --model-id TheBloke/Iambe-20B-DARE-AWQ --port 3000 --quantize awq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096 ``` Example Python code for interfacing with TGI (requires [huggingface-hub](https://github.com/huggingface/huggingface_hub) 0.17.0 or later): ```shell pip3 install huggingface-hub ``` ```python from huggingface_hub import InferenceClient endpoint_url = "https://your-endpoint-url-here" prompt = "Tell me about AI" prompt_template=f'''Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ''' client = InferenceClient(endpoint_url) response = client.text_generation(prompt, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.95, top_k=40, repetition_penalty=1.1) print(f"Model output: ", response) ``` <!-- README_AWQ.md-use-from-tgi end --> <!-- README_AWQ.md-use-from-python start --> ## Inference from Python code using Transformers ### Install the necessary packages - Requires: [Transformers](https://huggingface.co/docs/transformers) 4.35.0 or later. - Requires: [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) 0.1.6 or later. ```shell pip3 install --upgrade "autoawq>=0.1.6" "transformers>=4.35.0" ``` Note that if you are using PyTorch 2.0.1, the above AutoAWQ command will automatically upgrade you to PyTorch 2.1.0. If you are using CUDA 11.8 and wish to continue using PyTorch 2.0.1, instead run this command: ```shell pip3 install https://github.com/casper-hansen/AutoAWQ/releases/download/v0.1.6/autoawq-0.1.6+cu118-cp310-cp310-linux_x86_64.whl ``` If you have problems installing [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) using the pre-built wheels, install it from source instead: ```shell pip3 uninstall -y autoawq git clone https://github.com/casper-hansen/AutoAWQ cd AutoAWQ pip3 install . ``` ### Transformers example code (requires Transformers 4.35.0 and later) ```python from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer model_name_or_path = "TheBloke/Iambe-20B-DARE-AWQ" tokenizer = AutoTokenizer.from_pretrained(model_name_or_path) model = AutoModelForCausalLM.from_pretrained( model_name_or_path, low_cpu_mem_usage=True, device_map="cuda:0" ) # Using the text streamer to stream output one token at a time streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True) prompt = "Tell me about AI" prompt_template=f'''Below is an instruction that describes a task. Write a response that appropriately completes the request. 
### Instruction: {prompt} ### Response: ''' # Convert prompt to tokens tokens = tokenizer( prompt_template, return_tensors='pt' ).input_ids.cuda() generation_params = { "do_sample": True, "temperature": 0.7, "top_p": 0.95, "top_k": 40, "max_new_tokens": 512, "repetition_penalty": 1.1 } # Generate streamed output, visible one token at a time generation_output = model.generate( tokens, streamer=streamer, **generation_params ) # Generation without a streamer, which will include the prompt in the output generation_output = model.generate( tokens, **generation_params ) # Get the tokens from the output, decode them, print them token_output = generation_output[0] text_output = tokenizer.decode(token_output) print("model.generate output: ", text_output) # Inference is also possible via Transformers' pipeline from transformers import pipeline pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, **generation_params ) pipe_output = pipe(prompt_template)[0]['generated_text'] print("pipeline output: ", pipe_output) ``` <!-- README_AWQ.md-use-from-python end --> <!-- README_AWQ.md-compatibility start --> ## Compatibility The files provided are tested to work with: - [text-generation-webui](https://github.com/oobabooga/text-generation-webui) using `Loader: AutoAWQ`. - [vLLM](https://github.com/vllm-project/vllm) version 0.2.0 and later. - [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) version 1.1.0 and later. - [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later. - [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) version 0.1.1 and later. <!-- README_AWQ.md-compatibility end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. 
**Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> # Original model card: LadyBabyloin's Iambe 20B Dare <p align="center"><img src="https://i.ibb.co/pbpJHpk/iambe-sml.png"/><font size="6"> <b>Iambe-20b-DARE</b> </font></p> ## Description and Role Named after a charming daughter of Echo and Pan in Greek myth, Iambe-20b-DARE is a [DARE](https://github.com/yule-BUAA/MergeLM) merge building on my recent experiments. Iambe is intended to have the best realistically possible understanding of anatomy and of a scene's state for a 20b merge, while remaining personable and authentic in "voice". ## Prompting and Context Iambe-20b-DARE uses Alpaca formatting, and has an effective context size of 4096 tokens. This model is uncensored, and the output/deployment of this model is the responsibility of the user. ## Method and Hypothesis Based on my extended vanilla model [BigLlama](https://huggingface.co/athirdpath/BigLlama-20b), this adds elements of: - [NeverSleep/Noromaid-20b-v0.1.1](https://huggingface.co/NeverSleep/Noromaid-20b-v0.1.1) - Addded to adapt the excellent writing and "soul" that come from the datasets backing this model. - [athirdpath/Eileithyia-20b](https://huggingface.co/athirdpath/Eileithyia-20b) - Added at low weight and density to capture anatomical data and its relation to fiction without the model's other... quirks. - [athirdpath/CleverGirl-20b-Blended](https://huggingface.co/athirdpath/CleverGirl-20b-Blended) - Added to capture CleverGirl's problem-solving abilities. 
<p align="center"><font size="7"> <b>Examples</b></font> <p align="center"><font size="3"> <i>(q5_k_m GGUF quant, Ooba textgen, Midnight Enigma preset for KISS reasons)</i></font> <p align="center"><font size="5"> Reasoning and Recall - STEM</font><img src="https://i.ibb.co/YDkRD43/base.png"/> <p align="center"><font size="5"> Reasoning and Recall - Social Science</font><img src="https://i.ibb.co/L0kc3pb/Screenshot-2023-11-29-024037.png"/> <p align="center"><font size="5"> Ethics and Prompt Interpretation</font><img src="https://i.ibb.co/b5vH6jD/Screenshot-2023-11-29-040408.png"/> <p align="center"><font size="5">Role-Playing, Complex Card</font> <p align="center"><font size="3">SillyTavern, Roleplay instruct preset, just Min_P 0.1 and Temp 1.2</font><img src="https://i.ibb.co/NrKNn2j/Screenshot-2023-11-29-051923.png"/> <p align="center"><font size="6"><b><a href="https://i.ibb.co/m5G0ZVp/Screenshot-2023-11-29-004705.png">!!NSFW!! - Erotica Writing Example - !!NSFW!!</font></a></b></p> ## Testing and Conclusions VERY Impressed so far, concrete data coming eventually. Does still have some confusion (at q5_k_m), but has instantly become my daily driver. ## Recipe merge_method: dare_ties - base_model: athirdpath/BigLlama-20b - model: NeverSleep/Noromaid-20b-v0.1.1 weight: 0.41 / density: 0.50 - model: athirdpath/Eileithyia-20b weight: 0.18 / density: 0.30 - model: athirdpath/CleverGirl-20b-Blended weight: 0.41 / density: 0.50 int8_mask: true dtype: bfloat16 ## Gratitude Thanks to brucethemoose for the recipe. Thanks to Undi95 and IkariDev at NeverSleep for Noromaid, as well as lots of inspiration. Thanks to Sao10K for [half of](https://huggingface.co/Sao10K/Mythical-Destroyer-V2-L2-13B) CleverGirl. ## Geneology Credits - Models and LoRAs, 3 levels deep - athirdpath/BigLlama-20b - TheBloke/Llama-2-13B-fp16 - iamshnoo/alpaca-2-13b-english - NeverSleep/Noromaid-20b-v0.1.1 - Aesir Private RP dataset - HuggingFaceH4/no_robots - athirdpath/CleverGirl-20b-Blended - athirdpath/Orca-2-13b-Alpaca-Uncensored - microsoft/Orca-2-13b - athirdpath/Orca-2-13b-Alpaca-Uncensored-LORA - Sao10K/Mythical-Destroyer-V2-L2-13B - TheBloke/Llama-2-13B-fp16 - Gryphe/MythoMax-L2-13b - totally-not-an-llm/PuddleJumper-13b - jondurbin/airoboros-l2-13b-2.1 - rombodawg/LosslessMegaCoder-llama2-13b-mini - The-Face-Of-Goonery/Chronos-Beluga-v2-13bfp16 - athirdpath/Eileithyia-20b - athirdpath/Harmonia-20B - Undi95/Emerhyst-20B - Undi95/Emerald-13B - Undi95/Amethyst-13B - Undi95/MXLewd-L2-20B - Undi95/MLewd-L2-13B-v2-3 - Undi95/ReMM-v2.1-L2-13B - Xwin-LM/Xwin-LM-13B-V0.1 - Undi95/Lewd-Sydney-20B - Free_Sydney_V2_13b_HF - Undi95/Xwin-MLewd-13B-V0.2 - lemonilia/LimaRP-Llama2-13B-v3-EXPERIMENT - athirdpath/Nethena-20b-Glued - NeverSleep/Nethena-20B - athirdpath/Nethena-20b-Glue-LORA - tavtav/Rose-20B - CalderaAI/13B-Thorns-l2 - NeverSleep/Noromaid-13b-v0.1.0 - Undi95/PsyMedRP-v1-20B - jondurbin/airoboros-l2-13b-3.0 - ehartford/Samantha-1.11-13b - Xwin-LM/Xwin-LM-13B-V0.1 - chaoyi-wu/MedLLaMA_13B - migtissera/Synthia-13B-v1.2 - NeverSleep/Noromaid-20b-v0.1.1 - Aesir Private RP dataset - HuggingFaceH4/no_robots - Undi95/U-Amethyst-20B - Xwin-LM/Xwin-LM-13B-V0.1 - The-Face-Of-Goonery/Huginn-13b-FP16 - zattio770/120-Days-of-LORA-v2-13B - lemonilia/LimaRP-Llama2-13B-v3-EXPERIMENT - Undi95/Unholy-v1-12L-13B - athirdpath/Eileithyia-20b-LORA Thanks again to everyone involved.
null
transformers
text-generation
null
null
null
null
null
null
null
null
null
TheBloke/Iambe-20B-DARE-AWQ
[ -0.527904748916626, -0.8793637156486511, 0.29545387625694275, 0.2815389931201935, -0.23715472221374512, -0.1657382845878601, 0.09001358598470688, -0.5511037707328796, -0.029346168041229248, 0.45287439227104187, -0.7080507874488831, -0.4967062175273895, -0.31978073716163635, -0.052298184484...
umm-maybe/StarCoder-1B-Cthulhu-Mythos
umm-maybe
2023-11-29T14:56:43Z
1
0
null
[ "transformers", "safetensors", "gpt_bigcode", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
2023-11-29T14:56:43Z
2023-11-29T14:54:42.000Z
null
null
Entry not found
null
transformers
text-generation
null
null
null
null
null
null
null
null
null
umm-maybe/StarCoder-1B-Cthulhu-Mythos
[ -0.3227648437023163, -0.2256842851638794, 0.8622258305549622, 0.4346150755882263, -0.5282991528511047, 0.7012966275215149, 0.7915719151496887, 0.07618607580661774, 0.774602472782135, 0.25632160902023315, -0.7852813005447388, -0.22573809325695038, -0.910448431968689, 0.571567177772522, -0...
AI-Explorer-92/ppo-LunarLander-v2
AI-Explorer-92
2023-11-29T15:06:33Z
1
0
null
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
2023-11-29T15:06:33Z
2023-11-29T15:04:29.000Z
null
null
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
    metrics:
    - type: mean_reward
      value: 271.81 +/- 17.54
      name: mean_reward
      verified: false
---

# **PPO** Agent playing **LunarLander-v2**

This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

A minimal loading sketch (the checkpoint filename is an assumption, following the usual sb3 naming convention):

```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Filename is assumed from the usual "<algo>-<env>.zip" convention; adjust if the repo differs.
checkpoint = load_from_hub("AI-Explorer-92/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
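A hedged follow-on sketch, not part of the original card: evaluating the loaded policy with stable-baselines3's built-in helper, assuming a recent sb3 that uses gymnasium.

```python
import gymnasium as gym
from stable_baselines3.common.evaluation import evaluate_policy

# LunarLander-v2 requires the box2d extra: pip install "gymnasium[box2d]"
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```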
null
stable-baselines3
reinforcement-learning
null
null
null
null
null
null
null
null
null
AI-Explorer-92/ppo-LunarLander-v2
[ -0.003174722194671631, -0.3944118320941925, 0.24817678332328796, 0.3390541076660156, -0.08787582069635391, 0.04007984697818756, 0.5000530481338501, -0.1760784089565277, 0.28882232308387756, 0.9444825649261475, -0.6269250512123108, -0.5120341181755066, -0.4980955421924591, -0.27938348054885...
shleeeee/mistral-ko-7b-tech
shleeeee
2023-11-29T15:45:15Z
1
0
null
[ "transformers", "safetensors", "mistral", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
2023-11-29T15:45:15Z
2023-11-29T15:35:30.000Z
null
null
Entry not found
null
transformers
text-generation
null
null
null
null
null
null
null
null
null
shleeeee/mistral-ko-7b-tech
[ -0.3227648437023163, -0.2256842851638794, 0.8622258305549622, 0.4346150755882263, -0.5282991528511047, 0.7012966275215149, 0.7915719151496887, 0.07618607580661774, 0.774602472782135, 0.25632160902023315, -0.7852813005447388, -0.22573809325695038, -0.910448431968689, 0.571567177772522, -0...
jhnschy/swin-peft-model
jhnschy
2023-11-29T15:42:42Z
1
0
null
[ "peft", "arxiv:1910.09700", "base_model:microsoft/swin-large-patch4-window12-384-in22k", "region:us" ]
2023-11-29T15:42:42Z
2023-11-29T15:42:31.000Z
null
null
--- library_name: peft base_model: microsoft/swin-large-patch4-window12-384-in22k --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure ### Framework versions - PEFT 0.6.2
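Since the "How to Get Started" section above is empty, here is a minimal, hypothetical loading sketch. It assumes the adapter targets image classification on the Swin base model named in the card header; the task head is an assumption on our part, not something the card confirms.

```python
from transformers import AutoImageProcessor, AutoModelForImageClassification
from peft import PeftModel

base_id = "microsoft/swin-large-patch4-window12-384-in22k"
processor = AutoImageProcessor.from_pretrained(base_id)
base = AutoModelForImageClassification.from_pretrained(base_id)

# Attach the PEFT adapter (this repo) on top of the base model; task head assumed.
model = PeftModel.from_pretrained(base, "jhnschy/swin-peft-model")
model.eval()
```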
null
peft
null
null
null
null
null
null
null
null
null
null
jhnschy/swin-peft-model
[ -0.5982630848884583, -0.556830644607544, 0.43997249007225037, 0.1051170602440834, -0.21972694993019104, -0.3080679476261139, 0.12281554937362671, -0.5639218688011169, 0.07785018533468246, 0.6828545928001404, -0.7460103034973145, -0.6527181267738342, -0.5489067435264587, -0.1354884803295135...